Add files using upload-large-folder tool
- samples/pdfs/1340369.pdf +0 -0
- samples/pdfs/202179.pdf +0 -0
- samples/pdfs/3412641.pdf +0 -0
- samples/pdfs/5134879.pdf +0 -0
- samples/pdfs/6110726.pdf +0 -0
- samples/pdfs/6590205.pdf +0 -0
- samples/sample_metadata.jsonl +100 -50
- samples/texts_merged/1096347.md +814 -0
- samples/texts_merged/1096954.md +849 -0
- samples/texts_merged/1230197.md +278 -0
- samples/texts_merged/1323410.md +232 -0
- samples/texts_merged/1623821.md +22 -0
- samples/texts_merged/1834803.md +97 -0
- samples/texts_merged/1922832.md +540 -0
- samples/texts_merged/203609.md +0 -0
- samples/texts_merged/2126836.md +0 -0
- samples/texts_merged/2177428.md +629 -0
- samples/texts_merged/2234121.md +472 -0
- samples/texts_merged/2251660.md +389 -0
- samples/texts_merged/2531237.md +780 -0
- samples/texts_merged/2565362.md +822 -0
- samples/texts_merged/2753278.md +287 -0
- samples/texts_merged/2796137.md +592 -0
- samples/texts_merged/3395999.md +142 -0
- samples/texts_merged/3611010.md +0 -0
- samples/texts_merged/3863943.md +420 -0
- samples/texts_merged/3975828.md +291 -0
- samples/texts_merged/4150074.md +669 -0
- samples/texts_merged/4385907.md +0 -0
- samples/texts_merged/4515563.md +444 -0
- samples/texts_merged/4694300.md +109 -0
- samples/texts_merged/4729919.md +576 -0
- samples/texts_merged/4742797.md +0 -0
- samples/texts_merged/5573174.md +0 -0
- samples/texts_merged/5577417.md +186 -0
- samples/texts_merged/5640834.md +79 -0
- samples/texts_merged/5687555.md +96 -0
- samples/texts_merged/5963949.md +509 -0
- samples/texts_merged/6274397.md +34 -0
- samples/texts_merged/6422547.md +0 -0
- samples/texts_merged/6708780.md +448 -0
- samples/texts_merged/7113096.md +36 -0
- samples/texts_merged/7346654.md +46 -0
- samples/texts_merged/7421586.md +117 -0
- samples/texts_merged/7548747.md +0 -0
- samples/texts_merged/7621530.md +456 -0
- samples/texts_merged/7693403.md +415 -0
- samples/texts_merged/7856253.md +933 -0
- samples/texts_merged/901380.md +141 -0
- samples/texts_merged/93120.md +835 -0
samples/pdfs/1340369.pdf
ADDED
Binary file (74.5 kB)

samples/pdfs/202179.pdf
ADDED
Binary file (39.2 kB)

samples/pdfs/3412641.pdf
ADDED
Binary file (36 kB)

samples/pdfs/5134879.pdf
ADDED
Binary file (91.1 kB)

samples/pdfs/6110726.pdf
ADDED
Binary file (31.8 kB)

samples/pdfs/6590205.pdf
ADDED
Binary file (8.17 kB)
samples/sample_metadata.jsonl
CHANGED
@@ -1,50 +1,100 @@
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
-{"doc_id": "
+{"doc_id": "6545431", "mean_proba": 0.8814646013908916, "num_pages": 36}
+{"doc_id": "3955960", "mean_proba": 0.9913606899125236, "num_pages": 56}
+{"doc_id": "213962", "mean_proba": 0.9412138696227754, "num_pages": 42}
+{"doc_id": "7332466", "mean_proba": 0.9958903292814892, "num_pages": 12}
+{"doc_id": "6729477", "mean_proba": 0.8684389293193817, "num_pages": 16}
+{"doc_id": "1666824", "mean_proba": 0.9380706186805452, "num_pages": 28}
+{"doc_id": "6189594", "mean_proba": 0.9947413866009032, "num_pages": 14}
+{"doc_id": "6361280", "mean_proba": 0.9950645359662862, "num_pages": 13}
+{"doc_id": "1693876", "mean_proba": 0.9907086342573166, "num_pages": 4}
+{"doc_id": "7156187", "mean_proba": 0.9797802279735434, "num_pages": 29}
+{"doc_id": "7755458", "mean_proba": 0.987359182192729, "num_pages": 13}
+{"doc_id": "4055151", "mean_proba": 0.912624683517676, "num_pages": 26}
+{"doc_id": "1172375", "mean_proba": 0.953002940524708, "num_pages": 11}
+{"doc_id": "1789294", "mean_proba": 0.8442278280854225, "num_pages": 4}
+{"doc_id": "2126836", "mean_proba": 0.8163998981609064, "num_pages": 272}
+{"doc_id": "324098", "mean_proba": 0.989006942510605, "num_pages": 5}
+{"doc_id": "5137227", "mean_proba": 0.989109086804092, "num_pages": 32}
+{"doc_id": "5658873", "mean_proba": 0.9915072917938232, "num_pages": 11}
+{"doc_id": "2932683", "mean_proba": 0.9361854828894138, "num_pages": 16}
+{"doc_id": "5999157", "mean_proba": 0.922621601819992, "num_pages": 15}
+{"doc_id": "2487380", "mean_proba": 0.9654558925401596, "num_pages": 21}
+{"doc_id": "6152053", "mean_proba": 0.8964151733595392, "num_pages": 46}
+{"doc_id": "3246292", "mean_proba": 0.9561361407532412, "num_pages": 17}
+{"doc_id": "647655", "mean_proba": 0.9763310998678209, "num_pages": 32}
+{"doc_id": "3336595", "mean_proba": 0.9988973836104076, "num_pages": 6}
+{"doc_id": "1188587", "mean_proba": 0.8714917524386261, "num_pages": 59}
+{"doc_id": "1378706", "mean_proba": 0.999192284213172, "num_pages": 9}
+{"doc_id": "7878336", "mean_proba": 0.9919242039322852, "num_pages": 12}
+{"doc_id": "668834", "mean_proba": 0.9523984690507252, "num_pages": 3}
+{"doc_id": "2665585", "mean_proba": 0.9995250713825226, "num_pages": 25}
+{"doc_id": "1378764", "mean_proba": 0.9841879036496668, "num_pages": 34}
+{"doc_id": "582263", "mean_proba": 0.9408483675548008, "num_pages": 7}
+{"doc_id": "2889479", "mean_proba": 0.978348558319026, "num_pages": 29}
+{"doc_id": "7173360", "mean_proba": 0.9976005894797187, "num_pages": 21}
+{"doc_id": "2947864", "mean_proba": 0.980615821149614, "num_pages": 9}
+{"doc_id": "2384710", "mean_proba": 0.9535730459860392, "num_pages": 14}
+{"doc_id": "841018", "mean_proba": 0.9902740854483384, "num_pages": 13}
+{"doc_id": "3880484", "mean_proba": 0.8303749095648527, "num_pages": 16}
+{"doc_id": "6159994", "mean_proba": 0.9970970955159928, "num_pages": 45}
+{"doc_id": "565481", "mean_proba": 0.9996201992034912, "num_pages": 4}
+{"doc_id": "5725464", "mean_proba": 0.9706398715314112, "num_pages": 76}
+{"doc_id": "2660370", "mean_proba": 0.9425864967153124, "num_pages": 368}
+{"doc_id": "4283718", "mean_proba": 0.8407226204872131, "num_pages": 2}
+{"doc_id": "4961582", "mean_proba": 0.991776500429426, "num_pages": 21}
+{"doc_id": "6038087", "mean_proba": 0.9867547584904564, "num_pages": 27}
+{"doc_id": "1640880", "mean_proba": 0.8685442879796028, "num_pages": 4}
+{"doc_id": "47713", "mean_proba": 0.9748652529331944, "num_pages": 31}
+{"doc_id": "218831", "mean_proba": 0.8872458606194227, "num_pages": 39}
+{"doc_id": "2710881", "mean_proba": 0.9833595033954172, "num_pages": 34}
+{"doc_id": "4742797", "mean_proba": 0.9487975366064348, "num_pages": 512}
+{"doc_id": "4054627", "mean_proba": 0.8568866426746051, "num_pages": 24}
+{"doc_id": "3863109", "mean_proba": 0.9898869842290878, "num_pages": 14}
+{"doc_id": "4767451", "mean_proba": 0.966353714466095, "num_pages": 2}
+{"doc_id": "6284605", "mean_proba": 0.9863201938569546, "num_pages": 24}
+{"doc_id": "1546286", "mean_proba": 0.9824597297645196, "num_pages": 41}
+{"doc_id": "5963949", "mean_proba": 0.8951789796352386, "num_pages": 10}
+{"doc_id": "3975828", "mean_proba": 0.9917463935338534, "num_pages": 13}
+{"doc_id": "4729919", "mean_proba": 0.9977401705349194, "num_pages": 17}
+{"doc_id": "7336068", "mean_proba": 0.9684559280673662, "num_pages": 12}
+{"doc_id": "1834803", "mean_proba": 0.9965955689549446, "num_pages": 4}
+{"doc_id": "6759244", "mean_proba": 0.924437294403712, "num_pages": 30}
+{"doc_id": "2753278", "mean_proba": 0.9987588660283522, "num_pages": 11}
+{"doc_id": "3441871", "mean_proba": 0.9961503624916076, "num_pages": 10}
+{"doc_id": "1768104", "mean_proba": 0.8594257831573486, "num_pages": 16}
+{"doc_id": "2251660", "mean_proba": 0.9966943013040644, "num_pages": 19}
+{"doc_id": "3395999", "mean_proba": 0.9834167063236235, "num_pages": 5}
+{"doc_id": "5577417", "mean_proba": 0.967733658850193, "num_pages": 4}
+{"doc_id": "5640834", "mean_proba": 0.9983150362968444, "num_pages": 2}
+{"doc_id": "6708780", "mean_proba": 0.9998646552364032, "num_pages": 12}
+{"doc_id": "7113096", "mean_proba": 0.9582548439502716, "num_pages": 1}
+{"doc_id": "2565362", "mean_proba": 0.9888773594911282, "num_pages": 26}
+{"doc_id": "4385907", "mean_proba": 0.8863706297495149, "num_pages": 176}
+{"doc_id": "1623821", "mean_proba": 1.0000049471855164, "num_pages": 1}
+{"doc_id": "7346654", "mean_proba": 0.8726378764425005, "num_pages": 7}
+{"doc_id": "93120", "mean_proba": 0.9986219868063926, "num_pages": 20}
+{"doc_id": "2234121", "mean_proba": 0.993724638223648, "num_pages": 10}
+{"doc_id": "7621530", "mean_proba": 0.9448085086686272, "num_pages": 14}
+{"doc_id": "4150074", "mean_proba": 0.9930534839630129, "num_pages": 10}
+{"doc_id": "6274397", "mean_proba": 0.8951933681964874, "num_pages": 1}
+{"doc_id": "5687555", "mean_proba": 0.8110953032970428, "num_pages": 5}
+{"doc_id": "7856253", "mean_proba": 0.8499215410815345, "num_pages": 27}
+{"doc_id": "7548747", "mean_proba": 0.9723348537006892, "num_pages": 37}
+{"doc_id": "1096954", "mean_proba": 0.9979800879955292, "num_pages": 12}
+{"doc_id": "4515563", "mean_proba": 0.9912579745054244, "num_pages": 10}
+{"doc_id": "1230197", "mean_proba": 0.948458981513977, "num_pages": 5}
+{"doc_id": "203609", "mean_proba": 0.9800784200429916, "num_pages": 40}
+{"doc_id": "1096347", "mean_proba": 0.992400233944257, "num_pages": 24}
+{"doc_id": "7693403", "mean_proba": 0.9032960954834433, "num_pages": 17}
+{"doc_id": "3611010", "mean_proba": 0.978197129182918, "num_pages": 93}
+{"doc_id": "2531237", "mean_proba": 0.9984967932105064, "num_pages": 16}
+{"doc_id": "4694300", "mean_proba": 0.9998620549837748, "num_pages": 3}
+{"doc_id": "6422547", "mean_proba": 0.9989109501379344, "num_pages": 109}
+{"doc_id": "2177428", "mean_proba": 0.905204855969974, "num_pages": 14}
+{"doc_id": "1922832", "mean_proba": 0.9945877194404602, "num_pages": 6}
+{"doc_id": "5573174", "mean_proba": 0.9889477075714814, "num_pages": 38}
+{"doc_id": "901380", "mean_proba": 0.955151192843914, "num_pages": 4}
+{"doc_id": "3863943", "mean_proba": 0.9524745146433512, "num_pages": 18}
+{"doc_id": "2796137", "mean_proba": 0.9830208400885264, "num_pages": 15}
+{"doc_id": "1323410", "mean_proba": 0.971997876962026, "num_pages": 6}
+{"doc_id": "7421586", "mean_proba": 0.9990615844726562, "num_pages": 4}
samples/texts_merged/1096347.md
ADDED
@@ -0,0 +1,814 @@
+
+---PAGE_BREAK---
+
+# Differentially Private Submodular Maximization: Data Summarization in Disguise
+
+(Full version)
+
+Marko Mitrovic, Mark Bun, Andreas Krause, Amin Karbasi
+
+June 12, 2017
+
+## Abstract
+
+How can we extract representative features from a dataset containing sensitive personal information, while providing individual-level privacy guarantees? Many data summarization applications are captured by the general framework of submodular maximization. As a consequence, a wide range of efficient approximation algorithms for submodular maximization have been developed. However, when such applications involve sensitive data about individuals, their privacy concerns are not automatically addressed by these algorithms.
+
+To remedy this problem, we propose a general and systematic study of differentially private submodular maximization. We present privacy-preserving algorithms for both monotone and non-monotone submodular maximization under cardinality, matroid, and $p$-extendible system constraints, with guarantees that are competitive with optimal solutions. Along the way, we analyze a new algorithm for non-monotone submodular maximization under a cardinality constraint, which is the first (even non-privately) to achieve a constant approximation ratio with a linear number of function evaluations. We additionally provide two concrete experiments to validate the efficacy of these algorithms. In the first experiment, we privately solve the facility location problem using a dataset of Uber pickup locations in Manhattan. In the second experiment, we perform private submodular maximization of a mutual information measure to select features relevant to classifying patients by diabetes status.
+
+## 1 Introduction
+
+A set function $f: 2^V \to \mathbb{R}$ is said to be *submodular* if for all sets $S \subseteq T \subseteq V$ and every element $v \in V$ we have $f(S \cup \{v\}) - f(S) \ge f(T \cup \{v\}) - f(T)$. That is, the marginal contribution of any element $v$ to the value of the function $f(S)$ diminishes as the input set $S$ increases. The theory of *submodular maximization* unifies and generalizes diverse problems in combinatorial optimization, including the Max-Cover, Max-Cut, and Facility Location problems. In turn, this theory has recently found numerous applications to problems in machine learning, data science, and artificial intelligence. A few such applications include exemplar-based clustering (Krause & Gomes, 2010), feature selection for classification (Krause & Guestrin, 2005), document and corpus summarization (Lin & Bilmes, 2011; Kirchhoff & Bilmes, 2014; Sipos et al., 2012), crowd teaching (Singla et al., 2014), and influence maximization in social networks (Kempe et al., 2003).
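As a toy illustration of the diminishing-returns property (a sketch added for this summary, not from the paper; the coverage sets below are invented), a set-cover objective is monotone submodular:

```python
# Hypothetical coverage instance: each ground-set element covers some items.
coverage = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
}

def f(S):
    """Coverage objective f(S) = number of items covered by the sets in S."""
    covered = set()
    for v in S:
        covered |= coverage[v]
    return len(covered)

def marginal_gain(S, v):
    """Marginal contribution of element v given the current set S."""
    return f(S | {v}) - f(S)

# Diminishing returns: the gain of "b" shrinks as the base set grows,
# because item 3 is already covered once "a" is in the set.
assert marginal_gain(set(), "b") >= marginal_gain({"a"}, "b")
```

Running the check confirms the submodularity inequality on this instance; the same comparison holds for every pair $S \subseteq T$ here.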
+
+Some of the most compelling use cases for these applications concern sensitive data about individuals (Mirzasoleiman et al., 2016a,b). As a running example, let us consider the specific problem of determining which of a collection of features (e.g. age, height, weight, etc.) are most relevant to a binary classification task (e.g. predicting whether an individual is likely to have diabetes). In this problem, a sensitive training
+---PAGE_BREAK---
+
+set takes the form $D = \{(x_i, y_i)\}_{i=1}^n$ where each individual $i$'s data consists of features $x_{i,1}, \dots, x_{i,m}$ together with a class label $y_i$. The goal is to identify a small subset $S \subseteq [m]$ of features which can then be used to build a good classifier for $y$. Many techniques exist for feature selection, including one based on maximizing a submodular function which captures the mutual information between a subset of features and the class label of interest (Krause & Guestrin, 2005). However, for both legal (e.g. compliance with HIPAA regulations) and ethical reasons, it is important that the selection of relevant features does not compromise the privacy of any individual who has contributed to the training data set. Unfortunately, the theory of submodular maximization does not in itself accommodate such privacy concerns.
+
+To this end, we propose a systematic study of *differentially private submodular maximization* to enable these applications based on submodular maximization, while provably guaranteeing individual-level privacy. The notion of differential privacy (Dwork et al., 2006) offers a strong protection of individual-level privacy. Nevertheless, differential privacy has been shown to permit useful data analysis and machine learning tasks. In a nutshell, the definition formalizes a guarantee that no individual's data should have too significant an effect on the outcome of a computation. We provide the formal definition in Section 2. Such a privacy guarantee is obtained through the introduction of random noise, so private submodular maximization is conceptually related to the problem of submodular maximization in the presence of noise (Cheraghchi, 2012; Hassidim & Singer, 2016).
+
+In this work, we study the following problem under various conditions on the submodular objective function $f$ (monotone vs. non-monotone), and various choices of the constraint $C$ (cardinality, matroid, or $p$-extendible system).
+
+**Problem 1.1.** Given a sensitive dataset $D$ associated to a submodular function $f_D: 2^V \to \mathbb{R}$: Find a subset $S \in C \subseteq 2^V$ that approximately maximizes $f_D(S)$ in a manner that guarantees differential privacy with respect to the input dataset $D$.
+
+An important special case of this problem was studied in prior work of Gupta et al. (2010). They considered the "combinatorial public projects" problem (Papadimitriou et al., 2008), where given a dataset $D = (x_1, \dots, x_n)$, the function $f_D$ takes the particular form $f_D(S) = \frac{1}{n} \sum_{i=1}^n f_{x_i}(S)$ for monotone submodular functions $f_{x_i}: 2^V \to [0, 1]$, and is to be maximized subject to a cardinality constraint $|S| \le k$. We call functions of this form *decomposable*. They presented a simple greedy algorithm, which will be central to our work, together with a tailored analysis which achieves strong accuracy guarantees in this special case.
+
+However, there are many cases of Problem 1.1 which do not fall into the combinatorial public projects framework. For some problems, including feature selection via mutual information, the submodular function $f_D$ of interest depends on the dataset $D$ in ways much more complicated than averaging functions associated to each individual. The focus of our work is on understanding Problem 1.1 in circumstances which capture a broader class of useful applications and constraints in machine learning. We summarize our specific contributions in Section 1.2.
+
+## 1.1 The greedy paradigm
+
+Even without concern for privacy, the problem of submodular maximization poses computational challenges. In particular, exact submodular maximization subject to a cardinality constraint is NP-hard. One of the principal approaches to designing efficient approximation algorithms is to use a greedy strategy (Nemhauser et al., 1978). Consider the problem of maximizing a set function $f(S)$ subject to the cardinality constraint $|S| \le k$. In each of rounds $i = 1, \dots, k$, the basic greedy algorithm constructs $S_i$ from $S_{i-1}$ by adding the element $v_i \in (V \setminus S_{i-1})$ which maximizes the marginal gain $f(S_{i-1} \cup \{v_i\}) - f(S_{i-1})$. Nemhauser et al.
+---PAGE_BREAK---
+
+| | Cardinality | Matroid | $p$-Extendible |
+|---|---|---|---|
+| Comb. Pub. Proj. | $(1 - 1/e)\,\mathrm{OPT} - O(k \log \lvert V \rvert / n)$ (Gupta et al., 2010) | $\frac{1}{2}\,\mathrm{OPT} - O(k \log \lvert V \rvert / n)$ | $\frac{1}{p+1}\,\mathrm{OPT} - O(k \log \lvert V \rvert / n)$ |
+| Monotone | $(1 - 1/e)\,\mathrm{OPT} - O(k^{3/2} \log \lvert V \rvert / n)$ | $\frac{1}{2}\,\mathrm{OPT} - O(k^{3/2} \log \lvert V \rvert / n)$ | $\frac{1}{p+1}\,\mathrm{OPT} - O(k^{3/2} \log \lvert V \rvert / n)$ |
+| Non-monotone | $\frac{1}{e}(1 - 1/e)\,\mathrm{OPT} - O(k^{3/2} \log \lvert V \rvert / n)$ | – | – |
+
+Table 1: Guarantees of expected solution quality for privately maximizing a sensitivity-(1/n) submodular function $f_D$. The parameter *k* represents either a cardinality constraint, or the size of the set returned (for matroid or *p*-extendible system constraints). Full expressions with explicit dependencies on differential privacy parameters $\varepsilon$, $\delta$ appear in the body of the paper.
+
+(1978) famously showed that this algorithm yields a $(1 - 1/e)$-approximation to the optimal value of $f(S)$ whenever $f$ is a monotone submodular function.
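The basic greedy strategy described above can be sketched in a few lines (a non-private sketch added for illustration; the coverage objective is an invented toy, not the paper's benchmark):

```python
def greedy_max(f, ground_set, k):
    """Basic greedy: in each of k rounds, add the element with the
    largest marginal gain f(S + v) - f(S).  For monotone submodular f
    this achieves a (1 - 1/e)-approximation (Nemhauser et al., 1978)."""
    S = set()
    for _ in range(k):
        gains = {v: f(S | {v}) - f(S) for v in ground_set - S}
        if not gains:
            break
        S.add(max(gains, key=gains.get))
    return S

# Toy coverage objective (illustrative data).
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def f(S):
    covered = set()
    for v in S:
        covered |= cover[v]
    return len(covered)

S = greedy_max(f, set(cover), k=2)
```

On this instance the greedy solution covers all six items, which happens to be optimal; in general the guarantee is only the $(1 - 1/e)$ factor.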
+
+In the combinatorial public projects setting, Gupta et al. (2010) showed how to make the greedy algorithm compatible with differential privacy by randomizing the procedure for selecting each $v_i$. This selection procedure is specified by the differentially private exponential mechanism of McSherry & Talwar (2007), which (probabilistically) guarantees that the $v_i$ selected in each round is almost as good as the true marginal gain maximizer. Remarkably, Gupta et al. (2010) show that the cumulative privacy guarantee of the resulting randomized greedy algorithm is not much worse than that of a single run of the exponential mechanism. This analysis is highly tailored to the structure of the combinatorial public projects problem. However, replacing this tailored analysis with the more generic "advanced composition theorem" for differential privacy (Dwork et al., 2010), one still obtains useful results for the more general class of "low-sensitivity" submodular functions.
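One private selection round can be sketched as a generic exponential-mechanism sampler over marginal gains (a sketch under illustrative parameters, not the paper's exact calibration; candidate names and gains are invented):

```python
import math
import random

def exponential_mechanism(gains, eps, sensitivity, rng=random):
    """Sample a candidate v with probability proportional to
    exp(eps * gains[v] / (2 * sensitivity)) -- McSherry & Talwar (2007).
    `gains` maps each candidate to its marginal gain on the current set."""
    candidates = list(gains)
    # Subtract the max score before exponentiating, for numerical stability.
    m = max(gains.values())
    weights = [math.exp(eps * (gains[v] - m) / (2 * sensitivity))
               for v in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

# With a healthy privacy budget, the higher-gain candidate dominates.
rng = random.Random(0)
picks = [exponential_mechanism({"a": 3.0, "b": 0.5}, eps=2.0,
                               sensitivity=1.0, rng=rng)
         for _ in range(200)]
```

Plugging this sampler in place of the exact `argmax` of the greedy round yields the randomized greedy algorithm discussed above.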
+
+## 1.2 Our contributions
+
+Table 1 summarizes the approximation guarantees we obtain for Problem 1.1 under increasingly more general classes of submodular functions $f_D$ (read top to bottom), and increasingly more general types of constraints (read left to right). In each entry, OPT denotes the value of the optimal non-private solution. Below we draw attention to a few particular contributions, including some that are not expressed in Table 1.
+
+**Non-monotone objective functions.** Submodular maximization for non-monotone functions is significantly more challenging than it is for monotone objectives. In particular, the basic greedy algorithm of Nemhauser et al. fails, and cannot guarantee any constant-factor approximation. Several works (Feldman et al., 2017; Mirzasoleiman et al., 2016a; Buchbinder et al., 2014; Feldman et al., 2011) have identified variations of the greedy algorithm that do yield constant-factor approximations for non-monotone objectives. However, it is not clear how to modify any of these algorithms to accommodate differential privacy.
+
+Our starting point is instead the "stochastic greedy" algorithm of Mirzasoleiman et al. (2015), which was originally designed to perform *monotone* submodular maximization in linear time. Drawing ideas from Buchbinder et al. (2014), we give a new analysis of the stochastic greedy algorithm to show that it also gives a $1/e(1 - 1/e)$-approximation for non-monotone submodular functions. To our knowledge, this is the first algorithm making exactly $|V|$ function evaluations which achieves a constant-factor approximation for either monotone or non-monotone objectives. Moreover, it is immediately clear how to use the exponential mechanism to make this algorithm differentially private.
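The stochastic greedy idea can be sketched as follows (a non-private sketch, assuming the per-round sample size $(n/k)\log(1/\varepsilon)$ from Mirzasoleiman et al. (2015); the objective below is an invented toy, and the non-monotone analysis in this work additionally randomizes over a padded ground set):

```python
import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1, rng=random):
    """Each round scores only a random sample of the remaining elements
    instead of all of them, so the total number of function evaluations
    is roughly linear in |V|."""
    n = len(ground_set)
    s = max(1, math.ceil((n / k) * math.log(1 / eps)))  # per-round sample size
    S = set()
    for _ in range(k):
        remaining = list(ground_set - S)
        if not remaining:
            break
        sample = rng.sample(remaining, min(s, len(remaining)))
        S.add(max(sample, key=lambda v: f(S | {v}) - f(S)))
    return S

# Toy coverage objective (illustrative data).
cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1}}

def f(S):
    covered = set()
    for v in S:
        covered |= cover[v]
    return len(covered)

S = stochastic_greedy(f, set(cover), k=2, rng=random.Random(1))
```

Because each round only evaluates marginal gains on a sample, swapping the in-sample `max` for an exponential-mechanism draw privatizes the round directly.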
| 57 |
+
|
| 58 |
+
This phenomenon is analogous to how stochastic variants of gradient descent are more naturally suited to
|
| 59 |
+
---PAGE_BREAK---
|
| 60 |
+
|
| 61 |
+
providing differential privacy than their deterministic counterparts (Song et al., 2013; Bassily et al., 2014). That is, our results illustrate how techniques for making algorithms *fast* are also helpful in making them *privacy-preserving*.
|
| 62 |
+
|
| 63 |
+
**General constraints.** While a cardinality constraint is perhaps the most natural to place on a submodular maximization problem, some machine learning problems, e.g. personalized data summarization (Mirz soleiman et al., 2016a), require the use of more general types of constraints. For instance, one may wish to maximize a submodular function $f(S)$ subject to $S \in \mathcal{I}$ for an arbitrary matroid $\mathcal{I}$, or subject to $S$ be- ing contained in an intersection of $p$ matroids (more generally, a $p$-extendible system). For these types of constraints, the greedy algorithm still yields a constant factor approximation for monotone objective func- tions (Fisher et al., 1978; Jenkyns, 1976; Călinescu et al., 2011). We show in this work that the analysis provided by Călinescu et al. (2011) for matroids and $p$-extendible families can be adapted to handle addi- tional error introduced for differential privacy.
|
| 64 |
+
|
| 65 |
+
**General selection procedures.** For worst-case datasets, the exponential mechanism is optimal within each round of private maximization. However, it may be sub-optimal for datasets enjoying additional structural properties. Fortunately, the greedy framework we use is flexible with regard to the choice of the selection procedure. For instance, one can replace the exponential mechanism in a black-box manner with the “large margin mechanism” of Chaudhuri et al. (2014) to obtain error bounds that replace the explicit dependence on $\log|V|$ in Table 1 with a term that may be significantly smaller for real datasets. We give a slightly simplified analysis of the large margin mechanism, and present it in a manner suitable for greedy algorithms which access the same data set multiple times. (These guarantees are more complicated, but spelled out in Section 5.) For submodular functions exhibiting additional structure, one may also be able to perform each maximization step with the “choosing mechanism” of Beimel et al. (2016) and Bun et al. (2015).
|
| 66 |
+
|
| 67 |
+
# 2 Preliminaries
|
| 68 |
+
|
| 69 |
+
Let $V$ be finite set which we will refer to as the *ground set* and let $X$ be a finite set which we will refer to as the *data universe*. A dataset is an $n$-tuple $D = (x_1, \dots, x_n) \in X^n$. Suppose each dataset $D$ is associated to a set function $f_D : 2^V \to \mathbb{R}$. The manner in which $f_D$ depends on $D$ will be application-specific, but it is assumed that the association between $D$ and $f_D$ is public information.
**Definition 2.1.** A set function $f_D : 2^V \to \mathbb{R}$ is *submodular* if for all sets $S \subseteq T \subseteq V$ and every element $v \in V$, we have $f_D(S \cup \{v\}) - f_D(S) \geq f_D(T \cup \{v\}) - f_D(T)$.
Moreover, if $f_D(S) \leq f_D(T)$ whenever $S \subseteq T$, we say $f_D$ is *monotone*. If for every dataset $D = (x_1, \dots, x_n)$, the function $f_D = \frac{1}{n} \sum_{i=1}^n f_{x_i}$ for monotone submodular functions $f_{x_i}: 2^V \to [0, \lambda]$, we say $f_D$ is *$\lambda$-decomposable*. The problem of maximizing a decomposable submodular function was considered as the “combinatorial public projects problem” by Papadimitriou et al. (2008).
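To make the definitions concrete, here is a small self-contained sketch (toy data and helper names of our own, not from the paper) of a $1$-decomposable coverage objective, together with a brute-force check of the diminishing-returns condition from Definition 2.1:

```python
from itertools import combinations

# Toy lambda-decomposable objective (lam = 1): each user x contributes
# f_x(S) = 1 if the summary S covers one of their interests, else 0,
# and f_D averages the per-user terms.
V = {"a", "b", "c", "d"}
D = [{"a", "b"}, {"b"}, {"c", "d"}]  # one interest set per user

def f_D(S):
    S = set(S)
    return sum(1.0 for x in D if x & S) / len(D)

def is_submodular(f, ground):
    # Check f(S + v) - f(S) >= f(T + v) - f(T) for all S <= T (Definition 2.1).
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(f(S | {v}) - f(S) >= f(T | {v}) - f(T) - 1e-12
               for S in subsets for T in subsets if S <= T for v in ground)
```

Coverage functions of this form are monotone and submodular, so `is_submodular(f_D, V)` holds on this instance.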
We are interested in the problem of approximately maximizing a submodular function subject to differential privacy. The definition of differential privacy relies on the notion of *neighboring* datasets, which are simply tuples $D, D' \in X^n$ that differ in at most one entry. If $D, D'$ are neighboring, we write $D \sim D'$.
**Definition 2.2.** A randomized algorithm $M: X^n \to \mathcal{R}$ satisfies $(\epsilon, \delta)$-differential privacy if for all measurable sets $T \subseteq \mathcal{R}$ and all neighboring datasets $D \sim D'$,
$$ \Pr[M(D) \in T] \leq e^{\epsilon} \Pr[M(D') \in T] + \delta. $$
Differentially private algorithms must be calibrated to the sensitivity of the function of interest with respect to small changes in the input dataset, defined formally as follows.
**Definition 2.3.** The sensitivity of a set function $f_D : 2^V \to \mathbb{R}$ (depending on a dataset $D$) with respect to a constraint $C \subseteq 2^V$ is defined as
$$ \max_{D \sim D'} \max_{S \in C} |f_D(S) - f_{D'}(S)|. $$
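For instance, when $f_D$ is $\lambda$-decomposable, changing one of the $n$ entries changes a single term of the average, each term lying in $[0, \lambda]$, so the sensitivity is at most $\lambda/n$. A brute-force check on a toy instance (data and helper names ours):

```python
from itertools import combinations

V = ["a", "b", "c"]
lam = 1.0  # bound on each per-user term

def f_for(D):
    # f_D = average of per-user coverage terms, each in [0, lam]
    return lambda S: sum(lam for x in D if x & set(S)) / len(D)

def sensitivity(D, Dp):
    # max over all candidate sets of |f_D(S) - f_D'(S)|
    # (Definition 2.3 with C = 2^V)
    fD, fDp = f_for(D), f_for(Dp)
    subsets = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
    return max(abs(fD(S) - fDp(S)) for S in subsets)

D = [{"a"}, {"b"}, {"a", "c"}]
Dp = [{"a"}, {"c"}, {"a", "c"}]  # neighboring dataset: second user replaced
```

On this instance the maximum is attained at $S = \{b\}$, where exactly the swapped user's term changes, giving sensitivity $\lambda/n = 1/3$.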
**Composition of Differential Privacy.** The analyses of our algorithms rely crucially on composition theorems for differential privacy. For a sequence of privacy parameters $\{(\epsilon_i, \delta_i)\}_{i=1}^k$, we informally refer to the $k$-fold adaptive composition of $(\epsilon_i, \delta_i)$-differentially private algorithms as the output of a mechanism $M^*$ that behaves as follows on an input $D$: In each of rounds $i = 1, \dots, k$, the algorithm $M^*$ selects an $(\epsilon_i, \delta_i)$-differentially private algorithm $M_i$, possibly depending on the previous outcomes $M_1(D), \dots, M_{i-1}(D)$ (but not directly on the sensitive dataset $D$ itself), and releases $M_i(D)$. For a formal treatment of adaptive composition, see (Dwork et al., 2010; Dwork & Roth, 2014).
**Theorem 2.4.** (Dwork & Lei, 2009; Dwork et al., 2010; Bun & Steinke, 2016) *The $k$-fold adaptive composition of $(\epsilon_0, \delta_0)$-differentially private algorithms satisfies $(\epsilon, \delta)$-differential privacy where*
1. $\epsilon = k\epsilon_0$ and $\delta = k\delta_0$. (Basic Composition).
2. $\epsilon = \frac{1}{2}k\epsilon_0^2 + \sqrt{2k\log(1/\delta')}\,\epsilon_0$ and $\delta = \delta' + k\delta_0$, for any $\delta' > 0$. (Advanced Composition)
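The two budgets in Theorem 2.4 can be compared numerically (helper names ours); for many rounds with a small per-round $\epsilon_0$, advanced composition spends far less total $\epsilon$:

```python
import math

def basic_composition(k, eps0, delta0):
    # Theorem 2.4, part 1
    return k * eps0, k * delta0

def advanced_composition(k, eps0, delta0, delta_prime):
    # Theorem 2.4, part 2
    eps = 0.5 * k * eps0**2 + math.sqrt(2 * k * math.log(1 / delta_prime)) * eps0
    return eps, delta_prime + k * delta0
```

For example, with $k = 100$ rounds at $\epsilon_0 = 0.01$, basic composition spends $\epsilon = 1$, while advanced composition with $\delta' = 10^{-6}$ spends roughly $\epsilon \approx 0.53$.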
**Exponential Mechanism.** The exponential mechanism (McSherry & Talwar, 2007) is a general primitive for solving discrete optimization problems. Let $q: V \times X^n \to \mathbb{R}$ be a “quality” function measuring how good a solution $v \in V$ is with respect to a dataset $D \in X^n$. We say a quality function $q$ has sensitivity $\lambda$ if for all $v \in V$ and all neighboring datasets $D \sim D'$, we have $|q(v, D) - q(v, D')| \le \lambda$.
**Proposition 2.5.** Let $\epsilon > 0$ and let $q: V \times X^n \to \mathbb{R}$ be a quality function with sensitivity $\lambda$. Define the exponential mechanism as the algorithm which selects every $v \in V$ with probability proportional to $\exp(\epsilon q(v, D)/2\lambda)$.
* The exponential mechanism provides $(\epsilon, 0)$-differential privacy.
* For every $D \in X^n$,
$$ \mathbb{E}[q(\hat{v}, D)] \geq \max_{v \in V} q(v, D) - \frac{2\lambda \ln |V|}{\epsilon}, $$
where $\hat{v}$ is the output of the exponential mechanism on dataset $D$.
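A minimal implementation sketch of the exponential mechanism (function and parameter names ours): sample $v$ with probability proportional to $\exp(\epsilon q(v, D)/2\lambda)$, shifting scores by their maximum for numerical stability.

```python
import math
import random

def exponential_mechanism(candidates, q, D, eps, lam, rng=None):
    rng = rng or random.Random()
    scores = [eps * q(v, D) / (2 * lam) for v in candidates]
    top = max(scores)
    # Subtracting the max score leaves the distribution unchanged
    # but avoids overflow in exp().
    weights = [math.exp(s - top) for s in scores]
    r = rng.uniform(0, sum(weights))
    for v, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return v
    return candidates[-1]
```

As $\epsilon$ grows, the output concentrates on the true maximizer, consistent with the error term $2\lambda \ln|V|/\epsilon$ vanishing in the utility bound above.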
The privacy guarantee and a “with high probability” utility guarantee of the exponential mechanism are due to McSherry & Talwar (2007). A simple proof of the utility guarantee in expectation appears in (Bassily et al., 2016).
# 3 Monotone Submodular Maximization
In this section, we present a variant of the basic greedy algorithm which will enable maximization of monotone submodular functions. This algorithm simply replaces each greedy selection step with a privacy-preserving selection algorithm denoted $\mathcal{O}$. The selection function $\mathcal{O}$ takes as input a quality function $q: U \times X^n \rightarrow \mathbb{R}$ and a dataset $D$, as well as privacy parameters $\epsilon_0, \delta_0$, and outputs an element $u \in U$.
We begin in the simplest case of monotone submodular maximization with a cardinality constraint (Algorithm 1). The algorithm for more general constraints appears in Section 3.1.
Algorithm 1 was already studied by Gupta et al. (2010) in the special case where $f_D$ is decomposable, and $\mathcal{O}$ is the exponential mechanism. We generalize their result to the much broader class of low-sensitivity monotone submodular functions.
**Algorithm 1** Diff. Private Greedy (Cardinality) $\mathcal{G}^\mathcal{O}$

**Input:** Submodular function $f_D: 2^V \to \mathbb{R}$, dataset $D$, cardinality constraint $k$, privacy parameters $\varepsilon_0, \delta_0$
**Output:** Size $k$ subset of $V$

1. Initialize $S_0 = \emptyset$

2. For $i = 1, \dots, k$:

* Define $q_i : (V \setminus S_{i-1}) \times X^n \to \mathbb{R}$ via $q_i(v, \tilde{D}) = f_{\tilde{D}}(S_{i-1} \cup \{v\}) - f_{\tilde{D}}(S_{i-1})$

* Compute $v_i \leftarrow_R \mathcal{O}(q_i, D; \varepsilon_0, \delta_0)$

* Update $S_i \leftarrow (S_{i-1} \cup \{v_i\})$

3. Return $S_k$
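Algorithm 1 can be sketched end-to-end in a few lines. The objective below is a toy decomposable coverage function and the helper names are ours, so this is illustrative rather than the paper's implementation:

```python
import math
import random

def exp_mech(cands, scores, eps, lam, rng):
    # Exponential mechanism: sample c with probability proportional to
    # exp(eps * score / (2 * lam)); shift by the max score for stability.
    top = max(scores)
    w = [math.exp(eps * (s - top) / (2 * lam)) for s in scores]
    r = rng.uniform(0, sum(w))
    for c, wi in zip(cands, w):
        r -= wi
        if r <= 0:
            return c
    return cands[-1]

def private_greedy(V, f_D, k, eps0, lam, seed=0):
    # Algorithm 1: k rounds, each selecting an element via the exponential
    # mechanism applied to the marginal-gain quality function q_i.
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        cands = sorted(set(V) - S)
        gains = [f_D(S | {v}) - f_D(S) for v in cands]
        S.add(exp_mech(cands, gains, eps0, lam, rng))
    return S

# Toy decomposable objective: fraction of users with at least one
# covered interest (illustrative data, not from the paper).
users = [{"a", "b"}, {"c"}, {"a"}, {"d"}]

def f_D(S):
    return sum(1.0 for x in users if x & S) / len(users)
```

With a large per-round budget `eps0` the run behaves nearly greedily, picking the highest-gain element in most rounds.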
**Theorem 3.1.** (Gupta et al., 2010) Suppose $f_D: 2^V \to \mathbb{R}$ is $\lambda$-decomposable (cf. Definition 2.1). Let $\delta > 0$ and let $\varepsilon_0 \ge 0$ be such that $\varepsilon = 2 \cdot \varepsilon_0 \cdot (e-1) \ln(3e/\delta) \le 1$. Then instantiating Algorithm 1 with $\mathcal{O} = \text{EM}$ and parameter $\varepsilon_0 > 0$ provides $(\varepsilon, \delta)$-differential privacy.

Moreover, for every $D \in X^n$,

$$ \mathbb{E}[f_D(S_k)] \ge \left(1 - \frac{1}{e}\right) \text{OPT} - \frac{2\lambda k \ln|V|}{\varepsilon_0} $$

where $S_k \leftarrow_R \mathcal{G}^{\text{EM}}(D)$.
Unfortunately, the privacy analysis of Theorem 3.1 makes essential use of the decomposability of $f_D$, and does not directly generalize to arbitrary submodular functions of low-sensitivity. Replacing the privacy analysis of Gupta et al. (2010) with the Composition Theorem 2.4 instead gives
**Theorem 3.2.** Suppose $f_D: 2^V \to \mathbb{R}$ is monotone and has sensitivity $\lambda$. Then instantiating Algorithm 1 with $\mathcal{O} = \text{EM}$ and parameter $\varepsilon_0 > 0$ provides $(\varepsilon = k\varepsilon_0, \delta = 0)$-differential privacy. It also provides $(\varepsilon, \delta)$-differential privacy for every $\delta > 0$ with $\varepsilon = k\varepsilon_0^2/2 + \varepsilon_0 \cdot \sqrt{2k \ln(1/\delta)}$.

Moreover, for every $D \in X^n$,

$$ \mathbb{E}[f_D(S_k)] \geq \left(1 - \frac{1}{e}\right) \mathrm{OPT} - \frac{2\lambda k \ln|V|}{\varepsilon_0} $$

where $S_k \leftarrow_R \mathcal{G}^{\text{EM}}(D)$.
*Proof.* The privacy guarantee of Theorem 3.2 follows immediately from the $(\varepsilon, 0)$-differential privacy of the exponential mechanism, together with Theorem 2.4.
To simplify notation in the utility proofs in this paper, we suppress the dependence of the submodular function of interest on $D$, i.e. we write $f = f_D$. We also introduce the notation $f_S(T) = f(S \cup T) - f(S)$ to denote the marginal gain by adding $T$ to the set $S$.
To argue that the algorithm achieves good utility, recall that in each step $i$, the exponential mechanism guarantees a solution $v_i$ with
$$ \mathbb{E}[f_{S_{i-1}}(v_i)] \geq \max_{v \in V \setminus S_{i-1}} f_{S_{i-1}}(v) - \alpha \quad (1) $$
where $\alpha = 2\lambda \cdot \ln |V|/\varepsilon_0$.
Let $S^*$ denote any set of size $k$ with $f(S^*) = \text{OPT}$. Below, let us condition on having obtained some set $S_{i-1}$ of elements after the first $i-1$ iterations of our algorithm. Then
$$
\begin{align*}
\mathbb{E}[f_{S_{i-1}}(v_i)] &\geq \max_{v \in V \setminus S_{i-1}} f_{S_{i-1}}(v) - \alpha && \text{(by Condition (1))} \\
&\geq \frac{1}{k} \left( \sum_{v \in S^*} f_{S_{i-1}}(v) \right) - \alpha \\
&\geq \frac{f(S^* \cup S_{i-1}) - f(S_{i-1})}{k} - \alpha && \text{(by submodularity of $f$)} \\
&\geq \frac{\mathrm{OPT} - f(S_{i-1})}{k} - \alpha && \text{(by monotonicity of $f$)}
\end{align*}
$$
We now remove the conditioning on having obtained a specific $S_{i-1}$ by taking the expectation over all choices of such a set. This gives
$$ \mathbb{E}[f_{S_{i-1}}(v_i)] \geq \frac{\mathrm{OPT} - \mathbb{E}[f(S_{i-1})]}{k} - \alpha $$
Rearranging yields
$$ \mathrm{OPT} - \mathbb{E}[f(S_i)] \leq \left(1 - \frac{1}{k}\right) (\mathrm{OPT} - \mathbb{E}[f(S_{i-1})]) + \alpha $$
Recursively applying this bound yields
$$
\begin{align*}
\mathrm{OPT} - \mathbb{E}[f(S_i)] &\leq \left(1 - \frac{1}{k}\right)^i (\mathrm{OPT} - \mathbb{E}[f(S_0)]) + \sum_{j=0}^{i-1} \left(1 - \frac{1}{k}\right)^j \cdot \alpha \\
&\leq \left(1 - \frac{1}{k}\right)^i \mathrm{OPT} + k\alpha.
\end{align*}
$$
Hence, we conclude
$$
\begin{align*}
\mathbb{E}[f(S_k)] &\geq \left[1 - \left(1 - \frac{1}{k}\right)^k\right] \mathrm{OPT} - k\alpha \\
&\geq \left(1 - \frac{1}{e}\right) \mathrm{OPT} - k\alpha.
\end{align*}
$$

$\square$
## 3.1 Matroid and p-Extendible System Constraints
We now show how to extend Algorithm 1 to privately maximize monotone submodular functions subject to more general constraints. To start, we review the definition of a *p*-extendible system. Consider a ground set $V$ and a non-empty downward-closed family of subsets $\mathcal{I} \subseteq 2^V$ (i.e. if $T \in \mathcal{I}$, then $S \in \mathcal{I}$ for every $S \subseteq T$). Such an $\mathcal{I}$ is called a family of *independent sets*. The pair $(V, \mathcal{I})$ is said to be a *p*-extendible system (Mestre, 2006) if for all $S \subset T \in \mathcal{I}$, and $v \in V$ such that $S \cup \{v\} \in \mathcal{I}$, there exists a set $Z \subseteq (T \setminus S)$ such that $|Z| \le p$ and $(T \setminus Z) \cup \{v\} \in \mathcal{I}$. Let $r(\mathcal{I})$ denote the size of the largest independent set in $\mathcal{I}$.
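As a concrete example of a $1$-extendible system, a partition matroid admits a short independence oracle; intersecting $p$ such oracles gives an oracle for an intersection of $p$ matroids, which is in particular $p$-extendible. The groups below are illustrative choices of our own:

```python
# Partition matroid: at most one element may be chosen from each group.
groups = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2}

def independent(S):
    used = set()
    for v in S:
        g = groups[v]
        if g in used:
            return False
        used.add(g)
    return True

def intersect(*oracles):
    # Independence oracle for an intersection of independence systems;
    # for p matroid oracles, the result is p-extendible.
    return lambda S: all(ok(S) for ok in oracles)
```

Note that `independent` is downward closed: removing any element from an independent set keeps it independent, as the definition of an independence family requires.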
The definition of a matroid coincides with that of a 1-extendible system (with rank $r(\mathcal{I})$). For $p \ge 2$, the notion of a *p*-extendible system strictly generalizes that of an intersection of $p$ matroids. A slight modification of Algorithm 1 gives a unified algorithm for privately maximizing a monotone submodular function subject to matroid and *p*-extendible system constraints, presented as Algorithm 2.
We obtain analogues of the results presented for cardinality constraints.
**Theorem 3.3.** Suppose $f_D : 2^V \to \mathbb{R}$ is $\lambda$-decomposable (cf. Definition 2.1). Let $\delta > 0$ and let $\varepsilon_0 \ge 0$ be such that $\varepsilon = 2 \cdot \varepsilon_0 \cdot (e - 1) \ln(3e/\delta) \le 1$. Then instantiating Algorithm 2 with $\mathcal{O} = \text{EM}$ and parameter $\varepsilon_0 > 0$ provides $(\varepsilon, \delta)$-differential privacy. Moreover, for every $D \in X^n$,

$$ \mathbb{E}[f_D(S)] \geq \frac{1}{p+1} \cdot \mathrm{OPT} - \frac{p}{p+1} \left( \frac{2\lambda r(\mathcal{I}) \ln|V|}{\varepsilon_0} \right) $$

where $S \leftarrow_{\mathcal{R}} \mathcal{G}^{\mathrm{EM}}(D)$.
**Algorithm 2** Differentially Private Greedy (p-system) $\mathcal{G}^\mathcal{O}$

**Input:** Submodular function $f_D: 2^V \to \mathbb{R}$, dataset $D$, *p*-extendible family $(V, \mathcal{I})$, privacy parameters $\varepsilon_0, \delta_0$
**Output:** Maximal independent subset of $V$

1. Initialize $S = \emptyset$

2. While $S \in \mathcal{I}$ is not maximal:

* Define $q: (V \setminus S) \times X^n \to \mathbb{R}$ via $q(v, \tilde{D}) = f_{\tilde{D}}(S \cup \{v\}) - f_{\tilde{D}}(S)$

* Compute $v_i \leftarrow_{\mathcal{R}} \mathcal{O}(q, D; \varepsilon_0, \delta_0)$

* Update $S \leftarrow (S \cup \{v_i\})$

3. Return $S$
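A runnable sketch of Algorithm 2 with $\mathcal{O} = \text{EM}$ (helper names ours): repeatedly select, among elements that keep $S$ independent, via the exponential mechanism, until $S$ is maximal.

```python
import math
import random

def exp_mech(cands, scores, eps, lam, rng):
    # Exponential mechanism over the candidate marginal gains.
    top = max(scores)
    w = [math.exp(eps * (s - top) / (2 * lam)) for s in scores]
    r = rng.uniform(0, sum(w))
    for c, wi in zip(cands, w):
        r -= wi
        if r <= 0:
            return c
    return cands[-1]

def private_greedy_psystem(V, f_D, independent, eps0, lam, seed=0):
    rng = random.Random(seed)
    S = set()
    while True:
        cands = sorted(v for v in set(V) - S if independent(S | {v}))
        if not cands:  # S is maximal in the independence system
            return S
        gains = [f_D(S | {v}) - f_D(S) for v in cands]
        S.add(exp_mech(cands, gains, eps0, lam, rng))
```

Passing a cardinality constraint `lambda S: len(S) <= k` as the (1-extendible) independence oracle recovers the behavior of Algorithm 1.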
*Proof.* The privacy guarantee of Theorem 3.3 follows from Gupta et al. (2010).
In our proof of utility, we again suppress the dataset $D$, and use the notation $f_S(T)$ to denote $f(S \cup T) - f(S)$. Our proof applies to any greedy algorithm that, in each round $i$, selects an item $v_i$ with
$$ \mathbb{E}[f_{S_{i-1}}(v_i)] \geq \max_{v:S_{i-1}\cup\{v\}\in\mathcal{I}} f_{S_{i-1}}(v) - \alpha \quad (2) $$
for some error term $\alpha > 0$.
We follow the proof outlined by Călinescu et al. (2011). Fix an optimal solution $O \in \mathcal{I}$, i.e. $f(O) = \mathrm{OPT}$. Let $S_1, \dots, S_r$ be any sequence representing the output of the algorithm, where $r = r(\mathcal{I})$. (If the algorithm terminates in an earlier round $k < r$, then extend its output by setting $S_i = S_k$ for each $i = k + 1, \dots, r$.) For such a sequence, we define a partition $O_1, \dots, O_r$ of $O$ via the following algorithm.
**Algorithm 3** Partition construction algorithm

**Input:** Optimal solution $O$, sets $S_1, \dots, S_r$
**Output:** A partition $O_1, O_2, \dots, O_r$ of $O$

1. Initialize $T_0 = O$

2. For $i = 1, 2, \dots, r$:

(a) If $v_i \in T_{i-1}$, set $O_i = \{v_i\}$;
Else, let $O_i \subseteq T_{i-1}$ be the smallest subset s.t. $((S_{i-1} \cup T_{i-1}) \setminus O_i) \cup \{v_i\} \in \mathcal{I}$

(b) Set $T_i = T_{i-1} \setminus O_i$

3. Return $O_1, O_2, \dots, O_r$
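The construction can be sketched directly, with the smallest valid $O_i$ found by brute force; the independence oracle below (a cardinality constraint, a 1-extendible system) is a toy choice of ours, and *p*-extendibility guarantees a valid $O_i$ of size at most $p$ always exists:

```python
from itertools import combinations

def build_partition(O, picks, independent):
    # picks = (v_1, ..., v_r), the elements chosen by the greedy algorithm.
    T, S, parts = set(O), set(), []
    for v in picks:
        if v in T:
            Oi = {v}
        else:
            base = S | T  # S_{i-1} union T_{i-1}
            # Smallest O_i in T_{i-1} whose removal lets v join while
            # keeping the set independent (exists by p-extendibility).
            Oi = next(set(c)
                      for r in range(len(T) + 1)
                      for c in combinations(sorted(T), r)
                      if independent((base - set(c)) | {v}))
        T -= Oi
        S.add(v)
        parts.append(Oi)
    return parts
```

On a cardinality-2 constraint with `O = {"x", "y"}` and greedy picks `["x", "b"]`, the first pick lies in $T_0$ (so $O_1 = \{x\}$) and the second forces removal of the remaining optimal element (so $O_2 = \{y\}$), exhausting $O$.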
To see that $O_1, \dots, O_r$ is indeed a partition, observe that $S_i \cup T_i \in \mathcal{I}$ and $S_i \cap T_i = \emptyset$ for every $i$. Therefore, it must be the case that $T_r = \emptyset$, since $S_r \cup T_r \in \mathcal{I}$ and $S_r$ is maximal when the algorithm terminates. Hence, the disjoint sets $O_1, \dots, O_r$ do in fact exhaust $O$.
**Lemma 3.4.** For every $i = 1, \dots, r$, we have $\mathbb{E}[f_{S_{i-1}}(v_i)] \geq \frac{1}{p}\mathbb{E}[f_{S_{i-1}}(O_i)] - \alpha$.
Before proving Lemma 3.4, we show how to use it to complete the proof of Theorem 3.3. Recursively applying the lemma shows that for every *i*,
$$
\mathbb{E}[f(S_i)] \geq \frac{1}{p} \sum_{j=1}^{i} \mathbb{E}[f_{S_{j-1}}(O_j)] - i\alpha.
$$
Hence, we obtain
$$
\begin{align*}
\mathbb{E}[f(S_r)] &\geq \frac{1}{p} \sum_{i=1}^{r} \mathbb{E}[f_{S_{i-1}}(O_i)] - r\alpha \\
&\geq \frac{1}{p} \sum_{i=1}^{r} \mathbb{E}[f_{S_r}(O_i)] - r\alpha && (\text{by submodularity}) \\
&\geq \frac{1}{p} \mathbb{E}[f_{S_r}(O)] - r\alpha && (\text{by linearity of expectation and submodularity}) \\
&\geq \frac{1}{p} (f(O) - \mathbb{E}[f(S_r)]) - r\alpha. && (\text{by monotonicity})
\end{align*}
$$
Rearranging gives the desired result $\mathbb{E}[f(S_r)] \geq \frac{1}{p+1} f(O) - \frac{p}{p+1} r\alpha$. $\square$
*Proof of Lemma 3.4.* The partition construction algorithm ensures that every set $O_i$ satisfies $|O_i| \le p$; this follows from the definition of *p*-extendibility and the fact that $S_{i-1} \cup \{v_i\} \in \mathcal{I}$. Moreover, any element in $O_i$ is a candidate for inclusion in $S_i$, since $S_{i-1} \cup \{v\} \in \mathcal{I}$ for every $v \in O_i$.
Below, fix a choice of $i$ and condition on the algorithm's history up to iteration $i-1$. This fixes choices of the sets $S_1, \dots, S_{i-1}$, as well as $T_1, \dots, T_i$ and $O_1, \dots, O_i$.
Then since $\mathbb{E}[f_{S_{i-1}}(v_i)] \ge f_{S_{i-1}}(v) - \alpha$ for every $v \in O_i$, we have
$$
\begin{aligned}
\mathbb{E}[f_{S_{i-1}}(v_i)] &\ge \frac{1}{|O_i|} f_{S_{i-1}}(O_i) - \alpha && (\text{by submodularity}) \\
&\ge \frac{1}{p} f_{S_{i-1}}(O_i) - \alpha.
\end{aligned}
$$
Taking the expectation over the conditioned event gives the asserted result. $\square$
**Theorem 3.5.** Suppose $f_D : 2^V \to \mathbb{R}$ has sensitivity $\lambda$. Then instantiating Algorithm 2 with $\mathcal{O} = \text{EM}$ and parameter $\epsilon_0 > 0$ provides $(\epsilon = r(\mathcal{I})\epsilon_0, \delta = 0)$-differential privacy. It also provides $(\epsilon, \delta)$-differential privacy for every $\delta > 0$ with $\epsilon = r(\mathcal{I})\epsilon_0^2/2 + \epsilon_0 \cdot \sqrt{2r(\mathcal{I})\ln(1/\delta)}$.
Moreover, for every $D \in X^n$,

$$ \mathbb{E}[f_D(S)] \geq \frac{1}{p+1} \cdot \mathrm{OPT} - \frac{p}{p+1} \left( \frac{2\lambda r(\mathcal{I}) \ln|V|}{\epsilon_0} \right) $$

where $S \leftarrow_R \mathcal{G}^{\mathrm{EM}}(D)$.
*Proof.* The proof of privacy follows from Theorem 2.4. The proof of utility is identical to that of the proof of Theorem 3.3. $\square$
# 4 Non-Monotone Submodular Maximization
We now consider the problem of privately maximizing an arbitrary, possibly non-monotone, submodular function under a cardinality constraint. In general, the greedy algorithm presented in Section 3 fails to give any constant-factor approximation. Instead, our algorithm in this section will be based on the “stochastic greedy” algorithm first studied by Mirzasoleiman et al. (2015). In each round, the stochastic greedy algorithm first subsamples a random $\frac{1}{k} \ln(1/\alpha)$ fraction of the ground set for some $\alpha > 0$, and then greedily selects the item from this subsample that maximizes marginal gain. Mirzasoleiman et al. (2015) showed that for a monotone objective function $f$, this algorithm provides a $(1 - 1/e - \alpha)$-approximation to the optimal solution. Their original motivation was to improve the running time of the greedy algorithm: from $O(|V| \cdot k)$ evaluations of the objective function down to $O(|V| \cdot \ln(1/\alpha))$, which is linear in $|V|$.
Unfortunately, the stochastic greedy algorithm does not provide any approximation guarantee for non-monotone submodular functions. Buchbinder et al. (2014) instead proposed a “random greedy” algorithm that, in each iteration, randomly selects one of the $k$ elements with the highest marginal gain. Buchbinder et al. (2014) showed that the random greedy algorithm achieves a $1/e$ approximation to the optimal solution (in expectation), using $k|V|$ function evaluations. However, it is not clear how to adapt this algorithm to accommodate differential privacy, since its analysis has a brittle dependence on the sampling procedure.
We make two main contributions to the analysis of the stochastic greedy and random greedy algorithms. First, we show that running the stochastic greedy algorithm on an exact $\frac{1}{k}$ fraction of the ground set per iteration still gives a (0.468)-approximation for monotone objectives, and moreover, gives a $\frac{1}{e}(1 - 1/e)$-approximation even for non-monotone objectives. Note that this algorithm evaluates the objective function on only $|V|$ elements, and still provides a constant factor approximation guarantee. This makes our “subsample-greedy” algorithm the fastest algorithm for maximizing a general submodular function subject to a cardinality constraint (albeit with slightly worse approximation guarantees). Second, we show that the guarantees of this algorithm are robust to using a randomized greedy selection procedure (e.g. the exponential or large margin mechanism), and hence it can be adapted to ensure differential privacy.
We present the subsample-greedy algorithm as Algorithm 4 below. Assume that $V$ is augmented by enough "dummy elements" to ensure that $|V|/k$ is an integer; each dummy element $u$ is defined so that $f_D(S \cup \{u\}) = f_D(S)$ for every set $S$. We also explicitly account for an additional set $U$ of $k$ dummy elements, and ensure that at least one appears in every subsample.
**Algorithm 4** Diff. Private “Subsample-Greedy” $\text{SG}^{\mathcal{O}}$

**Input:** Submodular function $f_D: 2^V \to \mathbb{R}$, dataset $D$, cardinality constraint $k$, privacy parameters $\varepsilon_0, \delta_0$
**Output:** Size $k$ subset of $V$

1. Initialize $S_0 = \emptyset$, dummy elements $U = \{u^1, \dots, u^k\}$

2. For $i = 1, \dots, k$:

* Sample $V_i \subset V$, a uniformly random subset of size $|V|/k$, and $u_i$, a random dummy element

* Define $q_i : (V_i \cup \{u_i\}) \times X^n \to \mathbb{R}$ via $q_i(v, \tilde{D}) = f_{\tilde{D}}(S_{i-1} \cup \{v\}) - f_{\tilde{D}}(S_{i-1})$

* Compute $v_i \leftarrow_R \mathcal{O}(q_i, D; \varepsilon_0, \delta_0)$

* Update $S_i \leftarrow (S_{i-1} \cup \{v_i\})$

3. Return $S_k$ with all dummy elements removed
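A runnable sketch of Algorithm 4 with $\mathcal{O} = \text{EM}$ (toy modular objective and helper names ours). Dummy elements always have zero marginal gain, and selecting one simply leaves $S$ unchanged, mirroring their removal at the end:

```python
import math
import random

def exp_mech(cands, scores, eps, lam, rng):
    # Exponential mechanism over the subsample's marginal gains.
    top = max(scores)
    w = [math.exp(eps * (s - top) / (2 * lam)) for s in scores]
    r = rng.uniform(0, sum(w))
    for c, wi in zip(cands, w):
        r -= wi
        if r <= 0:
            return c
    return cands[-1]

def subsample_greedy(V, f_D, k, eps0, lam, seed=0):
    rng = random.Random(seed)
    V = sorted(V)
    assert len(V) % k == 0, "pad V with dummy elements so that k divides |V|"
    dummies = [("dummy", j) for j in range(k)]
    S = set()
    for _ in range(k):
        Vi = rng.sample(V, len(V) // k)   # uniform subsample of size |V|/k
        ui = dummies[rng.randrange(k)]    # one random dummy element
        cands = Vi + [ui]
        gains = [0.0 if v == ui else f_D(S | {v}) - f_D(S) for v in cands]
        v = exp_mech(cands, gains, eps0, lam, rng)
        if v != ui:
            S.add(v)
    return S  # dummy elements were never added

w = {"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}
f_D = lambda S: sum(w[v] for v in S)
```

Each round evaluates the objective only on the $|V|/k$ subsampled elements, for $|V|$ evaluations in total, matching the speed claim above.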
**Theorem 4.1.** Suppose $f_D: 2^V \to \mathbb{R}$ has sensitivity $\lambda$. Then instantiating Algorithm 4 with $O = \text{EM}$ provides $(\varepsilon, \delta)$-differential privacy, and for every $D \in X^n$,
$$ \mathbb{E}[f_D(S)] \geq \frac{1}{e} \left(1 - \frac{1}{e}\right) \text{OPT} - \frac{2\lambda k \ln |V|}{\varepsilon} $$
where $S \leftarrow_R SG^{\text{EM}}(D)$. Moreover, if $f_D$ is monotone, then
$$
\begin{aligned}
\mathbb{E}[f_D(S)] &\geq (1 - e^{-(1-\frac{1}{e})}) \text{OPT} - \frac{2\lambda k \ln |V|}{\varepsilon} \\
&\approx 0.468 \text{OPT} - \frac{2\lambda k \ln |V|}{\varepsilon}.
\end{aligned}
$$
The guarantees of Theorem 4.1 are of interest even without privacy. Letting MAX denote the selection procedure which simply outputs the true maximizer (equivalently, which runs the exponential mechanism with $\varepsilon_0 = +\infty$), we obtain the following non-private algorithm for maximizing a submodular function $f_D$:
**Corollary 4.2.** Let $f_D: 2^V \to \mathbb{R}$ be any submodular function. Instantiating Algorithm 4 with $O = \text{MAX}$ gives
$$ \mathbb{E}[f_D(S)] \geq \frac{1}{e} \left(1 - \frac{1}{e}\right) \text{OPT} $$
where $S \leftarrow_R SG^{\text{MAX}}(D)$. Moreover, if $f_D$ is monotone, then
$$ \mathbb{E}[f_D(S)] \geq (1 - e^{-(1-\frac{1}{e})}) \text{OPT} \approx 0.468 \text{OPT}. $$
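The two approximation constants can be checked numerically (variable names ours):

```python
import math

# Approximation factors from Theorem 4.1 / Corollary 4.2.
nonmonotone_factor = (1 / math.e) * (1 - 1 / math.e)
monotone_factor = 1 - math.exp(-(1 - 1 / math.e))
```

This gives about $0.233$ for general submodular objectives and about $0.468$ for monotone ones.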
## 4.1 Proof of Theorem 4.1
The analysis below will work generally for any random selection procedure guaranteeing that in every round $i = 1, \dots, k$,
$$
\mathbb{E}[f_{S_{i-1}}(v_i)] \geq \max_{v \in (V_i \cup \{u_i\})} f_{S_{i-1}}(v) - \alpha
$$
for some parameter $\alpha > 0$. We begin by fixing an optimal solution $S^*$ with $f(S^*) = \text{OPT}$.
**Claim 4.3** ((Buchbinder et al., 2014, Observation 3.2)). For every $i = 0, \dots, k$, we have $\mathbb{E}[f(S^* \cup S_i)] \ge (1 - 1/k)^i \cdot \text{OPT}$.
*Proof.* For every iteration $i = 1, \dots, k$, the subsampling step ensures that every element in $V \cup U$ is selected for inclusion in $S_i$ with probability at most $1/k$. Hence, for every $i = 0, 1, \dots, k$, each element is included in $S_i$ with probability at most $1 - (1 - 1/k)^i$. Define $g: 2^V \to \mathbb{R}$ by $g(S) = f(S^* \cup S)$. Then $g$ is a submodular function, and
$$
\mathbb{E}[f(S^* \cup S_i)] = \mathbb{E}[g(S_i)] \geq (1 - 1/k)^i g(\emptyset) = (1 - 1/k)^i \text{OPT}.
$$
The inequality here follows from the fact that for any submodular function $g$ and any random set $T'$ that contains each element with probability at most $p$ (not necessarily independently), we have $\mathbb{E}[g(T')] \geq (1 - p) \cdot g(\emptyset)$ (Feige et al., 2007). $\square$
**Claim 4.4.**

$$
\mathbb{E}[f_{S_{i-1}}(v_i)] \geq \left(1 - \frac{1}{e}\right) \cdot \left(\frac{\mathbb{E}[f(S^* \cup S_{i-1})] - \mathbb{E}[f(S_{i-1})]}{k}\right) - \alpha.
$$
*Proof.* Begin by conditioning on a fixed choice of the set $S_{i-1}$. Let $M \subseteq (V \cup U)$ denote a set of $k$ items which maximizes the quantity $\sum_{v \in M} f_{S_{i-1}}(v)$. That is, $M$ consists of the $k$ items in $(V \cup U)$ which result in the largest marginal gain for $f_{S_{i-1}}$.
Let $G$ denote the event that the subsampled set $V_i \cup \{u_i\}$ contains at least one element in $M$. Observe that even if $G$ does not occur, we have
$$
\mathbb{E}[f_{S_{i-1}}(v_i)|\bar{G}] \geq f_{S_{i-1}}(u_i) - \alpha \geq -\alpha. \tag{3}
$$
We claim moreover that
$$
\mathbb{E}[f_{S_{i-1}}(v_i)|G] \geq \frac{1}{k} \sum_{v \in M} f_{S_{i-1}}(v) - \alpha. \tag{4}
$$
To see this, sort the items in $V \cup U$ as $v^{(1)}, \dots, v^{(m)}, v^{(m+1)}, \dots, v^{(m+k)}$, where $m = |V|$ and $f_{S_{i-1}}(v^{(j)}) \ge f_{S_{i-1}}(v^{(j+1)})$ for every $j = 1, \dots, m+k-1$. Break ties in such a way that $M = \{v^{(1)}, \dots, v^{(k)}\}$, and that there is some $t \in \{0, \dots, k\}$ such that $v^{(1)}, \dots, v^{(t)} \in V$ and $v^{(t+1)}, \dots, v^{(k)} \in U$ (that is, real elements come before dummy elements).
Let $A_j$ denote the event that $j$ is the smallest index such that $v^{(j)} \in V_i \cup \{u_i\}$. Then the events $A_1, \dots, A_{m+k}$ are mutually exclusive and exhaustive. Moreover, by the definition of $G$, we have $\sum_{j=1}^k \Pr[A_j] = \Pr[G]$.
It is easy to see that
$$
\begin{align*}
\mathrm{Pr}[A_1] &= \frac{1}{k} \\
\mathrm{Pr}[A_j] &= \frac{\binom{m-j}{m/k-1}}{\binom{m}{m/k}} && j = 2, \dots, t, \\
\mathrm{Pr}[A_j] &= \frac{\binom{m-t}{m/k}}{\binom{m}{m/k}} \cdot \frac{1}{k} \cdot \left(1 - \frac{1}{k}\right)^{j-t-1} && j = t+1, \dots, k.
\end{align*}
$$
Moreover, $\Pr[A_j]$ is a decreasing function of $j = 1, \dots, k$. Hence, $\Pr[A_j|G] = \Pr[A_j]/\Pr[G]$ is a decreasing function of $j = 1, \dots, k$ as well. In addition, $\Pr[A_1|G] \ge \Pr[A_1] = 1/k$. This allows us to calculate
$$
\begin{align*}
\mathbb{E} [f_{S_{i-1}}(v_i) | G] &= \sum_{j=1}^{k} \mathbb{E} [f_{S_{i-1}}(v_i) | A_j] \cdot \Pr[A_j | G] \\
&\geq \frac{1}{k} \sum_{j=1}^{k} (f_{S_{i-1}}(v^{(j)}) - \alpha) && (\text{by Chebyshev's sum inequality}) \\
&= \frac{1}{k} \sum_{v \in M} f_{S_{i-1}}(v) - \alpha.
\end{align*}
$$
This establishes the claimed inequality (4).
To estimate $\mathbb{E}[f_{S_{i-1}}(v_i)|G]$, it remains to calculate $\Pr[G]$. Suppose $M$ consists of $t$ elements from $V$ and $k-t$ dummy elements from $U$. Then
$$
\begin{align*}
\Pr[G] &= 1 - \Pr[M \cap (V_i \cup \{u_i\}) = \emptyset] \\
&= 1 - \frac{\binom{m-t}{m/k}}{\binom{m}{m/k}} \cdot \frac{t}{k} \\
&= 1 - \frac{(m - (m/k))(m - (m/k) - 1) \cdots (m - (m/k) - t + 1)}{m(m-1)\cdots(m-t+1)} \cdot \frac{t}{k} \\
&= 1 - \left(1 - \frac{1}{k}\right) \left(1 - \frac{1}{k} \cdot \frac{m}{m-1}\right) \cdots \left(1 - \frac{1}{k} \cdot \frac{m}{m-t+1}\right) \cdot \frac{t}{k} \\
&\ge 1 - \left(1 - \frac{1}{k}\right)^t \cdot \frac{t}{k} \\
&\ge 1 - \frac{te^{-t/k}}{k} \\
&\ge 1 - \frac{1}{e},
\end{align*}
$$
where the last inequality follows from the fact that the function $r(x) = xe^{-x}$ is maximized at $x = 1$, where it takes the value $1/e$.
Let $M'$ be the set containing $S^* \setminus S_{i-1}$ together with enough dummy elements to have size exactly $k$. We conclude that
$$
\begin{align*}
\mathbb{E}[f_{S_{i-1}}(v_i)] &= \mathrm{Pr}[G] \cdot \mathbb{E}[f_{S_{i-1}}(v_i)|G] + (1-\mathrm{Pr}[G]) \cdot \mathbb{E}[f_{S_{i-1}}(v_i)|\bar{G}] \\
&\geq \left(1-\frac{1}{e}\right) \left(\frac{1}{k} \sum_{v \in M} f_{S_{i-1}}(v) - \alpha\right) - \frac{1}{e} \cdot \alpha && (\text{by (4) and (3)}) \\
&\geq \left(1-\frac{1}{e}\right) \left(\frac{1}{k} \sum_{v \in M'} f_{S_{i-1}}(v)\right) - \alpha && (\text{by definition of $M$}) \\
&\geq \left(1-\frac{1}{e}\right) \left(\frac{f(S^* \cup S_{i-1}) - f(S_{i-1})}{k}\right) - \alpha. && (\text{by submodularity})
\end{align*}
$$
|
| 478 |
+
|
| 479 |
+
Unconditioning from $S_{i-1}$ by taking the expectation over its choice proves the claim. $\square$
*Proof of Theorem 4.1.* Let $f$ be any (possibly non-monotone) submodular function. We show by induction that for every $i=0, \dots, k$, we have

$$
\mathbb{E}[f(S_i)] \geq \left(1 - \frac{1}{e}\right) \cdot \frac{i}{k} \cdot \left(1 - \frac{1}{k}\right)^{i-1} \cdot \text{OPT} - i\alpha. \quad (5)
$$

This clearly holds for the base case of $i=0$. Assuming it holds in iteration $i-1$, we calculate

$$
\begin{align*}
\mathbb{E}[f(S_i)] &= \mathbb{E}[f(S_{i-1})] + \mathbb{E}[f_{S_{i-1}}(v_i)] \\
&\geq \mathbb{E}[f(S_{i-1})] + \left(1-\frac{1}{e}\right) \left(\frac{\mathbb{E}[f(S^* \cup S_{i-1})] - \mathbb{E}[f(S_{i-1})]}{k}\right) - \alpha && (\text{by Claim 4.4}) \\
&\geq \mathbb{E}[f(S_{i-1})] + \left(1-\frac{1}{e}\right) \left(\frac{(1-\frac{1}{k})^{i-1} \mathrm{OPT} - \mathbb{E}[f(S_{i-1})]}{k}\right) - \alpha && (\text{by Claim 4.3}) \\
&= \left(1-\frac{1}{k}\right) \mathbb{E}[f(S_{i-1})] + \left(1-\frac{1}{e}\right) \cdot \left(1-\frac{1}{k}\right)^{i-1} \cdot \frac{1}{k} \cdot \mathrm{OPT} - \alpha \\
&\geq \left(1-\frac{1}{k}\right) \left(1-\frac{1}{e}\right) \cdot \frac{i-1}{k} \cdot \left(1-\frac{1}{k}\right)^{i-2} \cdot \mathrm{OPT} + \left(1-\frac{1}{e}\right) \cdot \left(1-\frac{1}{k}\right)^{i-1} \cdot \frac{1}{k} \cdot \mathrm{OPT} - i\alpha && (\text{by the inductive hypothesis}) \\
&= \left(1-\frac{1}{e}\right) \cdot \frac{i}{k} \cdot \left(1-\frac{1}{k}\right)^{i-1} \cdot \mathrm{OPT} - i\alpha.
\end{align*}
$$

Hence, in iteration $k$, we have

$$
\mathbb{E}[f(S_k)] \geq \left(1 - \frac{1}{e}\right) \cdot \left(1 - \frac{1}{k}\right)^{k-1} \cdot \mathrm{OPT} - k\alpha \geq \left(1 - \frac{1}{e}\right) \cdot \frac{1}{e} \cdot \mathrm{OPT} - k\alpha
$$

as we wanted to show.
Now we consider the special case where $f$ is monotone. In this case, we have

$$
\begin{align*}
\mathbb{E}[f_{S_{i-1}}(v_i)] &\geq \left(1 - \frac{1}{e}\right) \left(\frac{\mathbb{E}[f(S^* \cup S_{i-1})] - \mathbb{E}[f(S_{i-1})]}{k}\right) - \alpha && (\text{by Claim 4.4}) \\
&\geq \left(1 - \frac{1}{e}\right) \left(\frac{\mathrm{OPT} - \mathbb{E}[f(S_{i-1})]}{k}\right) - \alpha && (\text{by monotonicity}).
\end{align*}
$$

Rearranging gives us
$$
\mathrm{OPT} - \mathbb{E}[f(S_i)] \le \left(1 - \frac{(1-1/e)}{k}\right) \cdot (\mathrm{OPT} - \mathbb{E}[f(S_{i-1})]) + \alpha.
$$

Recursively applying this bound yields

$$
\begin{align*}
\mathrm{OPT} - \mathbb{E}[f(S_i)] &\le \left(1 - \frac{(1-1/e)}{k}\right)^i (\mathrm{OPT} - \mathbb{E}[f(S_0)]) + \sum_{j=0}^{i-1} \left(1 - \frac{(1-1/e)}{k}\right)^j \alpha \\
&\le \left(1 - \frac{(1-1/e)}{k}\right)^i \mathrm{OPT} + i\alpha.
\end{align*}
$$

Hence, we conclude

$$
\begin{align*}
\mathbb{E}[f(S_k)] &\geq \left[1 - \left(1 - \frac{(1-1/e)}{k}\right)^k\right] \mathrm{OPT} - k\alpha \\
&\geq \left(1 - e^{-(1-1/e)}\right) \mathrm{OPT} - k\alpha.
\end{align*}
$$
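Numerically, the resulting constant is $1 - e^{-(1-1/e)} \approx 0.468$; a one-line check (illustrative only, not from the paper's code):

```python
import math

# The monotone-case approximation ratio: 1 - e^(-(1 - 1/e)) is about 0.468.
ratio = 1 - math.exp(-(1 - 1 / math.e))
```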
# 5 The Large Margin Mechanism

The accuracy guarantee of the exponential mechanism can be pessimistic on datasets where $q(\cdot, D)$ exhibits additional structure. For example, suppose that when the elements of $V$ are sorted so that $q(v_1, D) \ge q(v_2, D) \ge \dots \ge q(v_{|V|}, D)$, there exists an $\ell$ such that $q(v_1, D) \gg q(v_{\ell+1}, D)$. Then only the top $\ell$ ground set items are relevant to the optimization problem, so running the exponential mechanism on these should maintain differential privacy, but with error proportional to $\ln \ell$ rather than to $\ln |V|$. The large margin mechanism of Chaudhuri et al. (2014), like the exponential mechanism, generically solves discrete optimization problems. However, it automatically leverages this additional margin structure whenever it exists. Asymptotically, the error guarantee of the large margin mechanism is always at most that of the exponential mechanism, but can be much smaller when the data exhibits a margin for small $\ell$.

Formally, given a quality function $q: V \times X^n \to \mathbb{R}$ and parameters $\ell \in \mathbb{N}, \gamma > 0$, a dataset $D$ satisfies the $(\ell, \gamma)$-margin condition if $q(v_{\ell+1}, D) < q(v_1, D) - \gamma$.

For each $\ell = 1, \dots, |V|$, define

$$
g_{\ell} = \lambda \cdot \left( 3 + \frac{4 \ln(2\ell/\delta)}{\varepsilon} \right)
$$

$$
G_{\ell} = \frac{8\lambda \ln(2/\delta)}{\varepsilon} + \frac{16\lambda \ln(7\ell^2/\delta)}{\varepsilon} + g_{\ell}.
$$

Recall that the Laplace distribution $\mathrm{Lap}(b)$ is specified by the density function $\frac{1}{2b} \exp(-|x|/b)$, and a sample $Z \sim \mathrm{Lap}(b)$ obeys the tail bound $\Pr[Z > t] = \frac{1}{2} \exp(-t/b)$ for all $t > 0$.

**Proposition 5.1.** Let $\epsilon, \delta > 0$. Consider the large margin mechanism described in Algorithm 5. Then
* Algorithm LMM is $(\epsilon, \delta)$-differentially private.

**Algorithm 5 Large Margin Mechanism (LMM)**

**Input:** Quality function $q: V \times X^n \rightarrow \mathbb{R}$, dataset $D$, privacy parameters $\varepsilon, \delta > 0$
**Output:** Item $\hat{v} \in V$
1. Sort the elements of $V$ so that $q(v_1, D) \ge \dots \ge q(v_{|V|}, D)$

2. Let $m = q(v_1, D) + Z$ for $Z \sim \text{Lap}(8\lambda/\varepsilon)$

3. For $\ell = 1, \dots, |V|$:
    * Sample $Z_\ell \sim \text{Lap}(16\lambda/\varepsilon)$
    * If $m - q(v_{\ell+1}, D) > G_\ell + Z_\ell$: Report $\ell$ and break

4. Return $\hat{v} \in \{v_1, \dots, v_\ell\}$ sampled w.p. $\propto \exp(\varepsilon q(v_i, D)/4\lambda)$.
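A compact Python sketch may help make the control flow of Algorithm 5 concrete. This is an illustrative reimplementation under our own conventions (in particular, we adopt $q(v_{|V|+1}, D) = -\infty$ so Step 3 always terminates); it is not the authors' code:

```python
import math
import random

def _laplace(b, rng):
    # Lap(b) sampled as b times the difference of two independent Exp(1) draws.
    return b * (rng.expovariate(1.0) - rng.expovariate(1.0))

def lmm(qualities, lam, eps, delta, rng=random):
    """Sketch of the Large Margin Mechanism; qualities[i] = q(v_i, D)."""
    order = sorted(range(len(qualities)), key=lambda i: -qualities[i])
    q = [qualities[i] for i in order]          # Step 1: q[0] >= q[1] >= ...
    n = len(q)
    m = q[0] + _laplace(8 * lam / eps, rng)    # Step 2: noisy maximum
    for ell in range(1, n + 1):                # Step 3: search for a margin
        g = lam * (3 + 4 * math.log(2 * ell / delta) / eps)
        G = (8 * lam * math.log(2 / delta) / eps
             + 16 * lam * math.log(7 * ell ** 2 / delta) / eps + g)
        z = _laplace(16 * lam / eps, rng)
        nxt = q[ell] if ell < n else float("-inf")  # convention: q(v_{n+1}) = -inf
        if m - nxt > G + z:
            break
    # Step 4: exponential mechanism over the top-ell items (max-shifted weights
    # for numerical stability; shifting does not change the distribution).
    scores = [eps * qi / (4 * lam) for qi in q[:ell]]
    top = max(scores)
    weights = [math.exp(s - top) for s in scores]
    return order[rng.choices(range(ell), weights=weights)[0]]
```

Note how half of the budget is implicitly reserved for Steps 2 and 3, which is why the sampling exponent in Step 4 uses $\varepsilon/4\lambda$ rather than $\varepsilon/2\lambda$.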
* Suppose $D \in X^n$ satisfies the $(\ell, \gamma)$-margin condition for

$$ \gamma = \frac{24\lambda \ln(1/\beta)}{\varepsilon} + G_{\ell} $$

for some $\beta > 0$. Then there exists an event $E$ with $\Pr[E] \ge 1 - \beta$ such that

$$ \mathbb{E}[q(\hat{v}, D)|E] \ge \text{OPT} - \frac{4\lambda \ln \ell}{\varepsilon}, $$

where $\hat{v}$ is the output of LMM($D$).

Our presentation of Algorithm 5 differs slightly from that of Chaudhuri et al. (2014). Namely, we simplify the choice of the noisy maximum $m$, and redistribute the algorithm's use of the privacy budget $\varepsilon$ with an eye toward better performance in applications. Because of these small changes, we sketch the proof of Proposition 5.1 for completeness.

**Privacy Analysis of Proposition 5.1.** Algorithm 5 can be thought of as releasing two items in stages: first, the margin parameter $\ell$ in Step 3, and second, the item $\hat{v}$ sampled via the exponential mechanism in Step 4. We first claim that releasing the margin parameter $\ell$ guarantees $(\varepsilon/2, 0)$-differential privacy. This follows because Steps 2 and 3 taken together are an instantiation of the "AboveThreshold" algorithm, as presented by Dwork and Roth (2014, Theorem 3.23), with respect to the sensitivity-$2\lambda$ functions $q(v_1, D) - q(v_{\ell+1}, D)$. Denote the output $\ell$ of the algorithm at Step 3 by $S(D)$.
We now establish that Step 4 provides differential privacy. Following Chaudhuri et al. (2014), we let $A(\ell, D)$ capture the behavior of the algorithm in Step 4, where on receiving $\ell$ from Step 3, it samples from the exponential mechanism on the top $\ell$ elements. They proved the following lemma about $A(\ell, D)$:

**Lemma 5.2** (Chaudhuri et al., 2014, Lemma 5). If $D$ satisfies the $(\ell, \gamma)$-margin condition with

$$ \gamma \ge 2\lambda \left( 1 + \frac{2 \ln(\ell / \delta')}{\varepsilon} \right) $$

for some $\delta' > 0$, then for every neighbor $D' \sim D$ and any $T \subseteq V$, we have

$$ \Pr[A(\ell, D) \in T] \le e^{\varepsilon/2} \Pr[A(\ell, D') \in T] + \delta'. $$

Now fix neighboring datasets $D \sim D'$. Let $\mathcal{L}$ denote the set of $\ell$ for which $q(v_1, D) - q(v_{\ell+1}, D) \ge g_\ell$. By definition, if $\ell = S(D) \in \mathcal{L}$, then $D$ indeed satisfies the $(\ell, g_\ell)$-margin condition. Moreover, by tail bounds on the Laplace distribution,
$$
\begin{align*}
\Pr[S(D) \notin \mathcal{L}] &\le \Pr[Z > 8\lambda \ln(2/\delta)/\varepsilon \lor (\exists \ell \in \{1, \dots, |V|\} : Z_\ell < -16\lambda \ln(7\ell^2/\delta)/\varepsilon)] \\
&\le \frac{\delta}{4} + \sum_{\ell=1}^{|V|} \frac{6\delta}{4\pi^2 \ell^2} \\
&\le \frac{\delta}{2}.
\end{align*}
$$

Hence, we have that for any $T \subseteq V$,

$$
\begin{align*}
\Pr[\text{LMM}(D) \in T] &\le \sum_{\ell \in \mathcal{L}} \Pr[\text{LMM}(D) \in T | S(D) = \ell] \cdot \Pr[S(D) = \ell] + \Pr[S(D) \notin \mathcal{L}] \\
&\le \sum_{\ell \in \mathcal{L}} \Pr[\text{LMM}(D) \in T | S(D) = \ell] \cdot e^{\varepsilon/2} \Pr[S(D') = \ell] + \frac{\delta}{2} \\
&\le \sum_{\ell \in \mathcal{L}} (e^{\varepsilon/2} \Pr[\text{LMM}(D') \in T | S(D') = \ell] + e^{-\varepsilon/2} \frac{\delta}{2}) \cdot e^{\varepsilon/2} \Pr[S(D') = \ell] + \frac{\delta}{2} \quad \text{by Lemma 5.2} \\
&\le e^{\varepsilon} \Pr[\text{LMM}(D') \in T] + \delta.
\end{align*}
$$

This completes the privacy proof of Proposition 5.1.
**Utility Analysis of Proposition 5.1.** Suppose $D$ satisfies the $(\ell, \gamma)$-margin condition with

$$
\gamma \ge \frac{24\lambda \ln(1/\beta)}{\varepsilon} + G_{\ell},
$$

for some $\beta > 0$. By the tail bound for the Laplace distribution and a union bound, we have that with probability at least $1 - \beta$,

$$ Z \geq -\frac{8\lambda}{\varepsilon} \ln \frac{1}{\beta} \quad \text{and} \quad Z_{\ell} \leq \frac{16\lambda}{\varepsilon} \ln \frac{1}{\beta}. $$

Let $E$ be the event where this occurs. If $E$ occurs, then indeed we have

$$ (q(v_1, D) + Z) - q(v_{\ell+1}, D) > G_{\ell} + Z_{\ell}, $$

and hence Step 3 terminates outputting some $\ell' \le \ell$. By Proposition 2.5, it follows that

$$ \mathbb{E}[q(\hat{v}, D)|E] \geq \mathrm{OPT} - \frac{4\lambda \ln \ell}{\varepsilon}. $$

Replacing the exponential mechanism with the large margin mechanism gives analogues of our results for monotone submodular maximization with a cardinality constraint, monotone submodular maximization over a *p*-extendible system, and non-monotone submodular maximization with a cardinality constraint:
**Theorem 5.3.** Suppose $f_D: 2^V \to \mathbb{R}$ is monotone and has sensitivity $\lambda$. Then instantiating Algorithm 1 with $\mathcal{O} = \text{LMM}$ and parameters $\varepsilon_0, \delta_0 > 0$ provides $(k\varepsilon_0, k\delta_0)$-differential privacy. It also provides $(\varepsilon, \delta' + k\delta_0)$-differential privacy for every $\delta' > 0$ with $\varepsilon = k\varepsilon_0^2/2 + \varepsilon_0 \cdot \sqrt{2k \ln(1/\delta')}$.

Moreover, for every $D \in X^n$, there exists an event $E$ with $\Pr[E] \ge 1 - \beta$ such that

$$ \mathbb{E}[f_D(S_k)|E] \ge \left(1 - \frac{1}{e}\right) \text{OPT} - \sum_{i=1}^{k} \frac{4\lambda \ln \ell_i}{\varepsilon_0} $$

where $S_k \leftarrow_R \mathcal{G}^{\text{LMM}}(D)$, and $D$ satisfies the $(\ell_i, \gamma_i)$-margin condition with respect to every function of the form $q_i(v, D) = f_D(\hat{S}_{i-1} \cup \{v\}) - f_D(\hat{S}_{i-1})$, with $\gamma_i = 24\lambda \ln(k/\beta)/\varepsilon + G_{\ell_i}$.

**Theorem 5.4.** Instantiating Algorithm 2 with $\mathcal{O} = \text{LMM}$ under all of the conditions of Theorem 5.3 gives the same privacy guarantee (replacing $k$ with $r(\mathcal{I})$) and gives

$$ \mathbb{E}[f_D(S)|E] \ge \frac{1}{p+1} \cdot \text{OPT} - \sum_{i=1}^{r(\mathcal{I})} \frac{4\lambda \ln \ell_i}{\varepsilon_0}. $$

**Theorem 5.5.** Instantiating Algorithm 4 with $\mathcal{O} = \text{LMM}$ under all of the conditions of Theorem 5.3 gives the same privacy guarantee and gives

$$ \mathbb{E}[f_D(S_k)|E] \ge \frac{1}{e} \left(1 - \frac{1}{e}\right) \text{OPT} - \sum_{i=1}^{k} \frac{4\lambda \ln \ell_i}{\varepsilon_0}. $$

Moreover, if $f_D$ is monotone, then

$$ \mathbb{E}[f_D(S_k)|E] \ge 0.468 \, \text{OPT} - \sum_{i=1}^{k} \frac{4\lambda \ln \ell_i}{\varepsilon_0}. $$

# 6 Experimental Results

In this section we describe two concrete applications of our mechanisms.
## 6.1 Location Privacy

We analyze a dataset of 10,000 Uber pickups in Manhattan in April 2014 (UberDataset). Each individual entry in the dataset consists of the longitude and latitude coordinates of the pickup location. We want to use this dataset to select $k$ public locations as waiting spots for idle Uber drivers, while also guaranteeing differential privacy for the passengers whose locations appear in this dataset.¹ We consider two different public sets of locations $L$:

* $L_{Popular}$ is a set of 33 popular locations in Manhattan.

* $L_{Grid}$ is a set of 33 locations spread evenly across Manhattan in a grid-like manner.

We define a utility function $M(i, j)$ to be the normalized Manhattan distance between a pickup location $i$ and the waiting location $j$. That is, if pickup location $i$ is located at coordinates $(i_1, i_2)$ and the waiting location $j$ is located at coordinates $(j_1, j_2)$, then $M(i, j) = \frac{|i_1 - j_1| + |i_2 - j_2|}{m}$, where $m = 0.266$ is the Manhattan distance between the two most distant points in Manhattan. This normalization ensures that $0 \le M(i, j) \le 1$ for all $i, j$. To obtain a maximization problem, we define the following objective function: $f_D(S) = n - \sum_{i \in D} \min_{j \in S} M(i, j)$, where $n = |D| = 10000$.

¹Under the assumption that each pickup corresponds to a unique individual.

Figure 1: (a) and (b) show utility for various cardinalities ($k$). (c) and (d) fix $k = 3$ and show utility for various privacy parameters ($\varepsilon$). Utility is normalized to be between 0 and 1. (e)–(h) show a representative top 3 set under various settings.
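A minimal non-private greedy baseline for this objective can be sketched as follows; `f_D` and `greedy` are our own helper names, and the coordinates used below are toy values for illustration, not the real dataset:

```python
# Non-private greedy maximization of f_D(S) = n - sum_i min_{j in S} M(i, j).
# Pickups and candidate waiting spots are (longitude, latitude)-style pairs.
def f_D(pickups, S, m=0.266):
    return len(pickups) - sum(
        min((abs(i1 - j1) + abs(i2 - j2)) / m for (j1, j2) in S)
        for (i1, i2) in pickups
    )

def greedy(pickups, candidates, k):
    S = []
    for _ in range(k):
        # Pick the candidate with the largest marginal gain.
        best = max((c for c in candidates if c not in S),
                   key=lambda c: f_D(pickups, S + [c]))
        S.append(best)
    return S
```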
**Observation 6.1.** The function $f_D$ is $\lambda$-decomposable for $\lambda = 1$ (and hence has sensitivity 1).

This form of objective function is known to be monotone submodular, and so we can use the greedy algorithms studied in this paper. We use $\epsilon = 0.1$ and $\delta = 2^{-20}$. For our settings of parameters, "basic composition" outperforms "advanced composition," so the privacy budget of $\epsilon = 0.1$ is split equally across the $k$ iterations, meaning the mechanism at each iteration uses $\epsilon_0 = \frac{\epsilon}{k}$. Our figures plot the average utility across 100 simulations.
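The basic-versus-advanced comparison can be reproduced with the standard composition formulas; a small sketch with our own helper names (for $\epsilon = 0.1$ and small $k$, basic composition gives the smaller total):

```python
import math

# Total privacy cost of running k eps0-DP steps, under basic composition
# and under advanced composition (the latter with slack delta').
def basic_total(eps0, k):
    return k * eps0

def advanced_total(eps0, k, delta_prime):
    return k * eps0 ** 2 / 2 + eps0 * math.sqrt(2 * k * math.log(1 / delta_prime))

eps, k, delta_prime = 0.1, 3, 2.0 ** -20
eps0 = eps / k  # per-iteration budget when splitting eps via basic composition
```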
From Figures 1(a) and (b) we see that the results for both $L_{Popular}$ and $L_{Grid}$ are relatively similar and unsurprising. The non-private greedy algorithm achieves the highest utility, but both the exponential mechanism (EM)-based and large margin mechanism (LMM)-based greedy algorithms exhibit comparable utility while preserving a high level of privacy. Interestingly, we also see that the utilities of the EM-based and LMM-based algorithms are almost identical for both $L_{Popular}$ and $L_{Grid}$. This indicates that our mechanisms are actually selecting good locations, rather than just getting lucky because there are a lot of good locations to choose from.

Figure 2: Privately selecting health features, from national health examination surveys, that correlate most with diabetes.

Figures 1(c) and (d) show how the utility of the EM-based and LMM-based algorithms varies with the privacy parameter $\epsilon$. We can also think of this as varying the dataset size for a fixed $\epsilon$. We fix $k=3$ and take the average of 100 simulations for each value of $\epsilon$. We see that even for very small $\epsilon$, our algorithms outperform fully random selection. As $\epsilon$ increases, so does the utility. It is not shown in this figure, but varying $\delta$ has very little effect.

From Figures 1(e)–(h), we see that both the non-private and private algorithms select public locations that are relatively close to each other. For example, for the $L_{Popular}$ set of locations, the Empire State Building is close to the New York Public Library, the Soho Grand Hotel is close to NYU, and the Grand Army Plaza is close to the UN Headquarters. As a result, the private mechanisms manage to achieve comparable utility, while also masking the users' exact locations.

The theory described in Section 5 suggests that, at least asymptotically, the large margin mechanism-based algorithm should outperform the exponential mechanism-based algorithm. However, in our experiments, we find that the large margin mechanism is generally only able to find a margin in the first iteration of the greedy algorithm. This is because the threshold for finding a margin depends only on $\epsilon$, $\delta$, and $n$, and thus it stays the same across all *k* iterations. On the other hand, the marginal gain at each iteration drops very quickly, so the mechanism fails to find a margin and thus samples from all remaining locations. However, since the large margin mechanism spends half of its privacy budget trying to find a margin, its sampling step gives slightly worse guarantees than the plain exponential mechanism, yielding the slightly weaker results we see in the figures.
## 6.2 Feature Selection Privacy

We analyze a dataset created from a combination of National Health Examination Surveys ranging from 2007 to 2014 (NHANES Dataset). There are $n = 23,876$ individuals in the dataset with information on whether or not they have diabetes, along with $m = 23$ other potentially related binary health features. Our goal is to privately select *k* of these features that provide as much information about the diabetes class variable as possible.

More specifically, our goal is to maximize the mutual information between $Y$ and $X_S$, where $Y$ is a binary random variable indicating whether or not an individual has diabetes and $X_S$ is a random variable that represents a set $S$ of $k$ binary health features. Mutual information takes the form:

$$I(Y; X) = \sum_{y \in Y} \sum_{x \in X} p(x, y) \log_2 \left( \frac{p(x, y)}{p(x)p(y)} \right).$$

Under the Naive Bayes assumption, we suppose the joint distribution on $(Y, X_1, \ldots, X_k)$ takes the form $p(y, x_1, \ldots, x_k) = p(y) \prod_{i=1}^k p(x_i | y)$. Therefore, we can easily specify the entire probability distribution by finding each $p(x_i | y)$. We estimate each $p(x_i | y)$ by counting frequencies in the dataset.
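Under this factorization, $I(Y; X_S)$ can be computed exactly from the estimated conditionals by summing over the $2^{|S|}$ feature patterns; the following is an illustrative sketch with our own helper name, for binary $Y$ and binary features:

```python
import itertools
import math

def mutual_info_nb(py, px_given_y, S):
    """I(Y; X_S) under the Naive Bayes assumption.
    py[y] = p(y); px_given_y[i][y] = p(X_i = 1 | Y = y); S = feature indices."""
    info = 0.0
    for x in itertools.product((0, 1), repeat=len(S)):
        # The joint p(x, y) factorizes as p(y) * prod_i p(x_i | y).
        pxy = {}
        for y in (0, 1):
            p = py[y]
            for bit, i in zip(x, S):
                p *= px_given_y[i][y] if bit else 1.0 - px_given_y[i][y]
            pxy[y] = p
        px = pxy[0] + pxy[1]
        for y in (0, 1):
            if pxy[y] > 0.0:
                info += pxy[y] * math.log2(pxy[y] / (px * py[y]))
    return info
```

Note the per-call cost is exponential in $|S|$, which is fine for the small $k$ used in these experiments.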
Our goal is to choose a size-*k* subset *S* of the features in order to maximize $f_D(S) = I(Y; X_S)$. Mutual information (under the Naive Bayes assumption) for feature selection is known to be monotone submodular in *S* (Krause & Guestrin, 2005), and thus we can apply the greedy algorithms described in this paper.

**Claim 6.2.** In iteration *i* of the greedy algorithm, the sensitivity of $f_D(S)$ is $\frac{(2i+1)\log_2(n)}{n}$.

We run 1,000 simulations with $\epsilon = 1.0$ and $\delta = 2^{-20}$. As we can see from Figure 2(b), our private mechanisms maintain utility comparable to the non-private algorithm. We also observe an interesting phenomenon: the expected utility obtained by our mechanism is not necessarily monotonically increasing in the number of features selected. This is an artifact of the fact that if we are selecting *k* features, then composition requires us to divide $\epsilon$ so that each iteration uses privacy budget $\epsilon/k$. This is problematic for this particular application because there happens to be one feature (insulin administration) with much higher value than the rest. Therefore, the reduced probability of picking this single best feature (as a result of the smaller per-iteration privacy budget) is not compensated for by selecting more features.

From Figure 2(c), we see that both the private and non-private mechanisms generally select insulin administration as the top feature. However, while all three of the top features selected by the non-private algorithm are clearly related to diabetes, the private mechanisms tend to select one feature (in our case, gender or having received a blood transfusion) that may not be quite as relevant.
# 7 Conclusion

We have presented a general framework for maximizing submodular functions while guaranteeing differential privacy. Our results demonstrate that simple and flexible greedy algorithms can preserve privacy while achieving competitive guarantees for a variety of submodular maximization problems: for all functions under cardinality constraints, as well as for monotone functions under matroid and *p*-extendible system constraints. Through our search for algorithms that could be made differentially private, we discovered a non-monotone submodular maximization algorithm whose guarantees are novel even without concern for privacy. Finally, our experiments show that our algorithms are indeed competitive with their non-private counterparts.

**Acknowledgments.** This work was supported by a DARPA Young Faculty Award (D16AP00046), a Simons-Berkeley fellowship, and ERC StG 307036. This work was done in part while Amin Karbasi and Andreas Krause were visiting the Simons Institute for the Theory of Computing.

## References
Bassily, Raef, Smith, Adam D., and Thakurta, Abhradeep. Private empirical risk minimization: Efficient algorithms and tight error bounds. In *FOCS*, pp. 464–473, 2014.

Bassily, Raef, Nissim, Kobbi, Smith, Adam D., Steinke, Thomas, Stemmer, Uri, and Ullman, Jonathan. Algorithmic stability for adaptive data analysis. In *STOC*, pp. 1046–1059, 2016.

Beimel, Amos, Nissim, Kobbi, and Stemmer, Uri. Private learning and sanitization: Pure vs. approximate differential privacy. *Theory of Computing*, 12(1):1–61, 2016.

Buchbinder, Niv, Feldman, Moran, Naor, Joseph, and Schwartz, Roy. Submodular maximization with cardinality constraints. In *SODA*, pp. 1433–1452, 2014.

Bun, Mark and Steinke, Thomas. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In *TCC*, pp. 635–658, 2016.

Bun, Mark, Nissim, Kobbi, Stemmer, Uri, and Vadhan, Salil P. Differentially private release and learning of threshold functions. In *FOCS*, pp. 634–649, 2015.

Călinescu, Gruia, Chekuri, Chandra, Pál, Martin, and Vondrák, Jan. Maximizing a monotone submodular function subject to a matroid constraint. *SIAM Journal on Computing*, 2011.

Chaudhuri, Kamalika, Hsu, Daniel J., and Song, Shuang. The large margin mechanism for differentially private maximization. In *NIPS*, pp. 1287–1295, 2014.

Cheraghchi, Mahdi, et al. Submodular functions are noise stable. In *SODA*, 2012.

Dwork, Cynthia and Lei, Jing. Differential privacy and robust statistics. In *STOC*, pp. 371–380, 2009.

Dwork, Cynthia and Roth, Aaron. The algorithmic foundations of differential privacy. *Foundations and Trends in Theoretical Computer Science*, 9(3-4):211–407, 2014.

Dwork, Cynthia, McSherry, Frank, Nissim, Kobbi, and Smith, Adam D. Calibrating noise to sensitivity in private data analysis. In *TCC*, pp. 265–284, 2006.

Dwork, Cynthia, Rothblum, Guy N., and Vadhan, Salil P. Boosting and differential privacy. In *FOCS*, pp. 51–60, 2010.

Feige, U., Mirrokni, V., and Vondrak, J. Maximizing non-monotone submodular functions. In *FOCS*, 2007.

Feldman, Moran, Naor, Joseph, and Schwartz, Roy. A unified continuous greedy algorithm for submodular maximization. In *FOCS*, 2011.

Feldman, Moran, Harshaw, Christopher, and Karbasi, Amin. Greed is good: Near-optimal submodular maximization via greedy optimization. In *COLT*, 2017.

Fisher, Marshall L., Nemhauser, George L., and Wolsey, Laurence A. An analysis of approximations for maximizing submodular set functions - II. *Mathematical Programming Study*, (8), 1978.

Gupta, Anupam, Ligett, Katrina, McSherry, Frank, Roth, Aaron, and Talwar, Kunal. Differentially private combinatorial optimization. In *SODA*, pp. 1106–1125, 2010.

Hassidim, Avinatan and Singer, Yaron. Submodular optimization under noise. *CoRR*, abs/1601.03095, 2016. URL http://arxiv.org/abs/1601.03095.

Jenkyns, T. A. The efficacy of the "greedy" algorithm. In *South Eastern Conference on Combinatorics, Graph Theory and Computing*, 1976.

Kempe, David, Kleinberg, Jon, and Tardos, Éva. Maximizing the spread of influence through a social network. In *KDD*, 2003.

Kirchhoff, Katrin and Bilmes, Jeff. Submodularity for data selection in statistical machine translation. In *EMNLP*, 2014.

Krause, A. and Guestrin, C. Near-optimal nonmyopic value of information in graphical models. In *UAI*, 2005.

Krause, Andreas and Gomes, Ryan G. Budgeted nonparametric learning from data streams. In *ICML*, 2010.

Lin, Hui and Bilmes, Jeff. A class of submodular functions for document summarization. In *ACL*, 2011.

McSherry, Frank and Talwar, Kunal. Mechanism design via differential privacy. In *FOCS*, pp. 94–103, 2007.

Mestre, Julián. Greedy in approximation algorithms. In *ESA*, pp. 528–539, 2006.

Mirzasoleiman, Baharan, Badanidiyuru, Ashwinkumar, Karbasi, Amin, Vondrak, Jan, and Krause, Andreas. Lazier than lazy greedy. In *AAAI*, 2015.

Mirzasoleiman, Baharan, Badanidiyuru, Ashwinkumar, and Karbasi, Amin. Fast constrained submodular maximization: Personalized data summarization. In *ICML*, 2016a.

Mirzasoleiman, Baharan, Zadimoghaddam, Morteza, and Karbasi, Amin. Fast distributed submodular cover: Public-private data summarization. In *NIPS*, 2016b.

Nemhauser, George L., Wolsey, Laurence A., and Fisher, Marshall L. An analysis of approximations for maximizing submodular set functions - I. *Mathematical Programming*, 1978.

NHANESDataset. National Health and Nutrition Examination Survey (2007–2014). URL https://wwwn.cdc.gov/nchs/nhanes/default.aspx.

Papadimitriou, Christos H., Schapira, Michael, and Singer, Yaron. On the hardness of being truthful. In *FOCS*, pp. 250–259, 2008.

Singla, Adish, Bogunovic, Ilija, Bartók, Gábor, Karbasi, Amin, and Krause, Andreas. Near-optimally teaching the crowd to classify. In *ICML*, 2014.

Sipos, Ruben, Swaminathan, Adith, Shivaswamy, Pannaga, and Joachims, Thorsten. Temporal corpus summarization using submodular word coverage. In *CIKM*, 2012.

Song, Shuang, Chaudhuri, Kamalika, and Sarwate, Anand D. Stochastic gradient descent with differentially private updates. In *GlobalSIP*, pp. 245–248, 2013.

UberDataset. Uber pickups in New York City. URL https://www.kaggle.com/fivethirtyeight/uber-pickups-in-new-york-city.
samples/texts_merged/1096954.md
ADDED
|
@@ -0,0 +1,849 @@
---PAGE_BREAK---
# Radiative correction in approximate treatments of electromagnetic scattering by point and body scatterers

Eric C. Le Ru,∗ Walter R. C. Somerville, and Baptiste Auguié

*The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Chemical and Physical Sciences, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand*

*(Received 15 June 2012; revised manuscript received 30 September 2012; published 10 January 2013)*

The transition-matrix (T-matrix) approach provides a general formalism to study scattering problems in various areas of physics, including acoustics (scalar fields) and electromagnetics (vector fields), and is related to the theory of the scattering matrix (S matrix) used in quantum mechanics and quantum field theory. Focusing on electromagnetic scattering, we highlight an alternative formulation of the T-matrix approach, based on the use of the reactance matrix or K matrix, which is more suited to formal studies of energy-conservation constraints (such as the optical theorem). We show in particular that electrostatics or quasistatic approximations can be corrected within this framework to satisfy the energy-conservation constraints associated with radiation. A general formula for such a radiative correction is explicitly obtained, and empirical expressions proposed in earlier studies are shown to be special cases of this general formula. This work therefore provides a justification of the empirical radiative correction to the dipolar polarizability and a generalization of this correction to any types of point or body scatterers of arbitrary shapes, including higher multipolar orders.

DOI: 10.1103/PhysRevA.87.012504

PACS number(s): 31.30.jn, 42.25.Fx, 11.55.-m, 41.20.-q

## I. INTRODUCTION

Radiative reaction, also known as radiation damping, refers to the fact that the electromagnetic (EM) field created by a charge or emitter must affect its own dynamics (e.g., motion or power radiated) [1]. When applied to elementary charges [2], no satisfactory classical treatment of this effect has been found [3], yet the radiative reaction is at the core of the concepts of self-energy and renormalization in quantum electrodynamics (QED) [1]. In fact, using a Green's function approach and regularization techniques akin to those of QED, a classical treatment of the radiative reaction for point electric dipole scatterers can be obtained [4].
Interestingly, an equivalent result had been obtained heuristically by adding a reaction field postulated from simple energy-conservation arguments [5]. These arguments were inspired by research into simple models of the optical properties of subwavelength particles, notably for applications in plasmonics and surface-enhanced Raman spectroscopy [6], and we briefly present a similar derivation here. The main idea is to use the solution of the electrostatics problem to derive an approximate dipolar polarizability $\alpha_0$ for the particle. $\alpha_0$, assumed isotropic here for simplicity, defines the electrostatics response of the particle and is such that a uniform external electrostatic field $E_0$ induces a dipole moment $p_0 = \alpha_0 E_0$. In such an electrostatics problem, the power absorbed by the particle equals the work done by the external field on the charges [3] and is therefore $P_{\text{abs}}^0 = (1/2)\omega \text{Im}(\alpha_0)|E_0|^2$.

In the electrostatics, or quasistatic, approximation, also often called the Rayleigh approximation (see, e.g., Chap. 5 in Ref. [7]), the far-field optical response of a subwavelength scatterer to an incident electric field $E_{\text{inc}}$ oscillating at frequency $\omega$ is approximated as that of an oscillating induced dipole given by $p = \alpha E_{\text{inc}}$. Note that we use complex notations with the $\exp(-i\omega t)$ convention and that SI units are used throughout. We also define the wave vector in the medium $k_1 = \sqrt{\epsilon_1}\omega/c$. The electrostatics approximation consists in approximating $\alpha$ by $\alpha_0$, even though the new polarizability $\alpha$ is in principle different from $\alpha_0$, because the electrostatics solution does not account for radiation. The power radiated (or scattered) in this approximation is therefore $P_{\text{sca}} = (\omega k_1^3 |\alpha_0|^2 |E_{\text{inc}}|^2)/(12\pi \epsilon_0 \epsilon_1)$. The power absorbed by the particle is approximated by its electrostatics value $P_{\text{abs}} = P_{\text{abs}}^0$. The extinguished power is the power extracted by this point dipole from the incident EM field (the work of the field on the dipole) and is $P_{\text{ext}} = (1/2)\omega \text{Im}(\alpha)|E_{\text{inc}}|^2$ in the general case. In the electrostatics approximation ($\alpha \approx \alpha_0$) it therefore reduces to $P_{\text{ext}} = P_{\text{abs}}$, which appears to contradict the energy-conservation condition $P_{\text{ext}} = P_{\text{sca}} + P_{\text{abs}}$. This is not so surprising, since the electrostatics solution does not account for radiation (scattering) effects. In fact, there is no contradiction within the strict range of validity of the electrostatics approximation, i.e., in the limit of vanishing size, as we then have $P_{\text{sca}} \ll P_{\text{ext}}, P_{\text{abs}}$ because $\alpha_0$ scales with particle volume. Nevertheless, it is useful in many instances to correct this problem and thereby extend the range of applicability of the electrostatics approximation. This can be achieved, as proposed in Ref. [5], by defining a radiation-corrected polarizability, which by construction enforces the energy-conservation condition $P_{\text{ext}} = P_{\text{sca}} + P_{\text{abs}}$, i.e.,

$$ \mathrm{Im}\left(-\frac{1}{\alpha^{\mathrm{RC}}}\right) = \frac{\mathrm{Im}(\alpha^{\mathrm{RC}})}{|\alpha^{\mathrm{RC}}|^2} = \mathrm{Im}\left(-\frac{1}{\alpha_0}\right) + \frac{k_1^3}{6\pi \epsilon_0 \epsilon_1}. \quad (1) $$

This condition on the imaginary part only is not sufficient to define $\alpha^{\text{RC}}$ uniquely (unless Kramers-Kronig relations [8] are used), and the additional condition that $\text{Re}(1/\alpha^{\text{RC}}) = \text{Re}(1/\alpha_0)$ is usually assumed, without further justification, to obtain the radiative correction to the polarizability as

$$ \frac{1}{\alpha^{\text{RC}}} = \frac{1}{\alpha_0} - i \frac{k_1^3}{6\pi \epsilon_0 \epsilon_1} \quad (2) $$

---PAGE_BREAK---

or, equivalently [5],

$$
\alpha^{\text{RC}} = \frac{\alpha_0}{1 - i \frac{k_1^3}{6\pi\epsilon_0\epsilon_1} \alpha_0}. \quad (3)
$$
This expression can also be derived rigorously in the special case of spherical particles by expansion of the Mie coefficients [7,9] and can be generalized to spheroidal particles [5,10]. This corrected polarizability has been used in numerous contexts, including, for example, surface-enhanced Raman scattering [5,6], plasmonics [10–12], or the discrete dipole approximation [13–16]. More recently, this radiative correction has also been generalized, using again a heuristic approach based on the optical theorem, to the case of point magnetic dipole and dipolar scatterers with magnetoelectric coupling [17], for the study of metamaterials.
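As a numerical illustration of Eqs. (1)-(3), the sketch below applies the radiative correction to the electrostatics (Clausius-Mossotti) polarizability of a small sphere and checks the energy balance. All material and size parameters here are illustrative placeholders chosen for this example, not values taken from the paper.

```python
import numpy as np

eps0 = 8.8541878128e-12      # vacuum permittivity (SI units)
eps1 = 1.33**2               # nonabsorbing host medium (water-like, illustrative)
eps2 = -11.0 + 1.3j          # particle dielectric function (illustrative value)
a, lam = 20e-9, 633e-9       # sphere radius and wavelength (m), illustrative
k1 = 2 * np.pi * np.sqrt(eps1) / lam

# Electrostatics (Clausius-Mossotti) dipolar polarizability of a small sphere
alpha0 = 4 * np.pi * eps0 * eps1 * a**3 * (eps2 - eps1) / (eps2 + 2 * eps1)

# Radiation-corrected polarizability, Eq. (3)
rad = k1**3 / (6 * np.pi * eps0 * eps1)
alphaRC = alpha0 / (1 - 1j * rad * alpha0)

# Eq. (1): Im(-1/alphaRC) = Im(-1/alpha0) + k1^3/(6 pi eps0 eps1)
print(np.isclose(np.imag(-1 / alphaRC), np.imag(-1 / alpha0) + rad))

# Energy balance P_ext = P_sca + P_abs (per unit |E|^2, with omega factored out).
# P_abs uses the electrostatics absorption evaluated with the corrected dipole.
P_ext = 0.5 * np.imag(alphaRC)
P_sca = 0.5 * rad * abs(alphaRC)**2
P_abs = 0.5 * np.imag(alpha0) * abs(alphaRC / alpha0)**2
print(np.isclose(P_ext, P_sca + P_abs))
```

With the uncorrected $\alpha_0$ in place of $\alpha^{\text{RC}}$, the same balance fails by exactly the scattered power, which is the inconsistency discussed above.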
In this paper, we propose a general framework for the study and further understanding of the concept of radiative correction, based on a simple reformulation of the *T*-matrix approach to EM scattering [18,19]. The *T*-matrix formalism, often used in conjunction with the extended boundary condition method (EBCM), or null-field method, was introduced more than 40 years ago [20] and is arguably one of the most elegant and efficient methods to solve problems of electromagnetic scattering by particles of arbitrary shape and size [19,21–26]. It has been applied, for example, to the study of scattering by aerosols [27], metallic nanoparticles [28–31], and collections of spheres [32], and also to more formal studies of EM scattering [33]. It has also been used extensively for acoustic scattering [26,34].
Our reformulation emphasizes the important role of the *reactance matrix*, or *K* matrix [18,35], in relation to energy conservation and radiative correction. Although the *K* matrix has been used occasionally in the past in the context of the quantum theory of scattering [18,35–37], it seldom appears in EM theory. We show that all the aforementioned results for the radiative correction in EM scattering are special cases of a general formula derived in this work. In addition to highlighting the importance of the *K* matrix for general scattering theory, this work therefore provides a formal justification of existing radiative-correction formulas and a generalization applicable to any type of point scatterer or particle of arbitrary shape. The latter point is a direct consequence of the fact that the *T*-matrix formulation of EM scattering is extremely general. It applies to particles of arbitrary shape and may also cover, for example [19,25], optically active or anisotropic materials, layered particles, and multiple scattering by collections of particles. The proposed *K*-matrix reformulation and associated radiative correction are therefore applicable to all the aforementioned cases.

The paper is organized as follows: In Sec. II, we briefly review the general principles of the *T*-matrix approach to EM scattering. We then introduce in Sec. III an alternative, but closely related, formulation of the problem in terms of the *K* matrix. In Sec. IV, we discuss the implications of the *K*-matrix formulation with regard to radiative corrections and obtain a general formula [Eq. (21)] for the radiative correction in EM scattering. Finally, in Sec. V, we show explicitly how this formula applies to specific cases of radiative correction that have been presented in the literature, therefore justifying, and in some cases extending, these previously empirical results.

## II. T-MATRIX APPROACH

### A. Definition of the T matrix

We consider the general problem of electromagnetic scattering by a body characterized by a linear local isotropic relative dielectric function $\epsilon_2$ (possibly frequency dependent) embedded in a nonabsorbing medium of refractive index $n_1$ (and relative dielectric function $\epsilon_1 = n_1^2$). Within the T-matrix approach [21,22], the EM field solution is expanded in a basis of vector spherical wave functions (VSWFs) in a similar fashion as for Mie theory [7]. We here follow the conventions of Mishchenko [19] for the definition of the VSWFs (see Appendix A for details). The incident field $E_{\text{inc}}$ and internal field $E_{\text{int}}$ (the field in the region inside the particle) are regular at $r=0$ and can therefore be expressed in terms of regular VSWFs denoted $M_\nu^{(1)}$, $N_\nu^{(1)}$. The scattered field $E_{\text{sca}}$ must satisfy the Sommerfeld radiation condition and is therefore expanded in terms of outgoing spherical wave VSWFs denoted $M_\nu^{(3)}$, $N_\nu^{(3)}$. Explicitly,

$$
\mathbf{E}_{\text{inc}}(\mathbf{r}) = \sum_{\nu} a_{\nu} \mathbf{M}_{\nu}^{(1)}(k_1 \mathbf{r}) + b_{\nu} \mathbf{N}_{\nu}^{(1)}(k_1 \mathbf{r}),
$$

$$
\mathbf{E}_{\text{sca}}(\mathbf{r}) = \sum_{\nu} p_{\nu} \mathbf{M}_{\nu}^{(3)}(k_1 \mathbf{r}) + q_{\nu} \mathbf{N}_{\nu}^{(3)}(k_1 \mathbf{r}), \quad (4)
$$

$$
\mathbf{E}_{\text{int}}(\mathbf{r}) = \sum_{\nu} c_{\nu} \mathbf{M}_{\nu}^{(1)}(k_2 \mathbf{r}) + d_{\nu} \mathbf{N}_{\nu}^{(1)}(k_2 \mathbf{r}),
$$

where $k_i = (2\pi/\lambda)\sqrt{\epsilon_i}$ ($i=1,2$) are the wave-vector amplitudes in regions 1 (outside) and 2 (inside) and $\lambda$ is the excitation wavelength. These expansions can be represented as vectors, for example, $(p_\nu, q_\nu) \equiv (\mathbf{p}, \mathbf{q})$ for the scattered field, where the index $\nu = (n,m)$ combines the total $(n)$ and projected $(|m| \le n)$ angular momentum indices. The expansion of the incident field $(a_\nu, b_\nu)$ for a given scattering problem is known, with explicit expressions existing, for example, for plane waves [19].

By linearity of Maxwell's equations, the coefficients of the scattered field are linearly related to those of the incident field. This can be expressed explicitly by introducing the *T* matrix:

$$
\begin{pmatrix} \mathbf{p} \\ \mathbf{q} \end{pmatrix} = \mathbf{T} \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix}, \qquad (5)
$$

where **T** is an infinite square matrix, which can be written in block notation as

$$
\mathbf{T} = \begin{pmatrix}
\mathbf{T}^{11} & \mathbf{T}^{12} \\
\mathbf{T}^{21} & \mathbf{T}^{22}
\end{pmatrix}. \tag{6}
$$

In principle, from a knowledge of the *T* matrix (at a given wavelength), one can infer the scattering properties for any incident excitation. The *T*-matrix approach is therefore particularly suited for computations of the scattering properties of a collection of randomly oriented scatterers [38], which is indeed one of the important applications of this formalism [19].

---PAGE_BREAK---

We note that linear relationships involving the expansion coefficients of the internal field can also be written as

$$
\begin{pmatrix} \mathbf{p} \\ \mathbf{q} \end{pmatrix} = -\mathbf{P} \begin{pmatrix} \mathbf{c} \\ \mathbf{d} \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix} = \mathbf{Q} \begin{pmatrix} \mathbf{c} \\ \mathbf{d} \end{pmatrix}. \tag{7}
$$

The *T* matrix can therefore also be obtained from

$$
\mathbf{T} = -\mathbf{P}\mathbf{Q}^{-1}. \quad (8)
$$

This expression provides the basis for one of the most common approaches to calculating the *T* matrix in practice, namely, the extended boundary condition method (EBCM) or null-field method [19,21,22,24]. Within this approach, the matrix elements of **P** and **Q** are obtained analytically as surface integrals over the particle surface of VSWF cross products.

### B. Symmetry, unitarity, and energy conservation

The *T* matrix satisfies [19,21] symmetry relations related to optical reciprocity, along with unitarity relations related to energy conservation, i.e., the fact that the extinction cross section $\sigma_{\text{ext}}$ is the sum of the scattering $\sigma_{\text{sca}}$ and absorption $\sigma_{\text{abs}}$ cross sections (note that this is related to the optical theorem [7,39,40]). The optical reciprocity relations are typically easy to check and enforce, as they are related (see Appendix D) to ensuring the symmetry of certain matrices [21]. The energy-conservation condition is in general more problematic. It is typically expressed by introducing the *S* matrix (scattering matrix) defined as $\mathbf{S} = \mathbf{I} + 2\mathbf{T}$. For lossless (nonabsorbing) scatterers [for which $\text{Im}(\epsilon_2) = 0$], it can then be shown that energy conservation is equivalent to **S** being unitary [19,21]. In terms of the **T** matrix itself, this results in the somewhat more cumbersome condition

$$
\mathbf{T} + \mathbf{T}^{\dagger} = -2\mathbf{T}^{\dagger}\mathbf{T}, \quad (9)
$$

which can be viewed as the matrix form of the generalized optical theorem [18,39].
In EM scattering, absorbing or conducting scatterers, for which $\text{Im}(\epsilon_2) > 0$ (we exclude the special case of perfect conductors here), are also often considered, and the equality above no longer holds. In this general case, the inequality $\sigma_{\text{ext}} \ge \sigma_{\text{sca}}$ then requires that $\mathbf{I} - \mathbf{S}^{\dagger}\mathbf{S}$ be a Hermitian positive-semidefinite (HPSD) matrix (note that it is Hermitian by construction) [19], which results in a relatively cumbersome condition for **T**. The energy-conservation conditions for **T** can therefore be summarized as

$$
\begin{align}
& \text{Lossless: } \mathbf{T} + \mathbf{T}^{\dagger} = -2\mathbf{T}^{\dagger}\mathbf{T}, \notag \\
& \text{General: } [-\mathbf{T} - \mathbf{T}^{\dagger} - 2\mathbf{T}^{\dagger}\mathbf{T}] \text{ HPSD.} \tag{10}
\end{align}
$$

## III. K MATRIX

### A. Definition
We here highlight an alternative formulation of the *T*-matrix method, which simplifies the energy-conservation condition and naturally provides a connection with the radiative correction. Note that we will not attempt here to give a rigorous mathematical derivation, but rather focus on the new physical insights. Our proposed formulation is related to the reactance matrix or *K* matrix, which can be formally defined as the Cayley transform of the *S* matrix [37] and has been previously discussed in the context of the general quantum theory of scattering [18,35]. Explicitly, we have

$$
\mathbf{K} = i(\mathbf{I} - \mathbf{S})(\mathbf{I} + \mathbf{S})^{-1}. \quad (11)
$$

A simple consequence of this definition is that **S** being unitary is equivalent to **K** being Hermitian. In terms of the **T** matrix, we have

$$
\mathbf{K} = -i\mathbf{T}(\mathbf{I} + \mathbf{T})^{-1} = -i(\mathbf{I} + \mathbf{T})^{-1}\mathbf{T}. \quad (12)
$$

We note that $\mathbf{T}$ and $\mathbf{K}$ commute, and we also have

$$
\mathbf{K} + \mathbf{K}\mathbf{T} = -i\mathbf{T} = \mathbf{K} + \mathbf{T}\mathbf{K}, \quad (13)
$$

which, to pursue the analogy with quantum scattering, may be viewed as the matrix version of Heitler's integral equations [1,18,35]. *T* can be obtained from *K* using

$$
\mathbf{T} = i\mathbf{K}(\mathbf{I} - i\mathbf{K})^{-1} = i(\mathbf{I} - i\mathbf{K})^{-1}\mathbf{K}, \quad (14)
$$

or from the following property:

$$
\mathbf{T}^{-1} = -i\mathbf{K}^{-1} - \mathbf{I}. \quad (15)
$$
It is important to emphasize that the *K*-matrix and *T*-matrix formulations are fully equivalent from a formal point of view. However, in practice, since approximations are carried out in computing **K** and **T** (at the very least, truncation of these infinite matrices), the equivalence is no longer strictly valid. We will in fact show that the *K*-matrix formulation is then the most appropriate one in approximate treatments where energy conservation needs to remain strictly enforced. This will lead us naturally to a generalization of the radiative-correction procedure discussed earlier.
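The algebraic relations (11)-(15) are straightforward to verify numerically on a small truncated matrix. The sketch below uses a random Hermitian matrix as a stand-in for the K matrix of a lossless scatterer (a generic test matrix chosen for illustration, not a physical K matrix), and checks that the resulting S is unitary and T satisfies Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = (A + A.conj().T) / 2                      # Hermitian K: lossless case
I = np.eye(n)

T = 1j * K @ np.linalg.inv(I - 1j * K)        # Eq. (14)
S = I + 2 * T

# S is unitary (energy conservation for a lossless scatterer)
print(np.allclose(S.conj().T @ S, I))

# Eq. (9): T + T^dagger = -2 T^dagger T
print(np.allclose(T + T.conj().T, -2 * T.conj().T @ T))

# Eq. (12) recovers K from T, and Eq. (15) relates the inverses
K_back = -1j * T @ np.linalg.inv(I + T)
print(np.allclose(K_back, K))
print(np.allclose(np.linalg.inv(T), -1j * np.linalg.inv(K) - I))
```

Note that $\mathbf{S} = (\mathbf{I} + i\mathbf{K})(\mathbf{I} - i\mathbf{K})^{-1}$ is the Cayley transform of a Hermitian matrix, which is unitary by construction; the check above simply confirms this numerically.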

### B. Energy conservation and the K matrix

It is interesting to discuss the formal implications of the *K*-matrix formulation for energy conservation. For nonabsorbing scatterers, the unitarity of **S**, or Eq. (9) in terms of **T**, is equivalent to **K** being Hermitian: $\mathbf{K} = \mathbf{K}^{\dagger}$. For a general scatterer, the energy-conservation condition [Eq. (10)] can be shown to be equivalent to $(i\mathbf{K}^{\dagger} - i\mathbf{K})$ being a Hermitian positive-semidefinite matrix (for details see Appendix B). More formally, this condition can be restated as **K** being a dissipative matrix [41,42]. We can therefore rewrite the condition (10) in terms of **K** as

$$
\text{Lossless: } \mathbf{K} = \mathbf{K}^{\dagger},
$$

$$
\text{General: } \mathbf{K} \text{ dissipative } ([i\mathbf{K}^{\dagger} - i\mathbf{K}] \text{ HPSD}). \tag{16}
$$

These are much more natural conditions than those obtained for **T** (or for **S**). We note that $(i\mathbf{K}^{\dagger} - i\mathbf{K})$ is simply, up to a factor of $i/2$, the skew-Hermitian part of **K**, and the condition is therefore a generalization of $\text{Im}(K) \ge 0$ to the case where **K** is a matrix. We can therefore naturally identify the skew-Hermitian part of **K** as representing absorption, while its Hermitian part is linked to scattering and dispersion. There is a clear analogy with simpler response functions such as the susceptibility $\chi = \epsilon - 1$ of a material or the polarizability $\alpha$ of a scatterer, for which $\text{Im}(\alpha)$ corresponds to absorption (and is zero for lossless cases) and is subject to the condition $\text{Im}(\alpha) \ge 0$. In fact, the requirement that **K** be a dissipative matrix suggests that it is the matrix analog of a scalar linear response function [8] like the polarizability $\alpha$, and should therefore in addition satisfy causality and dispersion relations [43] akin to Kramers-Kronig relations. Such mathematical developments are, however, outside the scope of this work. We here only point out that in the case of nonabsorbing scatterers, the conditions of optical reciprocity and energy conservation on **K** become closely linked [this is because symmetry and Hermiticity become equivalent for real matrices (see Appendix D for further details)].

---PAGE_BREAK---

### C. Relation to expansion coefficients

We now discuss how the $K$ matrix relates to the field expansion coefficients (in terms of VSWFs). Recall that the $T$ matrix represents the linear connection [see Eq. (5)] between the field expansion coefficients of the scattered field $\mathbf{E}_{\text{sca}}$ (in terms of outgoing spherical waves with VSWFs $\mathbf{M}_{\nu}^{(3)}$ and $\mathbf{N}_{\nu}^{(3)}$) and those of the incident field $\mathbf{E}_{\text{inc}}$ (in terms of regular waves with VSWFs $\mathbf{M}_{\nu}^{(1)}$ and $\mathbf{N}_{\nu}^{(1)}$). Applying Eq. (13) to the vector $(\mathbf{a}, \mathbf{b})^{T}$, we deduce that

$$ \begin{pmatrix} \mathbf{p} \\ \mathbf{q} \end{pmatrix} = i\mathbf{K} \begin{pmatrix} \mathbf{a} + \mathbf{p} \\ \mathbf{b} + \mathbf{q} \end{pmatrix}. \quad (17) $$

The physical meaning of this expression becomes apparent when we expand the total field outside the particle using the basis ($\mathbf{M}_{\nu}^{(1)}, \mathbf{N}_{\nu}^{(1)}, \mathbf{M}_{\nu}^{(2)}, \mathbf{N}_{\nu}^{(2)}$), where the latter two VSWFs use the (irregular) spherical Bessel functions of the second kind (which are superpositions of outgoing and ingoing spherical waves), in contrast to the usual spherical Hankel functions of the first kind (which are outgoing spherical waves only). For this, we simply write $\mathbf{M}_{\nu}^{(3)} = \mathbf{M}_{\nu}^{(1)} + i\mathbf{M}_{\nu}^{(2)}$ (and similarly for $\mathbf{N}_{\nu}^{(3)}$), which separates the outgoing spherical wave VSWF into a sum of *regular* ($\mathbf{M}^{(1)}$) and *irregular* ($\mathbf{M}^{(2)}$) contributions, and obtain

$$ \begin{aligned} \mathbf{E}_{\text{out}}(\mathbf{r}) &= \mathbf{E}_{\text{inc}}(\mathbf{r}) + \mathbf{E}_{\text{sca}}(\mathbf{r}) \\ &= \sum_{\nu} (a_{\nu} + p_{\nu}) \mathbf{M}_{\nu}^{(1)}(k_1 \mathbf{r}) + (b_{\nu} + q_{\nu}) \mathbf{N}_{\nu}^{(1)}(k_1 \mathbf{r}) \\ &\quad + ip_{\nu} \mathbf{M}_{\nu}^{(2)}(k_1 \mathbf{r}) + iq_{\nu} \mathbf{N}_{\nu}^{(2)}(k_1 \mathbf{r}). \end{aligned} \quad (18) $$

The coefficients $(a_\nu + p_\nu, b_\nu + q_\nu)$ in Eq. (17) can then be interpreted as the sum of the incident field and the regular part of the scattered field; the latter can therefore here be viewed as the regularized self-field, i.e., the nondiverging part of the field created by the scatterer at its own position ($r=0$). The $K$ matrix then represents (up to a factor) the linear connection between the expansion coefficients of the scattered field and those of the total field (incident + self-field). It can also be viewed mathematically as the linear connection between the expansion coefficients of the irregular part of the outside field (i.e., those of $\mathbf{M}_{\nu}^{(2)}, \mathbf{N}_{\nu}^{(2)}$) and those of its regular part (i.e., those of $\mathbf{M}_{\nu}^{(1)}, \mathbf{N}_{\nu}^{(1)}$). This latter remark can be used to show that the $K$ matrix can be computed as easily as the $T$ matrix in the most common implementation of the $T$-matrix approach, the EBCM [21,22]. Explicitly, we can show that (see Appendix C for details)

$$ \mathbf{K} = \mathbf{P}\mathbf{U}^{-1}, \quad (19) $$

where we have introduced the matrix $\mathbf{U}$ such that $\mathbf{Q} = \mathbf{P} + i\mathbf{U}$, which can be computed as easily as $\mathbf{Q}$ by substituting the spherical Hankel functions of the first kind, $h_n^{(1)}(x) = j_n(x) + iy_n(x)$, with the irregular spherical Bessel functions $y_n(x)$ [note that $j_n(x)$ are the regular spherical Bessel functions]. In fact, $i\mathbf{U}$ can be viewed as the irregular part of $\mathbf{Q}$, while $\mathbf{P}$ is its regular part [19]. As a result, within the EBCM approach, **K** can be calculated as simply as **T**, if not more simply.
One of the central themes of this work is to argue that the formulation of the EM scattering problem in terms of **K** is much more than a mere change of notation and presents in some cases many advantages in terms of both the practical implementations and the physical interpretations of the method.

## IV. FORMAL DERIVATION OF THE RADIATIVE CORRECTION

### A. Radiative correction and self-field
The observations in the last section provide a link with the radiative correction [3,5] from the point of view of the self-field or self-reaction. Using again the example of a point polarizable dipole as illustration, the radiative correction can be interpreted as the effect of the self-field, i.e., the field $\mathbf{E}_{\text{SF}}$ created by the scatterer onto itself, which acts in addition to the incident field $\mathbf{E}_{\text{inc}}$. The induced dipole is therefore written self-consistently as $\mathbf{p} = \alpha_0(\mathbf{E}_{\text{inc}} + \mathbf{E}_{\text{SF}})$, where $\alpha_0$ again denotes the bare (uncorrected) polarizability. Since by linearity we have $\mathbf{E}_{\text{SF}} = G\mathbf{p}$ (where $G$ is the electric Green dyadic [15] evaluated at the dipole position, taken isotropic for simplicity), we obtain $\mathbf{p} = \alpha^{\text{RC}}\mathbf{E}_{\text{inc}}$, where the corrected polarizability $\alpha^{\text{RC}}$ satisfies
$$ (\alpha^{\text{RC}})^{-1} = \alpha_0^{-1} - G. \quad (20) $$

Classically, $G$ diverges at the dipole position (which is why regularization is necessary [4]), but its imaginary part is finite and can be computed to recover Eq. (2). The $K$-matrix formulation provides a formal generalization of this approach. As mentioned earlier, the self-field is represented by the regular part of the scattered field, i.e., the part of its VSWF expansion including regular VSWFs $\mathbf{M}_{\nu}^{(1)}$ and $\mathbf{N}_{\nu}^{(1)}$ only. Equation (17) is the generalization of $\mathbf{p} = \alpha_0(\mathbf{E}_{\text{inc}} + \mathbf{E}_{\text{SF}})$. The formulation in terms of **K** therefore automatically includes this self-reaction. $i\mathbf{K}$ is analogous to $G\alpha_0$ and represents the bare response, while **T** is analogous to $G\alpha^{\text{RC}}$ and corresponds to the self-reaction-corrected response, i.e., it includes the radiative correction. This analogy is further reinforced by comparing Eqs. (15) and (20). This crucial point can be further developed to provide a rigorous justification of the empirical radiative correction to the dipolar polarizability using the $K$ matrix, and to extend this concept to more general cases.
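For the point electric dipole, keeping only the finite imaginary part of the Green dyadic at the origin, $\text{Im}(G) = k_1^3/(6\pi\epsilon_0\epsilon_1)$, makes Eq. (20) reproduce Eqs. (2) and (3) exactly. The minimal sketch below checks this algebraic equivalence; the value of $\alpha_0$ is an arbitrary illustrative number, not taken from the paper.

```python
import numpy as np

eps0 = 8.8541878128e-12          # vacuum permittivity (SI units)
eps1 = 1.0                       # host medium (illustrative)
lam = 500e-9                     # wavelength (m), illustrative
k1 = 2 * np.pi * np.sqrt(eps1) / lam
alpha0 = (3.0 + 0.4j) * 1e-32    # bare polarizability (arbitrary illustrative value)

# Eq. (20) with the real (divergent) part of G dropped after regularization,
# so that G -> i * Im(G) = i * k1^3 / (6 pi eps0 eps1)
G_reg = 1j * k1**3 / (6 * np.pi * eps0 * eps1)
alphaRC_eq20 = 1 / (1 / alpha0 - G_reg)

# Eq. (3) directly
alphaRC_eq3 = alpha0 / (1 - 1j * k1**3 / (6 * np.pi * eps0 * eps1) * alpha0)

print(np.isclose(alphaRC_eq20, alphaRC_eq3))   # the two routes agree
```

Dropping $\text{Re}(G)$ here corresponds to the usual assumption $\text{Re}(1/\alpha^{\text{RC}}) = \text{Re}(1/\alpha_0)$ discussed in the Introduction; in a full treatment the regularized real part would renormalize $\alpha_0$.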
### B. Energy conservation in approximate treatments
In practice, a number of approximations may be made when computing the $T$ matrix; at the very least, truncation is necessary. The energy-conservation conditions [Eq. (10)] may no longer be satisfied by the computed $T$ matrix, which is clearly undesirable (it could, for example, result in a predicted negative absorption cross section). The optical reciprocity can be enforced *a posteriori* by appropriate symmetrization, but it is more difficult to enforce energy conservation. In contrast, the equivalent conditions on **K** [Eq. (16)] can more easily be enforced even when approximations are carried out (they are, for example, conserved upon truncation of the matrix).
As an illustration, the **T** or **K** matrices may be approximated by expansions of their matrix elements [7], for example, with respect to the size parameter (the lowest order being akin to the quasistatic or Rayleigh approximation), or with respect to the refractive index (more precisely, $n-1$) for optically soft particles in the Born or Gans approximations [7]. In such instances, the energy condition on **T** may no longer be valid, notably because it mixes linear, e.g., **T**, and nonlinear terms, e.g., $\mathbf{T}^\dagger\mathbf{T}$. On the other hand, the energy condition on **K** only contains linear terms in **K** and can be preserved. For example, the approximated **K** matrix will remain Hermitian for nonabsorbing scatterers. We can therefore automatically enforce Eq. (10) on **T** by deriving **T** from the approximated **K** using Eqs. (14) or (15). The reformulation of the scattering problem in terms of **K** therefore provides a natural method to enforce energy conservation in approximate treatments within the *T*-matrix framework, and this can (for example) be applied to the problem of radiative correction in EM scattering.
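As a minimal illustration, consider a single scalar channel (one diagonal element of the matrices) of a lossless scatterer, for which the energy condition on **T** reads $\operatorname{Re}(T) + |T|^2 = 0$ and that on **K** reduces to $K$ real. The sketch below assumes the scalar reduction $T = iK/(1 - iK)$ of the **K**-to-**T** relation, consistent with Eqs. (21) and (22); the lowest-order $T$ violates the condition at quadratic order, while any real approximation of $K$ yields a $T$ that satisfies it exactly:

```python
# Scalar illustration (single multipole channel, lossless scatterer) of why
# approximating T directly breaks energy conservation while approximating K
# does not. The map T = iK/(1 - iK) is the scalar reduction assumed here,
# consistent with Eqs. (21) and (22); K real is the scalar analog of K Hermitian.
K = 0.2                           # approximate (e.g. truncated) K: still real
T_approx = 1j * K                 # lowest-order T, from Eq. (22)
T_from_K = 1j * K / (1 - 1j * K)  # T derived from the approximate K

def energy_defect(T):
    # Scalar form of Eq. (10): must vanish for a lossless scatterer
    return T.real + abs(T) ** 2

assert energy_defect(T_approx) > 1e-3        # violated at O(K^2)
assert abs(energy_defect(T_from_K)) < 1e-12  # satisfied exactly
```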
### C. General treatment of the radiative correction to the quasistatic approximations
To illustrate further the procedure for deriving the radiative correction within the *K*-matrix approach, we focus specifically on the important case of the radiative correction to the electrostatics and magnetostatics (or quasistatic) approximations. These approximations can be obtained from the general solution by taking the lowest nonzero order terms of the long-wavelength limit as $k_1 \to 0$. We will denote $\mathbf{T}^{(0)}$ and $\mathbf{K}^{(0)}$ the corresponding limit of the *T* and *K* matrices. These can in general be obtained from a direct solution of the electrostatics or magnetostatics problem, which is typically much easier than the full wave solution.
As explained already, in general, the approximate *T* matrix $\mathbf{T}^{(0)}$ does not satisfy strictly the *T*-matrix energy-conservation condition [Eq. (10)]; it only satisfies it *approximately* to the accuracy to which it was calculated, i.e., in the long-wavelength limit as $k_1 \to 0$. In contrast, it is relatively straightforward to ensure that the approximate $\mathbf{K}$ matrix $\mathbf{K}^{(0)}$ satisfies exactly the $\mathbf{K}$-matrix energy-conservation condition [Eq. (16)]. If we therefore compute the approximate *T* matrix from $\mathbf{K}^{(0)}$, using for example Eq. (15), the resulting *T* matrix will automatically satisfy the energy-conservation condition and can be identified with the radiatively corrected *T* matrix $\mathbf{T}^{\mathrm{RC}}$, i.e.,
$$ (\mathbf{T}^{\mathrm{RC}})^{-1} = -i(\mathbf{K}^{(0)})^{-1} - \mathbf{I}. \quad (21) $$
These arguments provide a simple procedure to find the expression for the radiative correction for a given problem:
(i) Solve the electrostatics and/or magnetostatics problem and find the corresponding $\mathbf{K}^{(0)}$, which should satisfy the energy-conservation condition for $\mathbf{K}$ [Eq. (16)].
(ii) Apply Eq. (21) to find the *T* matrix with radiative correction.
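In a single scalar channel (e.g., one diagonal element of the matrices), the two steps reduce to a one-line computation, sketched below with a hypothetical quasistatic value for the $K$ element:

```python
# The two-step procedure above in a single scalar channel: step (i) supplies
# a quasistatic K element (here a hypothetical real value, i.e. lossless);
# step (ii) applies the scalar form of Eq. (21).
def t_rc_from_k0(k0):
    """Scalar Eq. (21): 1/T_RC = -i/K0 - 1."""
    return 1 / (-1j / k0 - 1)

k0 = 0.02                 # step (i): quasistatic K (hypothetical value)
t_rc = t_rc_from_k0(k0)   # step (ii): radiatively corrected T element

# The result satisfies the scalar energy-conservation condition of Eq. (10):
assert abs(t_rc.real + abs(t_rc) ** 2) < 1e-12
```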
We note that for point scatterers, the first step is in fact implicit in the definition of their EM response, for example, $\mathbf{p} = \alpha_0 \mathbf{E}$ for an electric dipole. We also note that the matrix elements of $\mathbf{T}^{(0)}$ and $\mathbf{K}^{(0)}$ are of order $k_1^3$ or higher. As a result, the product $\mathbf{K}^{(0)}\mathbf{T}^{(0)}$ is of order at least $k_1^6$, and from Eq. (13), we therefore have the following approximation for the matrix elements to lowest nonzero order:
$$ \mathbf{K}^{(0)} \simeq -i\mathbf{T}^{(0)}, \quad (22) $$
which can be used in many cases to find $\mathbf{K}^{(0)}$ with standard methods and results in
$$ (\mathbf{T}^{\mathrm{RC}})^{-1} = (\mathbf{T}^{(0)})^{-1} - \mathbf{I}. \quad (23) $$
This method can be used to generalize the concept of radiative correction to any type of scatterer, point-like or a body of arbitrary shape, including arbitrary multipole orders and interactions between multipoles. Specific examples that have recently been studied by other means include point multipoles (quadrupole, etc.) [44], bianisotropic lossless point dipole scatterers [45], and point magnetic and electric dipoles with magnetoelectric coupling [17]. The expressions obtained in all these studies are in fact special cases of Eq. (21), as we shall show explicitly in the next section. It is also interesting to note that Eqs. (15) and (21) can equally be applied to study the radiative correction to higher-order expansions of the polarizabilities in terms of $k_1$ (as illustrated in the simple case of spheres in the next section) or to expansions in terms of other parameters (for example, $n-1$ for optically soft particles).
## V. APPLICATION TO SPECIFIC CASES
We now study in more detail how the arguments presented so far can be applied to specific cases of interest, some of which have been studied in the past using mostly heuristic arguments. As we shall see, all examples of radiative corrections studied so far in the literature are special cases of Eqs. (15) or (21). The only difficulty is to link the *T*- and *K*-matrix formalisms to more natural physical representations in terms of, for example, polarizability, multipole moments, and multipole fields. We therefore first show explicitly that this link is relatively straightforward; in essence, it is simply a matter of definition and units. We then focus in the rest of this section on specific examples.
### A. Physical interpretations of the vector spherical wave functions
The *T* and *K* matrices provide relations between coefficients of the expansions of the fields in vector spherical wave functions (VSWFs). In practice, however, the excitation and response of the system are typically expressed in a more natural form. For example, the excitation may be in the form of a constant external electric field (in electrostatics) or the field of a plane wave. The response is often modeled in the form of an induced dipole (or multipole), the electromagnetic field of which implicitly represents the scattered field. For the applicability of the formalism, it is therefore necessary to link these physical excitations and responses to their VSWF expansions. The VSWFs of the scattered field ($M_{nm}^{(3)}$ and $N_{nm}^{(3)}$) are outgoing spherical waves and can be readily identified [3] with multipolar fields of order *n* (total angular momentum) and angular momentum number *m*. $N_{nm}^{(3)}$ correspond to electric multipoles (also called transverse magnetic [3]), while $M_{nm}^{(3)}$ correspond to magnetic multipoles (also called transverse electric). The expansion coefficients of the scattered field ($p_{nm}, q_{nm}$) are therefore proportional to the magnetic and electric multipole moments of the scattered field (in a spherical tensor representation). In a similar fashion, the expansion of the incident field in terms of a series of regular VSWFs ($M_{nm}^{(1)}$, $N_{nm}^{(1)}$) can be viewed as the multipolar decomposition of the incident field, also in a spherical tensor representation. For an arbitrary incident plane wave, such an expansion can be computed analytically [19] and it is in fact one of the necessary steps in Mie theory [7].
In the context of the radiative correction to the electrostatics approximation, it is interesting to write these expressions explicitly. If we consider a general external electric field (with sources at infinity), defined by an electric potential $\phi_{\text{inc}}(\mathbf{r})$ that is a solution of Laplace's equation, we may expand it as (the negative sign is chosen for convenience)
$$ \phi_{\text{inc}}(\mathbf{r}) = -\sum_{n,m} \tilde{b}_{nm} r^n Y_{nm}(\theta, \phi), \quad (24) $$
where $Y_{nm}(\theta, \phi)$ are normalized scalar spherical harmonics (see Appendix A). The electrostatic response of a scatterer (point or body) to this external field can be written as a standard multipole expansion [3] of the potential created outside the scatterer as
$$ \phi_{\text{sca}}(\mathbf{r}) = \frac{1}{4\pi\epsilon_0\epsilon_1} \sum_{n,m} \tilde{q}_{nm} \frac{Y_{nm}(\theta, \phi)}{r^{n+1}}. \quad (25) $$
The electric fields can be obtained from the standard relation $\mathbf{E} = -\nabla\phi$. In the general case, the induced multipole moments (represented as a vector $\tilde{\mathbf{q}}$) are linearly related to the excitation coefficients $\tilde{\mathbf{b}}$ by
$$ \tilde{\mathbf{q}} = \alpha \tilde{\mathbf{b}}. \quad (26) $$
$\alpha$ is a generalized multipolar static polarizability tensor in the spherical tensor representation. We note that different proportionality constants (potentially depending on $n,m$) could be introduced in the multipole expansions above and would affect the definition of $\alpha$. In fact, the T-matrix formulation in the electrostatics limit ($k_1 \to 0$) is an example of such an alternative definition. More explicitly, we can obtain the electrostatics limit for the normalized VSWFs (electric multipoles only) as
$$ \begin{aligned} \tilde{\mathcal{N}}_{nm}^{(1)} &= \frac{k_1^{n-1}}{(2n+1)!!} \sqrt{\frac{n+1}{n}} \nabla(r^n Y_{nm}), \\ \tilde{\mathcal{N}}_{nm}^{(3)} &= \frac{i(2n-1)!!}{k_1^{n+1}} \sqrt{\frac{n}{n+1}} \nabla \left( \frac{Y_{nm}}{r^{n+1}} \right). \end{aligned} \quad (27) $$
The electrostatics problem can therefore be recast within the T-matrix formulation as
$$ \begin{aligned} \mathbf{E}_{\text{inc}} &= \sum_{n,m} b_{nm} \tilde{\mathcal{N}}_{nm}^{(1)}, \\ \mathbf{E}_{\text{sca}} &= \sum_{n,m} q_{nm} \tilde{\mathcal{N}}_{nm}^{(3)}, \end{aligned} \quad (28) $$
with
$$ \mathbf{q} = \tilde{\mathbf{T}}^{22}\mathbf{b}, \quad (29) $$
where $\tilde{\mathbf{T}}^{22}$ is the electrostatics limit of $\mathbf{T}^{22}$ (it is the bottom right block of $\mathbf{T}^{(0)}$). Note that $\mathbf{T}^{11}$, $\mathbf{T}^{12}$, and $\mathbf{T}^{21}$ involve magnetic multipoles and are zero in an electrostatics problem.
Combining all this, we obtain a relation between the matrix elements of the multipolar static polarizability tensor and the electrostatics limit of the T matrix
$$ \tilde{T}_{nm}^{22} = \frac{ik_1^{2n+1}}{(2n-1)!!(2n+1)!!} \frac{n+1}{n} \frac{1}{4\pi\epsilon_0\epsilon_1} \alpha_{nm}. \quad (30) $$
The matrix $\tilde{\mathbf{T}}^{22}$ is therefore simply, up to some proportionality factors, the multipolar static polarizability tensor. The radiative correction to the T matrix [Eq. (21)] therefore also applies to any definition of the multipolar polarizability tensors except for the proportionality constants (and correctly keeping track of these proportionality constants is the primary difficulty in writing it out explicitly). We will give specific examples in the following.
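As a quick consistency check on the prefactor of Eq. (30), the sketch below (units factored out, only the numerical coefficient checked) verifies that for $n = 1$ the dimensionless coefficient $\frac{(n+1)/n}{(2n-1)!!(2n+1)!!}$ equals $2/3$, so that Eq. (30) reduces to the familiar dipole form $\frac{ik_1^3}{6\pi\epsilon_0\epsilon_1}\alpha$ used in the next section:

```python
# Consistency check on the prefactor of Eq. (30): for n = 1 it should reduce
# the relation to T = i k1^3 alpha / (6 pi eps0 eps1). Only the dimensionless
# coefficient is checked here; units and k1 powers are factored out.
import math

def double_factorial(n):
    # (2n-1)!! and (2n+1)!! for small positive odd n; 0!! = (-1)!! = 1
    return math.prod(range(n, 0, -2)) if n > 0 else 1

def coeff(n):
    # coefficient multiplying i k1^{2n+1} alpha / (4 pi eps0 eps1) in Eq. (30)
    return (n + 1) / (n * double_factorial(2 * n - 1) * double_factorial(2 * n + 1))

assert coeff(1) == 2 / 3  # since 1!! * 3!! = 3 and (n+1)/n = 2
assert abs(coeff(1) / (4 * math.pi) - 1 / (6 * math.pi)) < 1e-15
```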
Finally, a similar result can be obtained for the magnetostatics case in terms of the multipolar magnetic polarizability tensor $\beta_{nm}$, by substituting $\alpha_{nm} = \epsilon_0\epsilon_1\beta_{nm}$:
$$ \tilde{T}_{nm}^{11} = \frac{ik_1^{2n+1}}{(2n-1)!!(2n+1)!!} \frac{n+1}{n} \frac{1}{4\pi} \beta_{nm}. \quad (31) $$
### B. Electric and magnetic point dipoles
The simplest example of radiative correction is that of a point polarizable dipole with polarizability tensor $\alpha_0$, typically obtained from a quasistatic (electrostatics) treatment. The response of such a dipole to an incident field $\mathbf{E}_{\text{inc}}$ at its position is defined by the induced dipole moment: $\mathbf{p} = \alpha_0 \mathbf{E}_{\text{inc}}$. The T-matrix is here simply proportional to the polarizability tensor as given by Eq. (30) with $n=1$:
$$ \mathbf{T}^{22} = \frac{ik_1^3}{6\pi\epsilon_0\epsilon_1}\boldsymbol{\alpha}, \quad (32) $$
which is valid both for the approximate T-matrix $\mathbf{T}^{(0)}$ in terms of $\alpha_0$ and the corrected T-matrix $\mathbf{T}^{\text{RC}}$ in terms of $\alpha^{\text{RC}}$. The other blocks of the T-matrix are zero in this case. Moreover, $\mathbf{T}^{(0)}$ is of order $k_1^3$ and we therefore have $\mathbf{K}^{(0)} = -i\mathbf{T}^{(0)}$ [Eq. (22)]. We note that for any physical polarizability tensor, $\alpha_0$ should be Hermitian if there is no absorption and a dissipative matrix if there is absorption [in a diagonal basis with eigenvalues $\alpha_i$, this is equivalent to $\alpha_i$ real, or $\operatorname{Im}(\alpha_i) > 0$, respectively]. $\mathbf{K}^{(0)}$ therefore satisfies the energy-conservation conditions [Eq. (16)]. We can therefore rewrite Eq. (21) in terms of $\boldsymbol{\alpha}$ to obtain the expression for the radiative correction as
$$ (\boldsymbol{\alpha}^{\text{RC}})^{-1} = (\boldsymbol{\alpha}_0)^{-1} - i \frac{k_1^3}{6\pi\epsilon_0\epsilon_1} \mathbf{I}, \quad (33) $$
which is the same as Eq. (2) previously obtained heuristically for an isotropic polarizability tensor. The expression above in fact extends it to the case of a general polarizability tensor. Moreover, the argument remains valid for a body scatterer, when considering only the electric dipolar response. This, for example, justifies the empirical use of such a radiative correction for spheroidal particles [10,46]. Note that in general $\alpha_0$ depends on the frequency $\omega$, and the causality condition [8] for $\boldsymbol{\alpha}^{\text{RC}}$ cannot be easily assessed by inspection of Eq. (33) only. We speculate that, based on the arguments in Ref. [43], the fact that **K** and its approximation are dissipative will automatically enforce causality for $\boldsymbol{\alpha}^{\text{RC}}$, but further work (outside the scope of this paper) is necessary to investigate such aspects.
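A direct numerical check of Eq. (33) in the scalar (isotropic) case: for a lossless scatterer ($\alpha_0$ real), the corrected polarizability must satisfy the optical theorem, $\operatorname{Im}(\alpha^{\text{RC}}) = \frac{k_1^3}{6\pi\epsilon_0\epsilon_1}|\alpha^{\text{RC}}|^2$, so that extinction equals scattering. The values below are illustrative, in units where $\epsilon_0\epsilon_1 = 1$:

```python
# Scalar check of Eq. (33): for a lossless scatterer (alpha_0 real) the
# corrected polarizability satisfies the optical theorem,
# Im(alpha_RC) = k1^3 |alpha_RC|^2 / (6 pi), in units where eps0*eps1 = 1.
# The values of k1 and alpha_0 are illustrative.
import math

k1 = 0.8
c = k1 ** 3 / (6 * math.pi)            # radiative-correction constant
alpha_0 = 5.0                          # real => lossless (hypothetical value)
alpha_rc = 1 / (1 / alpha_0 - 1j * c)  # Eq. (33), scalar form
assert abs(alpha_rc.imag - c * abs(alpha_rc) ** 2) < 1e-12
```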
Finally, for a magnetic point dipole, whose response to an incident magnetic field $\mathbf{H}_{\text{inc}}$ is an induced magnetic dipole moment $\mathbf{m} = \boldsymbol{\beta}_0\mathbf{H}_{\text{inc}}$, we would obtain, following a similar reasoning,
$$ (\boldsymbol{\beta}^{\text{RC}})^{-1} = (\boldsymbol{\beta}_0)^{-1} - i \frac{k_1^3}{6\pi} \mathbf{I}. \quad (34) $$
### C. Electric point quadrupole and higher-order multipoles
It is straightforward to extend the arguments above to higher-order multipoles. For electric multipoles, we can use Eq. (30), which relates the multipolar polarizability in the spherical tensor representation to the *T* matrix. Applying the general formula (21), we obtain the radiative correction for an electric multipole of order *n* as

$$ (\boldsymbol{\alpha}^{\mathrm{RC}})^{-1} = (\boldsymbol{\alpha}_0)^{-1} - \frac{i k_1^{2n+1}}{4\pi \epsilon_0 \epsilon_1 (2n-1)!!(2n+1)!!} \frac{n+1}{n} \mathbf{I}, \quad (35) $$
which is the same expression as proposed in Ref. [44] for spheres. The expression above in fact extends it to the case of a general polarizability tensor (for example, anisotropic).
### D. Point dipoles with magneto-optic coupling
Sersic *et al.* have recently studied in detail the case of a magnetoelectric point dipole [17] in the context of metamaterials and derived an expression for the radiative correction [Eq. (18) in Ref. [17]] using empirical arguments based on the optical theorem for this system [45]. This expression is immediately similar to our general formula [Eq. (21)], and the equivalence between the two can be demonstrated provided the prefactors and units are accounted for carefully, as we now show explicitly.
Following Ref. [17], the response of the scatterer to an incident electromagnetic field $(\mathbf{E}_{\text{inc}}, \mathbf{H}_{\text{inc}})$ is defined by the induced electric ($\mathbf{p}$) and magnetic ($\mathbf{m}$) dipole moments obtained from the most general linear relation

$$ \begin{pmatrix} \mathbf{p} \\ \mathbf{m} \end{pmatrix} = \boldsymbol{\alpha} \begin{pmatrix} \mathbf{E}_{\text{inc}} \\ \mathbf{H}_{\text{inc}} \end{pmatrix} = \begin{pmatrix} \boldsymbol{\alpha}_{EE} & \boldsymbol{\alpha}_{EH} \\ \boldsymbol{\alpha}_{HE} & \boldsymbol{\alpha}_{HH} \end{pmatrix} \begin{pmatrix} \mathbf{E}_{\text{inc}} \\ \mathbf{H}_{\text{inc}} \end{pmatrix}, \quad (36) $$
which is here written in rationalized units as in Ref. [17]. $\boldsymbol{\alpha}$ is a $6 \times 6$ polarizability tensor, compactly written in block-matrix notation.
To apply our formalism, we rewrite this definition in SI units as follows:

$$ \begin{pmatrix} \mathbf{p}^{\text{SI}} \\ \mathbf{m}^{\text{SI}} \end{pmatrix} = \boldsymbol{\alpha}^{\text{SI}} \begin{pmatrix} \mathbf{E}_{\text{inc}}^{\text{SI}} \\ \mathbf{H}_{\text{inc}}^{\text{SI}} \end{pmatrix}, \quad (37) $$
with [47]
$$ \boldsymbol{\alpha}^{\text{SI}} = 4\pi \epsilon_0 \epsilon_1 \begin{pmatrix} \boldsymbol{\alpha}_{EE} & Z\boldsymbol{\alpha}_{EH} \\ \frac{1}{\epsilon_0 \epsilon_1 Z} \boldsymbol{\alpha}_{HE} & \frac{1}{\epsilon_0 \epsilon_1} \boldsymbol{\alpha}_{HH} \end{pmatrix}, \quad (38) $$
where $Z = \sqrt{\mu_0 / (\epsilon_0 \epsilon_1)} = 1 / (\epsilon_0 c\sqrt{\epsilon_1})$ is the impedance of the embedding medium (with relative dielectric constant $\epsilon_1$).
Using the arguments of Sec. V A, we may express each block matrix of the polarizability tensor in terms of a block matrix [defined in Eq. (6)] of the *T* matrix for *n* = 1 (dipole terms). For example, using Eqs. (30) and (31) with *n* = 1, we obtain
$$ \begin{aligned} \mathbf{T}_{n=1}^{22} &= \frac{i k_1^3}{6\pi \epsilon_0 \epsilon_1} \boldsymbol{\alpha}_{EE}^{\text{SI}} = \frac{2}{3} i k_1^3 \boldsymbol{\alpha}_{EE}, \\ \mathbf{T}_{n=1}^{11} &= \frac{i k_1^3}{6\pi} \boldsymbol{\alpha}_{HH}^{\text{SI}} = \frac{2}{3} i k_1^3 \boldsymbol{\alpha}_{HH}. \end{aligned} \quad (39) $$
For magnetoelectric coupling, the arguments of Sec. V A can be applied by noticing that to change from $\mathbf{E}$ to $\mathbf{H}$ one may make the substitutions $\mathbf{b} \rightarrow (-i/Z)\mathbf{a}$ for the incident field and $\mathbf{q} \rightarrow (-i/Z)\mathbf{p}$ for the scattered field. We then get
$$ \begin{aligned} \mathbf{T}_{n=1}^{21} &= \frac{i k_1^3}{6\pi \epsilon_0 \epsilon_1} \frac{-i}{Z} \boldsymbol{\alpha}_{EH}^{\text{SI}} = \frac{2}{3} i k_1^3 (-i \boldsymbol{\alpha}_{EH}), \\ \mathbf{T}_{n=1}^{12} &= \frac{i k_1^3}{6\pi} (i Z) \boldsymbol{\alpha}_{HE}^{\text{SI}} = \frac{2}{3} i k_1^3 (i \boldsymbol{\alpha}_{HE}). \end{aligned} \quad (40) $$
These results can be written in a more concise form as

$$ \mathbf{T} = \frac{2}{3} i k_1^3 \begin{pmatrix} \boldsymbol{\alpha}_{HH} & i\boldsymbol{\alpha}_{HE} \\ -i\boldsymbol{\alpha}_{EH} & \boldsymbol{\alpha}_{EE} \end{pmatrix}. \quad (41) $$
Following our procedure presented in Sec. IV C, we therefore obtain from Eq. (21) the radiative-correction formula for such a magnetoelectric scatterer as
$$ \begin{pmatrix} \boldsymbol{\alpha}_{HH}^{\mathrm{RC}} & i\boldsymbol{\alpha}_{HE}^{\mathrm{RC}} \\ -i\boldsymbol{\alpha}_{EH}^{\mathrm{RC}} & \boldsymbol{\alpha}_{EE}^{\mathrm{RC}} \end{pmatrix}^{-1} = \begin{pmatrix} \boldsymbol{\alpha}_{HH}^{0} & i\boldsymbol{\alpha}_{HE}^{0} \\ -i\boldsymbol{\alpha}_{EH}^{0} & \boldsymbol{\alpha}_{EE}^{0} \end{pmatrix}^{-1} - \frac{2}{3}i k_{1}^{3}\mathbf{I}. \quad (42) $$
To recast this expression in terms of the original definition of $\boldsymbol{\alpha}$, it is necessary to change basis. One may introduce the unitary matrix $\mathbf{W} = \left(\begin{smallmatrix} 0 & i \\ i & 0 \end{smallmatrix}\right)$ (in block notation); by left-multiplying by $\mathbf{W}$ and right-multiplying by $\mathbf{W}^{\dagger}$, we obtain
$$ (\boldsymbol{\alpha}^{\text{RC}})^{-1} = (\boldsymbol{\alpha}_0)^{-1} - \frac{2}{3} i k_1^3 \mathbf{I}, \quad (43) $$
which is the same expression as previously obtained empirically [Eq. (18) in Ref. [17]].
### E. Beyond the electrostatics approximation
As mentioned earlier, our procedure for the radiative correction can be applied to other types of approximations, for example, beyond the electrostatics approximation. As an illustration, we will here study in detail the case of a spherical scatterer, for which exact results can be obtained from Mie theory [7]. In this case, the *P*, *Q*, *U*, *K*, and *T* matrices are diagonal and independent of *m*, and the only nonzero terms are
$$ \begin{aligned} P_{nn}^{11} &= A_n [s \psi_n(x) \psi'_n(sx) - \psi'_n(x) \psi_n(sx)], \\ Q_{nn}^{11} &= A_n [s \xi_n(x) \psi'_n(sx) - \xi'_n(x) \psi_n(sx)], \\ U_{nn}^{11} &= A_n [s \chi_n(x) \psi'_n(sx) - \chi'_n(x) \psi_n(sx)], \\ P_{nn}^{22} &= A_n [\psi_n(x) \psi'_n(sx) - s \psi'_n(x) \psi_n(sx)], \\ Q_{nn}^{22} &= A_n [\xi_n(x) \psi'_n(sx) - s \xi'_n(x) \psi_n(sx)], \\ U_{nn}^{22} &= A_n [\chi_n(x) \psi'_n(sx) - s \chi'_n(x) \psi_n(sx)], \\ T_{nn}^{ii} &= -P_{nn}^{ii}/Q_{nn}^{ii}, \qquad K_{nn}^{ii} = P_{nn}^{ii}/U_{nn}^{ii}, \end{aligned} \quad (44) $$
where
$$A_n = i \frac{n(n+1)}{s}, \quad (45)$$
$x = k_1 a$ (with $a$ the radius of the sphere), and $s = \sqrt{\epsilon_2}/\sqrt{\epsilon_1}$ is the relative refractive index. The functions $\psi_n(x)$, $\chi_n(x)$, and $\xi_n(x)$ are the Riccati-Bessel functions [7] defined in terms of the spherical Bessel and Hankel functions as
$$\begin{aligned} \psi_n(x) &= xj_n(x), & \chi_n(x) &= xy_n(x), \\ \xi_n(x) &= xh_n^{(1)}(x) = \psi_n(x) + i\chi_n(x). \end{aligned} \quad (46)$$
This is consistent with standard approaches to Mie theory since
$$p_{nm} = \Gamma_n a_{nm}, \quad q_{nm} = \Delta_n b_{nm}, \quad (47)$$
where $\Gamma_n = T_{nn}^{11} = -P_{nn}^{11}/Q_{nn}^{11}$ and $\Delta_n = T_{nn}^{22} = -P_{nn}^{22}/Q_{nn}^{22}$ are the magnetic and electric Mie susceptibilities
$$\begin{aligned} \Gamma_n &= -\frac{s \psi_n(x) \psi'_n(sx) - \psi'_n(x) \psi_n(sx)}{s \xi_n(x) \psi'_n(sx) - \xi'_n(x) \psi_n(sx)}, \\ \Delta_n &= -\frac{\psi_n(x) \psi'_n(sx) - s \psi'_n(x) \psi_n(sx)}{\xi_n(x) \psi'_n(sx) - s \xi'_n(x) \psi_n(sx)}. \end{aligned} \quad (48)$$
The extinction, scattering, and absorption coefficients (i.e., cross sections normalized to the geometrical cross section) can be obtained from these as
$$\begin{aligned} Q_{\text{ext}} &= -\frac{2}{x^2} \sum_n (2n + 1)[\text{Re}(\Gamma_n) + \text{Re}(\Delta_n)], \\ Q_{\text{sca}} &= \frac{2}{x^2} \sum_n (2n + 1)[|\Gamma_n|^2 + |\Delta_n|^2], \\ Q_{\text{abs}} &= Q_{\text{ext}} - Q_{\text{sca}} = -\frac{2}{x^2} \sum_n (2n + 1) \\ &\qquad \times [|\Gamma_n|^2 \text{Re}(1 + \Gamma_n^{-1}) + |\Delta_n|^2 \text{Re}(1 + \Delta_n^{-1})]. \end{aligned} \quad (49)$$
Energy conservation $Q_{\text{ext}} = Q_{\text{sca}} + Q_{\text{abs}}$ with $Q_{\text{abs}} \ge 0$ then requires that [40]
$$1 + \operatorname{Re}(\Delta_n^{-1}) \le 0, \quad (50)$$
with the equality holding for nonabsorbing spheres (for which $s$ is real). Note that this condition is simply Eq. (10) in the special case of spherical scatterers. In terms of the $K$ matrix, it takes a simpler form $\operatorname{Im}(K_{nn}^{ii}) \ge 0$, where $K_{nn}^{ii}$ is up to a sign the same as $\Gamma_n$ and $\Delta_n$ upon substitution of $\xi_n(x)$ by $\chi_n(x)$.
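For $n = 1$, these expressions can be evaluated with elementary functions only, since $\psi_1(x) = \sin x/x - \cos x$ and $\chi_1(x) = -\cos x/x - \sin x$ follow from Eq. (46), and the common prefactor $A_n$ cancels in the ratios. A minimal sketch computing $\Delta_1$ and the corresponding $K$ element, and verifying both forms of the energy-conservation condition for a nonabsorbing sphere:

```python
# Pure-Python sketch of the n = 1 Mie susceptibility Delta_1 [Eq. (48)] and
# the corresponding K element K = P/U (Sec. V E), using the closed forms
# psi_1(x) = sin(x)/x - cos(x), chi_1(x) = -cos(x)/x - sin(x) of the
# Riccati-Bessel functions [Eq. (46)]. The prefactor A_n cancels in ratios.
import cmath

def psi1(x):  return cmath.sin(x) / x - cmath.cos(x)
def psi1p(x): return cmath.cos(x) / x - cmath.sin(x) / x**2 + cmath.sin(x)
def chi1(x):  return -cmath.cos(x) / x - cmath.sin(x)
def chi1p(x): return cmath.sin(x) / x + cmath.cos(x) / x**2 - cmath.cos(x)
def xi1(x):   return psi1(x) + 1j * chi1(x)
def xi1p(x):  return psi1p(x) + 1j * chi1p(x)

def P22(x, s):
    return psi1(x) * psi1p(s * x) - s * psi1p(x) * psi1(s * x)

def delta1(x, s):  # electric dipole Mie susceptibility, Eq. (48)
    return -P22(x, s) / (xi1(x) * psi1p(s * x) - s * xi1p(x) * psi1(s * x))

def K1(x, s):      # electric dipole K element, K = P/U
    return P22(x, s) / (chi1(x) * psi1p(s * x) - s * chi1p(x) * psi1(s * x))

x, s = 0.5, 1.5                                   # nonabsorbing sphere: s real
assert abs(1 + (1 / delta1(x, s)).real) < 1e-10   # Eq. (50), with equality
assert abs(K1(x, s).imag) < 1e-10                 # real (Hermitian) K, Eq. (16)
```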
There have been many attempts to find suitable small-argument expansions of the Mie susceptibilities [5,9,10,48,49], notably in the context of plasmonics for the study of localized surface plasmon resonances (LSPR) in metallic nanospheres, where $|s|$ may be relatively large. The main dipolar LSPR is determined by $\Delta_1$ and its resonant character is evident in the wavelength dependence of the far-field properties, which are then given by
$$Q_{\text{ext}} \approx -\frac{6}{x^2} \text{Re}(\Delta_1), \quad Q_{\text{sca}} \approx \frac{6}{x^2} |\Delta_1|^2. \quad (51)$$
The lowest-order approximation to $\Delta_1$ is
$$\Delta_1^{(0)} = \frac{2i}{3} \frac{s^2 - 1}{s^2 + 2} x^3, \quad (52)$$
which is simply equivalent to the electrostatics approximation [7].
However, as illustrated in Fig. 1 for a silver nanosphere immersed in water, this approximation is only valid up to very small sizes of $\approx 5$ nm for metallic spheres, and in fact predicts a negative absorption as $x$ increases. Moreover, since the electrostatics approximation is size independent, it does not predict the red-shift and broadening of the LSPR as the size increases. The radiative correction to this dipolar polarizability as given in Eq. (2) was in fact originally introduced empirically to remedy this problem [5]. It can be simply expressed as
$$(\Delta_1^{(0)-RC})^{-1} = (\Delta_1^{(0)})^{-1} - 1 \quad (53)$$
and is another example of an application of our general formula (21). However, as shown in Fig. 1, the improvement is marginal and not quantitative. It corrects the problem of negative absorption as expected, and predicts the strength of the resonance and its broadening, but not the size-induced red-shift.
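This behavior is easy to verify numerically: for a nonabsorbing sphere ($s$ real), $\Delta_1^{(0)}$ is purely imaginary, so the per-multipole absorption term $-|\Delta|^2[1 + \operatorname{Re}(\Delta^{-1})]$ of Eq. (49) is strictly negative in the electrostatics approximation, while the corrected susceptibility of Eq. (53) gives exactly zero absorption. A sketch with illustrative values of $x$ and $s$:

```python
# Lossless illustration of how the radiative correction of Eq. (53) repairs
# the unphysical negative absorption of the electrostatics approximation.
# The per-multipole absorption term of Eq. (49) is -|D|^2 (1 + Re(1/D)).
def delta1_esa(x, s):
    return (2j / 3) * (s**2 - 1) / (s**2 + 2) * x**3   # Eq. (52)

def absorb(d):
    return -(1 + (1 / d).real) * abs(d) ** 2

x, s = 0.3, 1.5                   # nonabsorbing sphere: s real
d0 = delta1_esa(x, s)
d_rc = 1 / (1 / d0 - 1)           # Eq. (53)
assert absorb(d0) < 0             # ESA: negative absorption (unphysical)
assert abs(absorb(d_rc)) < 1e-12  # RC: exactly zero absorption (lossless)
```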
Higher-order expansions, up to third relative order, have been proposed, notably [49]
$$\Delta_1^A = \Delta_1^{(0)} \frac{1 - \frac{x^2}{10}(s^2 + 1) + O(x^4)}{1 - \frac{x^2}{10} \frac{s^2-1}{s^2+2}(s^2 + 10) - \Delta_1^{(0)} + O(x^4)}. \quad (54)$$
In this expression, the numerator and denominator have been expanded to third order (relative to lowest order). As shown in Fig. 1, this significantly increases the range of validity of the approximation, up to $a \approx 20–30$ nm. For larger sizes, however, although Eq. (54) predicts the correct red-shift, it fails to predict the correct magnitude of the resonance. This can be attributed to the fact that $\Delta_1^A$ does not strictly satisfy the energy-conservation condition for the $T$ matrix [Eq. (10)], equivalent to Eq. (50) here.
The $K$-matrix formalism here provides a simple method to address this issue and improve upon this approximation. Instead of approximating directly the $T$ matrix ($\Delta_1 = -P_{11}^{22}/Q_{11}^{22}$), we therefore use an approximation of the $K$ matrix, which considering only the electric dipole term is simply $K_1 = P_{11}^{22}/U_{11}^{22}$. Expanding the numerator $P_{11}^{22}$ and denominator $U_{11}^{22}$ as before, we have
$$K_1^A = -i\Delta_1^{(0)} \frac{1 - \frac{x^2}{10}(s^2 + 1) + O(x^4)}{1 - \frac{x^2}{10} \frac{s^2-1}{s^2+2}(s^2 + 10) + O(x^4)}. \quad (55)$$
FIG. 1. (Color online) Predictions of the dipolar localized surface plasmon resonance for a silver nanosphere in water, as evidenced by the wavelength dependence of the far-field properties: extinction ($Q_{\text{ext}}$), scattering ($Q_{\text{sca}}$), and absorption ($Q_{\text{abs}} = Q_{\text{ext}} - Q_{\text{sca}}$). Only the dominant electric dipole response (corresponding to $\Delta_1$) was included in these calculations. We compare the exact result [bold (blue) lines] with approximate results for increasing sphere size. For the lowest sizes (radii of $a = 5$, 10, and 20 nm), we compare with the predictions of the electrostatics approximation (ESA) from Eq. (52) [red (dashed) lines] and those of the radiative correction to the ESA from Eq. (53) [green (solid) lines]. For larger sizes, we compare to the higher-order expansion approximations using $\Delta_1^A$ from Eq. (54) [pink (dashed) lines] and its proposed radiatively corrected version $\Delta_1^{A-RC}$ from Eq. (56) [dark cyan (solid) lines]. Note that these higher-order expansions are accurate for $a \lesssim 20$ nm (their predictions would lie on top of the exact results). In all cases, the vertical scale has been adjusted for best visualization of the quality of the approximation and the zero corresponds to the x axis (except in the two cases where the hatched area indicates the negative region).

It is already apparent that the use of the $K$ matrix provides simpler expansions, as the denominator $U_{11}^{22}$ now has a well-defined parity as opposed to $Q_{11}^{22}$ [this is because $\chi_n(x)$ is odd or even, while $\xi_n(x)$ is not]. All odd-order terms in the denominator therefore disappear (they will in fact reappear as a result of the radiative correction). $K_1^A$ also satisfies the energy-conservation condition [Eq. (16)], at least for sufficiently small $x$. In fact, for $s$ real (nonabsorbing sphere), the condition is strictly satisfied for all $x$ since $K_1^A$ is then real. We apply the central formula of this work [Eq. (21)] to derive the radiative correction to the approximated $K_1^A$ and obtain
|
| 571 |
+
|
| 572 |
+
$$ \left( \frac{\Delta_1^{A-RC}}{\Delta_1^{(0)}} \right)^{-1} = \frac{1 - \frac{x^2}{10} \frac{s^2-1}{s^2+2} (s^2 + 10)}{1 - \frac{x^2}{10} (s^2 + 1)} - 1. \quad (56) $$
|
| 573 |
+
|
| 574 |
+
This expression differs from the earlier expression [Eq. (54)] only by terms of relative order $x^4$ or larger. However, because it was derived from the *K*-matrix formalism, it should be more physically valid, especially in resonant systems where energy conservation is crucial. This is indeed the case, as it correctly predicts the LSP resonance behavior better than the previous approach based on a direct expansion of $\Delta_1^A$, in fact up to $a \approx 50$ nm as shown in Fig. 1.
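The energy-conservation property that motivates this *K*-matrix route can be illustrated on a scalar toy version of the formalism. In a single scalar channel, the relations $\mathbf{T} = -\mathbf{P}\mathbf{Q}^{-1}$, $\mathbf{K} = \mathbf{P}\mathbf{U}^{-1}$, $\mathbf{Q} = \mathbf{P} + i\mathbf{U}$ of Appendix C reduce to $T = -K/(K+i)$, and any *real* approximate $K$ then yields a $T$ that conserves energy exactly. A minimal sketch (the sample $K$ values are arbitrary):

```python
def t_from_k(K):
    """Scalar analog of the K -> T relation implied by T = -P Q^{-1},
    K = P U^{-1}, Q = P + iU: here T = -K / (K + i)."""
    return -K / (K + 1j)

# For any real K (nonabsorbing scatterer), S = 1 + 2T is automatically
# unimodular (energy conservation) and the optical theorem
# Re(T) = -|T|^2 (extinction = scattering) holds exactly.
for K in (-3.0, -0.5, 0.01, 1.7, 42.0):
    T = t_from_k(K)
    assert abs(abs(1 + 2 * T) - 1.0) < 1e-12
    assert abs(T.real + abs(T) ** 2) < 1e-12
```

This is why a radiatively corrected approximation built from a real $K_1^A$ cannot violate $\sigma_{\text{ext}} = \sigma_{\text{sca}}$, whatever the order of the expansion.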
## VI. CONCLUSION

In conclusion, we have shown that the *K* matrix provides a simple formalism for studying general radiative-correction problems in EM scattering. This was demonstrated in this work on two fronts. First, from an abstract point of view, we studied and highlighted the formal properties of the *K* matrix, in particular with regard to energy-conservation constraints. Second, from a practical point of view, we showed that previously published radiative-correction formulas are a straightforward consequence of the *K*-matrix formalism when applying the method described in this work: rather than approximating the *T* matrix directly, it is beneficial to derive it from an approximate *K* matrix using Eqs. (15) and (21). We expect that other systems can now be studied following the same procedure. In addition, we believe that the *K*-matrix formulation will play an important role in the general *T*-matrix approach to EM scattering. For example, as briefly mentioned in the text and in Appendix C, it provides an alternative route to numerical implementations of the *T*-matrix method, which may be better suited in some situations (for example, for nonabsorbing scatterers).

## ACKNOWLEDGMENT

The authors are indebted to the Royal Society of New Zealand for support through a Marsden Grant (W.R.C.S. and E.C.L.R.) and a Rutherford Discovery Fellowship (E.C.L.R.).
## APPENDIX A: VSWF DEFINITIONS

The four types of VSWFs ($j = 1,2,3,4$) are defined using the same convention as Ref. [19] as follows:

$$ \mathbf{M}_{nm}^{(j)} = \gamma_{nm} \nabla \times \left(\mathbf{r}\, \psi_{nm}^{(j)}\right), \qquad \mathbf{N}_{nm}^{(j)} = \frac{1}{k_1} \nabla \times \mathbf{M}_{nm}^{(j)} \tag{A1} $$

(note that $\mathbf{M}_{nm}^{(j)} = \frac{1}{k_1} \nabla \times \mathbf{N}_{nm}^{(j)}$ also holds), where

$$ \gamma_{nm} = \sqrt{\frac{(2n+1)(n-m)!}{4\pi n(n+1)(n+m)!}} \tag{A2} $$

is a normalization constant and

$$ \psi_{nm}^{(j)}(r, \theta, \phi) = z_n^{(j)}(kr) P_n^m[\cos(\theta)] e^{im\phi} \tag{A3} $$

are solutions of the scalar Helmholtz equation (with wave vector $k$) in spherical coordinates. The $z_n^{(j)}$ are spherical Bessel functions, the choice of which defines the type of VSWF (characterized by the superscript $j$):

(i) $z_n^{(1)} = j_n$, i.e., the spherical Bessel function of the first kind, for the regular VSWFs: $j_n(x) \sim x^n/(2n+1)!!$ is regular at the origin.

(ii) $z_n^{(2)} = y_n$, i.e., the spherical Bessel function of the second kind, for the irregular VSWFs: $y_n(x) \sim -(2n-1)!!\,x^{-n-1}$ is irregular at the origin.

(iii) $z_n^{(3)} = h_n^{(1)} = j_n + i y_n$, i.e., the spherical Hankel function of the first kind, for outgoing spherical-wave VSWFs: $h_n^{(1)}(x) \sim (-i)^{n+1} e^{ix}/x$ for large $x$.

(iv) $z_n^{(4)} = h_n^{(2)} = j_n - i y_n$, i.e., the spherical Hankel function of the second kind, for ingoing spherical-wave VSWFs: $h_n^{(2)}(x) \sim i^{n+1} e^{-ix}/x$ for large $x$.
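The limiting behaviors quoted above are easy to check numerically. A short sketch for $n = 1$, using the explicit closed forms (not the recurrences a production code would use):

```python
import cmath
import math

# Explicit closed forms for n = 1
def j1(x):
    return math.sin(x) / x**2 - math.cos(x) / x

def y1(x):
    return -math.cos(x) / x**2 - math.sin(x) / x

def h1_first(x):
    return complex(j1(x), y1(x))        # h_1^(1) = j_1 + i y_1

# Small-x limits: j_n ~ x^n/(2n+1)!! and y_n ~ -(2n-1)!! x^(-n-1)
x = 1e-3
assert abs(j1(x) / (x / 3.0) - 1.0) < 1e-5
assert abs(y1(x) / (-1.0 / x**2) - 1.0) < 1e-5

# Large-x outgoing-wave form: h_n^(1)(x) ~ (-i)^(n+1) e^{ix}/x
x = 1e4
assert abs(h1_first(x) - (-1j)**2 * cmath.exp(1j * x) / x) < 1e-4
```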
The associated Legendre functions $P_n^m[\cos(\theta)]$ are here defined with the Condon-Shortley phase, i.e., as

$$ P_n^m(x) = (-1)^m (1 - x^2)^{m/2} \frac{d^m}{dx^m} P_n(x), \tag{A4} $$

where $P_n(x)$ are the Legendre polynomials.

Note that in terms of normalized scalar spherical harmonics $Y_{nm}(\theta, \phi)$ [3], we have

$$ \gamma_{nm} \psi_{nm}^{(j)}(r, \theta, \phi) = \frac{1}{\sqrt{n(n+1)}} z_n^{(j)}(k_1 r) Y_{nm}(\theta, \phi). \tag{A5} $$

## APPENDIX B: ENERGY-CONSERVATION CONDITION FOR ABSORBING SCATTERERS

As mentioned in the main text, for absorbing particles, the inequality $\sigma_{\text{ext}} \ge \sigma_{\text{sca}}$ requires that $\mathbf{I} - \mathbf{S}^\dagger\mathbf{S}$ be a Hermitian positive-semidefinite (HPSD) matrix [19]. In terms of the $\mathbf{T}$ matrix itself, this results in the somewhat cumbersome condition that the matrix $-[\mathbf{T} + \mathbf{T}^\dagger + 2\mathbf{T}^\dagger\mathbf{T}]$ be HPSD. A much simpler condition is obtained in terms of the $\mathbf{K}$ matrix by noticing that (assuming $\mathbf{K}$ and $\mathbf{T}$ are invertible)

$$
\begin{align*}
& \mathbf{I} - \mathbf{S}^{\dagger}\mathbf{S} && \text{HPSD} \\
\Leftrightarrow\; & \mathbf{K}^{\dagger}[\mathbf{I} - \mathbf{S}^{\dagger}\mathbf{S}]\mathbf{K} && \text{HPSD} \\
\Leftrightarrow\; & -\mathbf{K}^{\dagger}[\mathbf{T}^{\dagger}(\mathbf{I} + \mathbf{T}) + (\mathbf{I} + \mathbf{T}^{\dagger})\mathbf{T}]\mathbf{K} && \text{HPSD} \\
\Leftrightarrow\; & -\mathbf{K}^{\dagger}\mathbf{T}^{\dagger}(-i\mathbf{T}) - (i\mathbf{T}^{\dagger})\mathbf{T}\mathbf{K} && \text{HPSD} \\
\Leftrightarrow\; & \mathbf{T}^{\dagger}[i\mathbf{K}^{\dagger} - i\mathbf{K}]\mathbf{T} && \text{HPSD} \\
\Leftrightarrow\; & i\mathbf{K}^{\dagger} - i\mathbf{K} && \text{HPSD.}
\end{align*}
\tag{B1}
$$

The condition for the $\mathbf{K}$ matrix is therefore that $[i\mathbf{K}^\dagger - i\mathbf{K}]$ be Hermitian positive semidefinite or, equivalently, that $\mathbf{K}$ be a dissipative matrix [41]. Note that the same proof can easily be adapted to prove the special case that $\mathbf{S}$ unitary is equivalent to $\mathbf{K}$ Hermitian.
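The dissipativity condition is simple to test numerically. A minimal sketch for a $2\times2$ $\mathbf{K}$ (the matrix values are arbitrary illustrations; for $2\times2$ Hermitian matrices, PSD is equivalent to nonnegative diagonal and determinant, while a full implementation would test all eigenvalues of $i(\mathbf{K}^\dagger - \mathbf{K})$):

```python
def is_dissipative_2x2(K):
    """Check that H = i(K^dagger - K) is Hermitian PSD for a 2x2 complex
    matrix K given as a list of two rows; with this sign convention a
    scalar absorbing K has Im K >= 0 (consistent with Eq. (B1))."""
    H = [[1j * (K[c][r].conjugate() - K[r][c]) for c in range(2)]
         for r in range(2)]
    d00, d11 = H[0][0].real, H[1][1].real
    det = (H[0][0] * H[1][1] - H[0][1] * H[1][0]).real
    return d00 >= 0 and d11 >= 0 and det >= -1e-12

# A Hermitian K (lossless scatterer) sits on the boundary: i(K^dag - K) = 0.
K_lossless = [[1.0 + 0j, 2.0 - 1j], [2.0 + 1j, -0.5 + 0j]]
assert is_dissipative_2x2(K_lossless)
# Positive imaginary parts on the diagonal (absorption) keep K dissipative.
K_abs = [[1.0 + 0.3j, 2.0 - 1j], [2.0 + 1j, -0.5 + 0.2j]]
assert is_dissipative_2x2(K_abs)
```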
## APPENDIX C: COMPUTING THE K MATRIX WITH THE EXTENDED BOUNDARY CONDITION METHOD (EBCM)

One of the most common approaches to calculating the $\mathbf{T}$ matrix in practice is the extended boundary condition method (EBCM), or null-field method [19,21,22,24]. Within this approach, $\mathbf{T}$ can be conveniently obtained from $\mathbf{T} = -\mathbf{P}\mathbf{Q}^{-1}$ [Eq. (8)], where the matrix elements of $\mathbf{P}$ and $\mathbf{Q}$ can be expressed analytically as surface integrals over the particle surface. Substituting this into Eq. (13) and right-multiplying by $\mathbf{Q}$, we obtain $\mathbf{K}(\mathbf{Q} - \mathbf{P}) = i\mathbf{P}$. This leads us to introduce the matrix $\mathbf{U}$ such that $\mathbf{Q} = \mathbf{P} + i\mathbf{U}$, and we then have

$$ \mathbf{K} = \mathbf{P}\mathbf{U}^{-1}. \tag{C1} $$

In addition, the matrix elements of $\mathbf{Q}$ and $\mathbf{P}$ have identical analytical expressions except for the substitution of $\mathbf{M}_\nu^{(3)}(k_1\mathbf{r})$ for $\mathbf{Q}$ by $\mathbf{M}_\nu^{(1)}(k_1\mathbf{r})$ for $\mathbf{P}$. The matrix elements of $\mathbf{U}$ can therefore simply be obtained using the same expressions but now with $\mathbf{M}_\nu^{(2)}(k_1\mathbf{r})$ [this follows from $\mathbf{Q} = \mathbf{P} + i\mathbf{U}$ and $\mathbf{M}_\nu^{(3)}(k_1\mathbf{r}) = \mathbf{M}_\nu^{(1)}(k_1\mathbf{r}) + i\mathbf{M}_\nu^{(2)}(k_1\mathbf{r})$]. Equivalently, $\mathbf{U}$ can be computed like $\mathbf{Q}$, simply substituting the spherical Hankel functions of the first kind, $h_n^{(1)}(x) = j_n(x) + iy_n(x)$, by the irregular spherical Bessel functions $y_n(x)$. As a result, within the EBCM approach, $\mathbf{K}$ can be calculated as simply as $\mathbf{T}$, if not more simply. The same conclusion could have been obtained by direct comparison of Eqs. (4) and (5) for $\mathbf{T}$ with the equivalent expressions (17) and (18).
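The internal consistency of these relations can be checked on toy matrices: with $\mathbf{Q} = \mathbf{P} + i\mathbf{U}$, the matrix $\mathbf{T} = -\mathbf{P}\mathbf{Q}^{-1}$ must coincide with $i\mathbf{K}(\mathbf{I} - i\mathbf{K})^{-1}$ built from $\mathbf{K} = \mathbf{P}\mathbf{U}^{-1}$ of Eq. (C1). A $2\times2$ sketch (the values of $\mathbf{P}$ and $\mathbf{U}$ are arbitrary invertible matrices, not physical EBCM integrals):

```python
# Minimal 2x2 complex-matrix helpers (illustrative only).
def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

P = [[0.3 + 0.1j, -0.2j], [0.05, 0.4 - 0.25j]]   # arbitrary test matrices
U = [[1.1, 0.2 + 0.3j], [-0.1j, 0.9 + 0.05j]]
Q = [[P[r][c] + 1j * U[r][c] for c in range(2)] for r in range(2)]

T = [[-x for x in row] for row in mul(P, inv(Q))]   # T = -P Q^{-1}
K = mul(P, inv(U))                                  # K = P U^{-1}  [Eq. (C1)]

# Same T must follow from K via T = iK (I - iK)^{-1}
IK = [[(1 if r == c else 0) - 1j * K[r][c] for c in range(2)] for r in range(2)]
T2 = mul([[1j * x for x in row] for row in K], inv(IK))
assert all(abs(T[r][c] - T2[r][c]) < 1e-12 for r in range(2) for c in range(2))
```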
In addition, it is interesting to note that the energy-conservation condition for lossless scatterers is equivalent within the EBCM to

$$ \mathbf{U}^{\dagger}\mathbf{P} = \mathbf{P}^{\dagger}\mathbf{U}. \tag{C2} $$

Defining $\mathbf{Y} = \mathbf{U}^{\dagger}\mathbf{P}$, this is also equivalent to $\mathbf{Y}$ Hermitian. In the general case of absorbing scatterers, the condition that $\mathbf{K}$ be a dissipative matrix is also equivalent to $\mathbf{Y}$ dissipative. $\mathbf{K}$ can then be obtained from the following expression:
$$ \mathbf{K} = \mathbf{P}\mathbf{Y}^{-1}\mathbf{P}^{\dagger}, \tag{C3} $$

which automatically implies that $\mathbf{K}$ is Hermitian (dissipative) if $\mathbf{Y}$ is Hermitian (dissipative). These observations allow one to check (and even enforce) the energy-conservation condition on $\mathbf{Y}$ (for example, as a function of truncation) before carrying out any matrix inversion. We believe that such an approach may also be further developed to improve the numerical stability of the *T*-matrix approach, which is a common issue in EBCM implementations [50–52].
## APPENDIX D: EQUIVALENCE OF OPTICAL RECIPROCITY AND ENERGY-CONSERVATION CONDITIONS ON K

We here restrict ourselves to nonabsorbing scatterers. In order to highlight the central idea without being hampered by technicalities, it is enlightening to first consider the somewhat artificial case involving electric multipoles only, i.e., only the block **T**²² of the *T* matrix, and the case of a scatterer with symmetry of revolution (for which different values of *m* are decoupled). In this case, optical reciprocity is equivalent to **K**²² symmetric, whereas energy conservation is equivalent to **K**²² Hermitian. For nonabsorbing scatterers, the matrix elements of **P**²² and **U**²² are pure imaginary numbers and therefore **K**²² is by construction a real matrix [from Eq. (C1)]. The conditions for optical reciprocity (**K**²² symmetric) and energy conservation (**K**²² Hermitian) then become trivially equivalent. We note that this equivalence is not obvious when considering the *T* matrix as opposed to the **K** matrix.

This argument can in fact be generalized to the full **K** matrix for a scatterer of arbitrary shape. The optical reciprocity condition then takes the form

$$ K_{n,m,n',m'}^{ij} = (-1)^{m+m'} K_{n',-m',n,-m}^{ji}, \tag{D1} $$

which is deduced from an identical relation for **T** [Ref. [19], Eq. (5.34)].

In the framework of the EBCM approach, **K** is computed from $\mathbf{K} = \mathbf{P}\mathbf{U}^{-1}$, where the matrix elements of **P** and **U** are given by surface integrals involving cross products of $\mathbf{M}^{(1)}$, $\mathbf{N}^{(1)}$, $\mathbf{M}^{(2)}$, and $\mathbf{N}^{(2)}$, for example,

$$ J_{n,m,n',m'} = -i(-1)^m \int_S dS\, \mathbf{n} \cdot [\mathbf{M}_{n',m'}^{(1)} \times \mathbf{M}_{n,-m}^{(1)}]. \tag{D2} $$

Moreover, in a nonabsorbing medium (with wave vector $k$ real), we have

$$ \mathbf{M}_{n,-m}^{(1)}(k\mathbf{r}) = (-1)^m \left[\mathbf{M}_{n,m}^{(1)}(k\mathbf{r})\right]^*, \tag{D3} $$

along with identical relations for $\mathbf{N}^{(1)}$, $\mathbf{M}^{(2)}$, and $\mathbf{N}^{(2)}$. For an integral like the one given above, we therefore have

$$ (J_{n,m,n',m'})^* = -(-1)^{m+m'} J_{n,-m,n',-m'}. \tag{D4} $$

By inspection of the integrals for the matrix elements of **P** and **U** (Ref. [19], p. 145), we therefore deduce that

$$ (P_{n,m,n',m'}^{ij})^* = -(-1)^{m+m'} P_{n,-m,n',-m'}^{ij} \tag{D5} $$

and

$$ (U_{n,m,n',m'}^{ij})^* = -(-1)^{m+m'} U_{n,-m,n',-m'}^{ij}. \tag{D6} $$

By carrying out explicitly the block inversion of **U** and the block-matrix multiplication of $\mathbf{P}\mathbf{U}^{-1}$, one may then show that

$$ (K_{n,m,n',m'}^{ij})^* = (-1)^{m+m'} K_{n,-m,n',-m'}^{ij}. \tag{D7} $$

Using this expression (only valid for nonabsorbing scatterers), it is clear that the optical reciprocity condition [Eq. (D1)] is equivalent to

$$ (K_{n,m,n',m'}^{ij})^* = K_{n',m',n,m}^{ji}, \tag{D8} $$

which is exactly the energy-conservation condition $\mathbf{K} = \mathbf{K}^\dagger$.
[1] W. Heitler, *The Quantum Theory of Radiation*, 3rd ed. (Oxford University Press, Oxford, 1954).

[2] W. Heitler, *Math. Proc. Cambridge Philos. Soc.* **37**, 291 (1941).

[3] J. D. Jackson, *Classical Electrodynamics*, 3rd ed. (Wiley, New York, 1998).

[4] P. de Vries, D. V. van Coevorden, and A. Lagendijk, *Rev. Mod. Phys.* **70**, 447 (1998).

[5] A. Wokaun, J. P. Gordon, and P. F. Liao, *Phys. Rev. Lett.* **48**, 957 (1982).

[6] E. C. Le Ru and P. G. Etchegoin, *Principles of Surface-Enhanced Raman Spectroscopy and Related Plasmonic Effects* (Elsevier, Amsterdam, 2009).

[7] C. F. Bohren and D. R. Huffman, *Absorption and Scattering of Light by Small Particles* (Wiley, New York, 1983).

[8] J. S. Toll, *Phys. Rev.* **104**, 1760 (1956).

[9] M. Meier and A. Wokaun, *Opt. Lett.* **8**, 581 (1983).

[10] K. L. Kelly, E. Coronado, L. L. Zhao, and G. C. Schatz, *J. Phys. Chem. B* **107**, 668 (2003).

[11] L. Novotny, B. Hecht, and D. W. Pohl, *J. Appl. Phys.* **81**, 1798 (1997).

[12] W. H. Weber and G. W. Ford, *Phys. Rev. B* **70**, 125429 (2004).

[13] A. Lakhtakia, *Int. J. Mod. Phys. C* **3**, 583 (1992).

[14] B. T. Draine and P. J. Flatau, *J. Opt. Soc. Am. A* **11**, 1491 (1994).

[15] L. Novotny and B. Hecht, *Principles of Nano-Optics* (Cambridge University Press, Cambridge, 2006).

[16] M. A. Yurkin and A. G. Hoekstra, *J. Quant. Spectrosc. Radiat. Transfer* **106**, 558 (2007).

[17] I. Sersic, C. Tuambilangana, T. Kampfrath, and A. F. Koenderink, *Phys. Rev. B* **83**, 245102 (2011).

[18] R. G. Newton, *Scattering Theory of Waves and Particles* (McGraw-Hill, New York, 1966).

[19] M. I. Mishchenko, L. D. Travis, and A. A. Lacis, *Scattering, Absorption and Emission of Light by Small Particles*, 3rd ed. (Cambridge University Press, Cambridge, 2002).

[20] P. C. Waterman, *Proc. IEEE* **53**, 805 (1965).

[21] P. C. Waterman, *Phys. Rev. D* **3**, 825 (1971).

[22] P. W. Barber and C. Yeh, *Appl. Opt.* **14**, 2864 (1975).

[23] P. W. Barber and S. C. Hill, *Light Scattering by Particles: Computational Methods* (World Scientific, Singapore, 1990).

[24] L. Tsang, J. A. Kong, and K.-H. Ding, *Scattering of Electromagnetic Waves* (Wiley, New York, 2000).

[25] A. Doicu, T. Wriedt, and Y. A. Eremin, *Light Scattering by Systems of Particles: Null-Field Method with Discrete Sources: Theory and Programs*, Springer Series in Optical Sciences, Vol. 124 (Springer, Berlin, 2006).

[26] P. Martin, *Multiple Scattering* (Cambridge University Press, Cambridge, 2006).

[27] P. Yang, Q. Feng, G. Hong, G. W. Kattawar, W. J. Wiscombe, M. I. Mishchenko, O. Dubovik, I. Laszlo, and I. N. Sokolik, *J. Aerosol Sci.* **38**, 995 (2007).

[28] R. Boyack and E. C. Le Ru, *Phys. Chem. Chem. Phys.* **11**, 7398 (2009).

[29] B. N. Khlebtsov and N. G. Khlebtsov, *J. Phys. Chem. C* **111**, 11516 (2007).

[30] P. W. Barber, R. K. Chang, and H. Massoudi, *Phys. Rev. Lett.* **50**, 997 (1983).

[31] P. W. Barber, R. K. Chang, and H. Massoudi, *Phys. Rev. B* **27**, 7251 (1983).

[32] D. W. Mackowski and M. I. Mishchenko, *J. Opt. Soc. Am. A* **13**, 2266 (1996).

[33] F. Xu, J. A. Lock, and G. Gouesbet, *Phys. Rev. A* **81**, 043824 (2010).

[34] P. C. Waterman, *J. Acoust. Soc. Am.* **45**, 1417 (1969).

[35] J. R. Taylor, *Scattering Theory: The Quantum Theory of Nonrelativistic Collisions* (Wiley, New York, 1972).

[36] W. Tobocman and M. A. Nagarajan, *Phys. Rev.* **163**, 1011 (1967).

[37] P. V. Landshoff, *J. Math. Phys.* **9**, 2279 (1968).

[38] M. I. Mishchenko, *J. Opt. Soc. Am. A* **8**, 871 (1991).

[39] R. G. Newton, *Am. J. Phys.* **44**, 639 (1976).

[40] P. Chýlek and R. G. Pinnick, *Appl. Opt.* **18**, 1123 (1979).

[41] K. Fan, *Linear Algebra Appl.* **9**, 223 (1974).

[42] R. C. Thompson, *Houston J. Math.* **1**, 137 (1975).

[43] E. J. Beltrami, *J. Math. Anal. Appl.* **19**, 231 (1967).

[44] G. Colas des Francs, *Int. J. Mol. Sci.* **10**, 3931 (2009).

[45] P. A. Belov, S. I. Maslovski, K. R. Simovski, and S. A. Tretyakov, *Tech. Phys. Lett.* **29**, 718 (2003) [*Pis'ma Zh. Tekh. Fiz.* **29**, 36 (2003)].

[46] A. Moroz, *J. Opt. Soc. Am. B* **26**, 517 (2009).

[47] Note that there is a factor $\sqrt{\epsilon_0/\epsilon}$ missing in Table I of Ref. [17] for $\alpha_{EH}$.

[48] W. J. Wiscombe, *Appl. Opt.* **19**, 1505 (1980).

[49] H. Kuwata, H. Tamaru, K. Esumi, and K. Miyano, *Appl. Phys. Lett.* **83**, 4625 (2003).

[50] P. Barber, *IEEE Trans. Microwave Theory Tech.* **25**, 373 (1977).

[51] W. R. C. Somerville, B. Auguié, and E. C. Le Ru, *Opt. Lett.* **36**, 3482 (2011).

[52] W. R. C. Somerville, B. Auguié, and E. C. Le Ru, *J. Quant. Spectrosc. Radiat. Transfer* **113**, 524 (2012).
samples/texts_merged/1230197.md
**Citation/Publisher Attribution:** Nightingale, M. P., & Blöte, H. W. J. (1996). Dynamic Exponent of the Two-Dimensional Ising Model and Monte Carlo Computation of the Subdominant Eigenvalue of the Stochastic Matrix. *Physical Review Letters*, 76(24), 4548-4551. doi: 10.1103/PhysRevLett.76.4548
# Dynamic Exponent of the Two-Dimensional Ising Model and Monte Carlo Computation of the Subdominant Eigenvalue of the Stochastic Matrix

M. P. Nightingale

Department of Physics, University of Rhode Island, Kingston, Rhode Island 02881

H. W. J. Blöte

Department of Applied Physics, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands

(Received 16 January 1996)

We introduce a novel variance-reducing Monte Carlo algorithm for the accurate determination of correlation times. We apply this method to two-dimensional Ising systems with sizes up to 15 × 15, using single-spin-flip dynamics, random site selection, and transition probabilities according to the heat-bath method. From a finite-size scaling analysis of these correlation times, the dynamic critical exponent $z$ is determined as $z = 2.1665(12)$. [S0031-9007(96)00379-1]

PACS numbers: 64.60.Ht, 02.70.Lq, 05.50.+q, 05.70.Jk

The onset of criticality is marked by a divergence of both the correlation length $\xi$ and the correlation time $\tau$. While the former divergence yields singularities in static quantities, the latter manifests itself notably as critical slowing down. To describe dynamic scaling properties, only one exponent is required in addition to the static exponents. This dynamic exponent $z$ links the divergences of the length and time scales: $\tau \sim \xi^z$. In our computation of $z$ we exploit the fact that, for a finite system, $\xi$ is limited by the system size $L$, so that $\tau \sim L^z$ at the incipient critical point.

In this Letter, we focus on the two-dimensional Ising model with Glauber-like dynamics. Values quoted in the literature for $z$ vary widely, from $z = 1.7$ to $z = 2.7$ [1], but recent computations seem to be converging towards the value reported here. Finally, results are beginning to emerge of sufficient precision for sensitive tests of fundamental issues such as universality.

The numerical method introduced in this Letter is related to Monte Carlo methods used to compute eigenvalues of Hamiltonians of discrete or continuous quantum systems [2,3] and transfer matrices of statistical mechanical systems [4]. In particular, the current method is suitable for obtaining more than one eigenvalue by adaptation of the diffusion Monte Carlo algorithm of Ref. [5].

To compute the correlation time of small $L \times L$ lattices we exploit the following properties of the single-spin-flip Markov (or stochastic) matrix $\mathbf{P}$ [6]. It operates in the linear space of all spin configurations and its largest eigenvalue equals unity. The corresponding right eigenvector contains the Boltzmann weights of the spin configurations; the left eigenvector is constant, reflecting probability conservation. The correlation time $\tau_L$ (in units of one flip per spin, i.e., $L^2$ single-spin flips) is determined by the second-largest eigenvalue $\lambda_L$,

$$\tau_L = - \frac{1}{L^2 \ln \lambda_L}. \quad (1)$$

For a system symmetric under spin inversion, the corresponding eigenvector is expected to be antisymmetric.
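These spectral properties can be made concrete on a toy system small enough to diagonalize directly. The sketch below (a 4-spin periodic Ising *chain* instead of the paper's $L \times L$ lattice, with an arbitrary $\beta = 0.4$) builds the full heat-bath single-spin-flip stochastic matrix, checks that the Boltzmann vector is stationary, and extracts $\lambda$ and $\tau$ by power iteration in the spin-inversion-odd sector:

```python
import math
from itertools import product

N, beta = 4, 0.4                              # toy settings (assumptions)
states = list(product((-1, 1), repeat=N))
idx = {s: k for k, s in enumerate(states)}
n = len(states)

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

# P[s'][s]: pick a site uniformly at random, then redraw its spin from the
# heat-bath distribution (new value independent of the old one).
P = [[0.0] * n for _ in range(n)]
for s in states:
    for i in range(N):
        h = s[(i - 1) % N] + s[(i + 1) % N]   # local field of site i
        for v in (+1, -1):
            p = math.exp(beta * h * v) / (2.0 * math.cosh(beta * h))
            s2 = s[:i] + (v,) + s[i + 1:]
            P[idx[s2]][idx[s]] += p / N

# Boltzmann weights form the right eigenvector with eigenvalue 1.
w = [math.exp(-beta * energy(s)) for s in states]
Pw = [sum(P[r][c] * w[c] for c in range(n)) for r in range(n)]
assert all(abs(a - b) < 1e-12 * max(w) for a, b in zip(Pw, w))

# Power iteration on a spin-inversion-odd start vector converges to the
# second-largest eigenvalue lambda, hence the correlation time of Eq. (1).
v = [sum(s) * w[k] for k, s in enumerate(states)]   # odd under s -> -s
for _ in range(500):
    v = [sum(P[r][c] * v[c] for c in range(n)) for r in range(n)]
    nrm = max(abs(x) for x in v)
    v = [x / nrm for x in v]
Pv = [sum(P[r][c] * v[c] for c in range(n)) for r in range(n)]
k = max(range(n), key=lambda r: abs(v[r]))
lam = Pv[k] / v[k]
tau = -1.0 / (N * math.log(lam))    # Eq. (1), with N sites in place of L^2
assert 0.0 < lam < 1.0 and tau > 0.0
```

Since each single-site heat-bath update is a conditional-expectation (projection) operator, their uniform average has real eigenvalues in $[0, 1]$, which is why the odd-sector eigenvalue found here is positive.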
We used two methods to compute $\lambda_L$: exact numerical computation for $L \le 5$ and Monte Carlo for $4 \le L \le 15$. The exact method used the conjugate gradient algorithm [7] and the symmetries of periodic systems. This calculation resembles that of Ref. [8], but here we realize Glauber-like dynamics using heat-bath or Yang [9] transition probabilities and random site selection.

The Monte Carlo method used a stochastic form of the power method, as follows [5]. A spin configuration $s$ with energy $E(s)$ has a probability

$$\frac{\exp[-E(s)/kT]}{Z} \equiv \frac{\psi_B(s)^2}{Z}, \quad (2)$$

where $Z$ is the partition function. The element $P(s'|s)$ of the Markov matrix is the probability of a single-spin-flip transition from $s$ to $s'$. Since $\mathbf{P}$ satisfies detailed balance,

$$\hat{P}(s'|s) = \frac{1}{\psi_B(s')} P(s'|s) \psi_B(s) \quad (3)$$

is symmetric. For an arbitrary trial state $|f\rangle$ an effective eigenvalue $\lambda_L^{(t)}$ is defined by

$$\lambda_L^{(t)} = \frac{\langle \hat{\mathbf{P}}^{t+1} \rangle_f}{\langle \hat{\mathbf{P}}^t \rangle_f}, \quad (4)$$

where $\langle \cdot \rangle_f$ is the expectation value in the state $|f\rangle$. In the limit $t \to \infty$, the effective eigenvalue converges generically to the dominant eigenvalue allowed by the symmetry of $|f\rangle$. The convergence is exponential in the time lag $t$.

Given a trial state $|f\rangle$, the standard Monte Carlo method suffices to compute the right-hand side of Eq. (4). The denominator of Eq. (4),

$$
\begin{align}
N^{(t)} &\equiv \langle f | \hat{\mathbf{P}}^t | f \rangle = \sum_{s_1, \dots, s_{t+1}} f(s_{t+1}) \hat{P}(s_{t+1} | s_t) \cdots \hat{P}(s_2 | s_1) f(s_1) \nonumber \\
&= \sum_{s_1, \dots, s_{t+1}} \frac{f(s_1)f(s_{t+1})}{\psi_B(s_1)\psi_B(s_{t+1})} P(s_{t+1}|s_t) \cdots P(s_2|s_1)\psi_B(s_1)^2 = Z \left\langle \frac{f(s_1)f(s_{t+1})}{\psi_B(s_1)\psi_B(s_{t+1})} \right\rangle_P, \tag{5}
\end{align}
$$

is an autocorrelation; $f(s) = \langle s | f \rangle$ and $\langle \cdot \rangle_P$ denotes the average with respect to the probability

$$
P(s_{t+1}|s_t) \cdots P(s_2|s_1) \psi_B(s_1)^2 / Z \quad (6)
$$

of finding a configuration $s_1$ in equilibrium, followed by transitions to configurations $s_2$ through $s_{t+1}$.

Similarly, the numerator of Eq. (4),

$$
\begin{align}
H^{(t)} &\equiv \langle f | \hat{\mathbf{P}}^{t+1} | f \rangle \nonumber \\
&= \sum_{s_0, \dots, s_{t+1}} f(s_{t+1}) \hat{P}(s_{t+1} | s_t) \cdots \hat{P}(s_1 | s_0) f(s_0) \nonumber \\
&= \frac{1}{2} Z \left\langle [\lambda_L(s_1) + \lambda_L(s_{t+1})] \frac{f(s_1)f(s_{t+1})}{\psi_B(s_1)\psi_B(s_{t+1})} \right\rangle_P, \tag{7}
\end{align}
$$

is a cross correlation, where the "configurational eigenvalue" $\lambda_L(s)$ of spin configuration $s$ is defined as

$$
\lambda_L(s) = \frac{1}{f(s)} \sum_{s'} f(s') \hat{P}(s'|s). \quad (8)
$$

Finally, with Eqs. (5) and (7), one has $\lambda_L^{(t)} = H^{(t)}/N^{(t)}$ for the effective eigenvalue.

In practice, $H^{(t)}$ and $N^{(t)}$ are estimated by conventional Monte Carlo methods. As usual, these estimators involve time averages of stochastic variables. Thus, on the right-hand sides of Eqs. (5) and (7), $s_i$ is replaced by $s_{t'+i-1}$ ($i = 1, \dots, t+1$), and the Monte Carlo average is taken over an appropriately chosen subset of times $t'$ after thermal equilibration.

In principle, one could choose $f = m\psi_B$, where $m$ is the magnetization. In that case, the above method reduces to estimating the effective eigenvalue of the Markov matrix in terms of the magnetization autocorrelation function $g(t)$ via $\lambda_L^{(t)} = g(t+1)/g(t)$. To estimate $g(t)$ one would average over time products of the form $m(s_1)m(s_{t+1})$. Equation (7) would then yield $g(t+1)$ by replacing $m(s_t)$ by the conditional expectation value of the magnetization at time $t+1$, evaluated explicitly as $\sum_{s_{t+1}} m(s_{t+1})P(s_{t+1}|s_t)$.
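This special case $f = m\psi_B$ can be sketched directly: run a heat-bath simulation, estimate $g(t)$ from the magnetization time series, and form the ratio $g(t+1)/g(t)$. The sketch below uses a toy 4-spin periodic chain with arbitrary $\beta$, seed, and run length, a lag measured in single spin flips, and none of the paper's zero-variance machinery:

```python
import math
import random

random.seed(7)
N, beta, sweeps = 4, 0.4, 20000     # toy settings (assumptions)
s = [1] * N
m_hist = []
for _ in range(sweeps * N):         # one "sweep" = N random single-site updates
    i = random.randrange(N)
    h = s[(i - 1) % N] + s[(i + 1) % N]
    p_up = math.exp(beta * h) / (2.0 * math.cosh(beta * h))
    s[i] = 1 if random.random() < p_up else -1
    m_hist.append(sum(s))

def g(t):
    """Magnetization autocorrelation at lag t (in single spin flips);
    <m> = 0 by symmetry here, so no mean subtraction is needed."""
    xs, ys = m_hist[:len(m_hist) - t], m_hist[t:]
    return sum(x * y for x, y in zip(xs, ys)) / len(xs)

lam_est = g(2) / g(1)               # lambda^{(t)} = g(t+1)/g(t) with t = 1
```

The statistical noise of this plain estimator is exactly what the zero-variance construction of the next paragraphs is designed to suppress.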
|
| 110 |
+
|
| 111 |
+
The crux is that the estimator of $\lambda_L^{(t)}$ satisfies a zero-variance principle [5], since Eqs. (5) and (7) contain an optimizable trial state $|f\rangle$. In the ideal case, $|f\rangle$ is an exact eigenstate of the symmetrized Markov matrix $\hat{\mathbf{P}}$, and the “configurational eigenvalue” $\lambda_L(s)$ equals the eigenvalue independent of $s$. Then, the estimator of the effective eigenvalue $\lambda_L^{(t)}$ yields the exact eigenvalue without statistical and systematic errors at finite $t$, if care
|
| 112 |
+
|
| 113 |
+
is taken to arrange cancellation of the fluctuating factors in the estimators of $H^{(t)}$ and $N^{(t)}$. It should be noted that this is true only if the numerator of Eq. (4) is evaluated with Eq. (7), in which the change from $t$ to $t+1$ is made by an explicit matrix multiplication, rather than by using the analog of Eq. (5) with $t$ replaced by $t+1$. In practice, $|f\rangle$ is not an exact eigenstate, and this introduces statistical and systematic errors. However, these errors are kept small by the zero-variance principle, if the trial states are accurate.
|
| 114 |
+
|
| 115 |
+
Such optimized trial states are constructed prior to the main Monte Carlo run, by minimization of the variance $\chi^2$ of the configurational eigenvalue
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
\chi^2(p) = \langle (\hat{\mathbf{P}} - \langle \hat{\mathbf{P}} \rangle_f)^2 \rangle_f . \qquad (9)
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
As indicated, the variance depends on the parameters $p$ of the trial state. Optimization over $p$ is done following Umrigar, Wilson, and Wilkins [10]: one samples $M$ configurations $s_i$, typically a few thousand, with probability $\psi_B^{-2}$ and approximates $\chi^2(p)$ by
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\chi^2(p) \approx \frac{\sum_{i=1}^{M} [f(s_i, p)/\psi_B(s_i)]^2 [\lambda_L(s_i, p) - \bar{\lambda}_L(p)]^2}{\sum_{i=1}^{M} [f(s_i, p)/\psi_B(s_i)]^2}. \quad (10)
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
Here $\bar{\lambda}_L$ denotes the weighted average of the configurational eigenvalue over the sample, while the modified notation explicitly shows dependences on the parameters $p$ of the trial state $|f\rangle$. Near-optimal values of the parameters $p$ can be obtained by minimization of the expression on the right-hand side of Eq. (10) for a fixed sample. Statistical independence in the sample requires that the configurations be selected at intervals on the order of the correlation time.

A guiding principle for the construction of trial states is that long-wavelength fluctuations of the magnetization have the longest decay time. Furthermore, analysis of the exact left eigenvectors of the Markov matrix **P** for systems with $L \le 5$ shows that the elements depend only on the magnetization to good approximation. This suggests trial functions depending on long-wavelength components of the Fourier transform of $s_i$, the zero-momentum component of which is just the magnetization $m$. The form

$$
f(s) = \tilde{\psi}_B(s) \psi^{(+)}(s) \psi^{(-)}(s), \quad (11)
$$
where $\psi^{(\pm)} \rightarrow \pm i\psi^{(\pm)}$ under spin inversion, yields an antisymmetric trial function, as required. The tilde in $\tilde{\psi}_B$ indicates that the temperature is used as a variational parameter, but we found that its optimal value is virtually indistinguishable from the true temperature. The $\psi^{(\pm)}$ were chosen as
$$ \psi^{(+)} = \sum_{\mathbf{k}} a_{\mathbf{k}}(m^2)m_{\mathbf{k}}^{(+)} + m \sum_{\mathbf{k}} b_{\mathbf{k}}(m^2)m_{\mathbf{k}}^{(-)}, \quad (12) $$

$$ \psi^{(-)} = m \sum_{\mathbf{k}} c_{\mathbf{k}}(m^2)m_{\mathbf{k}}^{(+)} + \sum_{\mathbf{k}} d_{\mathbf{k}}(m^2)m_{\mathbf{k}}^{(-)}, \quad (13) $$
where the index **k** runs through a small set of multiplets of four or fewer long-wavelength wave vectors defining the $m_{\mathbf{k}}^{(\pm)}$, translation and rotation symmetric sums of products of Fourier transforms of the local magnetization; the **k** are selected so that $m_{\mathbf{k}}^{(-)}$ is odd and $m_{\mathbf{k}}^{(+)}$ is even under spin inversion; the coefficients $a_{\mathbf{k}}, b_{\mathbf{k}}, c_{\mathbf{k}}$, and $d_{\mathbf{k}}$ are polynomials of second order or less in $m^2$. The degrees of these polynomials were chosen so that no terms occur of higher degree than four in the spin variables. We used trial functions dependent on system size only in the optimal values of the parameters. This yielded a $\chi^2$ and an error in the variational estimate $\lambda_L^{(1)}$ decreasing with $L$; yet, the relative error in $\tau_L$ increases.
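As an illustration of the basic ingredients, the zero-momentum component and a symmetrized combination of the smallest nonzero wave vectors can be computed from a spin configuration as follows. This is only a sketch of the building blocks: the multiplets and fourth-order products actually entering Eqs. (12) and (13) go beyond it.

```python
import numpy as np

def long_wavelength_components(s):
    """Lowest Fourier components of an L x L spin configuration (entries +-1).

    The zero-momentum component is the total magnetization; the four smallest
    nonzero wave vectors (+-2*pi/L along each axis) are combined into a
    translation- and rotation-symmetric sum, as a hypothetical minimal
    analog of the m_k used in the trial functions.
    """
    sk = np.fft.fft2(s)
    m = sk[0, 0].real                                # sum of all spins
    m1 = (sk[0, 1] + sk[1, 0] + sk[0, -1] + sk[-1, 0]) / 4.0
    return m, m1
```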
Since the probability distribution Eq. (6) is precisely the one purportedly generated by the standard Monte Carlo method, the sampling procedure is straightforward. The Monte Carlo algorithm used a random-number generator of the shift-register type, selected with care to avoid the introduction of systematic errors; see the discussion and references in Ref. [11]. We used two Kirkpatrick-Stoll [12] generators, the results of which were combined by a bitwise exclusive or [13]. For test purposes we replaced one Kirkpatrick-Stoll generator by a linear congruential rule, but this did not reveal clear differences [11].
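A generalized shift-register generator of the Kirkpatrick-Stoll type, and the XOR combination of two independent streams, can be sketched as follows. The lags 250 and 103 are the standard R250 choice; the seeding shown here is illustrative, not the paper's.

```python
import random

class ShiftRegisterRNG:
    """Kirkpatrick-Stoll-type generator: x_n = x_{n-103} XOR x_{n-250}."""

    def __init__(self, seed):
        init = random.Random(seed)                 # illustrative seeding only
        self.buf = [init.getrandbits(32) for _ in range(250)]
        self.pos = 0

    def next_word(self):
        # buf[pos] holds x_{n-250}; x_{n-103} sits 250 - 103 = 147 slots ahead
        new = self.buf[self.pos] ^ self.buf[(self.pos + 147) % 250]
        self.buf[self.pos] = new
        self.pos = (self.pos + 1) % 250
        return new

def combined(seed1=1, seed2=2):
    """Two independent shift-register streams combined by bitwise XOR."""
    g1, g2 = ShiftRegisterRNG(seed1), ShiftRegisterRNG(seed2)
    while True:
        yield g1.next_word() ^ g2.next_word()
```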
For each system size $4 \le L \le 15$, Monte Carlo averages were taken over $8 \times 10^8$ spin configurations. For $L=13-15$ these were separated by intervals of 16 sweeps (Monte Carlo steps per spin); 8 sweeps for $L=11$ and $12$; 2 sweeps for $L=5$ and $6$; and only one sweep for $L=4$. The simulations of the remaining system sizes consisted of parts using intervals of 2, 4, or 8 sweeps.
The numerical results for the effective second largest eigenvalue $\lambda_L^{(t)}$ as a function of the projection time $t$ appeared to converge rapidly. In agreement with scaled results for $L \le 5$ spectra, we observe that convergence occurs within a few intervals as given above. Monte Carlo estimates of $\lambda_L$ are shown in Table I, as are exact results for small systems. For system sizes $L=4$ and $5$, the two types of calculation agree satisfactorily. The small numerical errors indicate that the variance-reducing method introduced above is quite effective.
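For reference, a correlation time follows from the second-largest eigenvalue via the standard relation $\tau_L = -1/\ln \lambda_L$; a quick conversion of a few Table I entries (the time unit is whatever unit the Markov matrix is defined in, assumed here to be a single-spin update):

```python
import math

# Monte Carlo eigenvalues from Table I for a few system sizes
lam = {4: 0.9992455685, 8: 0.999960854, 15: 0.9999971314}

# Correlation time associated with the second-largest eigenvalue,
# tau_L = -1 / ln(lambda_L), in units of the elementary update.
tau = {L: -1.0 / math.log(v) for L, v in lam.items()}
```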
TABLE I. Second-largest eigenvalue $\lambda_L$ of the Markov matrix. The first column indicates the method: exact numerical or Monte Carlo.

<table><thead><tr><th>Method</th><th>L</th><th>λ<sub>L</sub></th><th>Error</th></tr></thead><tbody><tr><td>Exact</td><td>2</td><td>0.985702260395516</td><td>0.00000000001</td></tr><tr><td>Exact</td><td>3</td><td>0.997409385126011</td><td>0.00000000001</td></tr><tr><td>Exact</td><td>4</td><td>0.999245567376453</td><td>0.00000000001</td></tr><tr><td>Exact</td><td>5</td><td>0.999708953624452</td><td>0.00000000001</td></tr><tr><td>MC</td><td>4</td><td>0.9992455685</td><td>0.0000000094</td></tr><tr><td>MC</td><td>5</td><td>0.9997089453</td><td>0.0000000060</td></tr><tr><td>MC</td><td>6</td><td>0.9998657194</td><td>0.0000000045</td></tr><tr><td>MC</td><td>7</td><td>0.9999299708</td><td>0.0000000031</td></tr><tr><td>MC</td><td>8</td><td>0.999960854</td><td>0.0000000023</td></tr><tr><td>MC</td><td>9</td><td>0.9999756630</td><td>0.0000000017</td></tr><tr><td>MC</td><td>10</td><td>0.9999843577</td><td>0.0000000014</td></tr><tr><td>MC</td><td>11</td><td>0.9999895056</td><td>0.0000000010</td></tr><tr><td>MC</td><td>12</td><td>0.9999927107</td><td>0.000000008</td></tr><tr><td>MC</td><td>13</td><td>0.999994784</td><td>0.000000006</td></tr><tr><td>MC</td><td>14</td><td>0.9999961736</td><td>0.000000005</td></tr><tr><td>MC</td><td>15</td><td>0.9999971314</td><td>0.000000005</td></tr></tbody></table>

For finite system size $L$ there are corrections to the leading scaling behavior $\tau_L \sim L^z$. In the two-dimensional Ising model corrections to static equilibrium quantities occur with even powers of $1/L$ [14]; thus we expect
$$ \tau_L \approx L^z \sum_{k=0}^{n_c} \alpha_k L^{-2k}, \quad (14) $$

where the series was arbitrarily truncated at order $n_c$, but other powers of $1/L$ might occur as well. Ignoring the latter, we fitted the correlation times of Table I to this form. Typical results of such fits are given in Table II. The smallest systems do not fit Eq. (14) well, at least not for the $n_c$ values used. The residuals decrease rapidly
TABLE II. Results of least-squares fits for the dynamic exponent. The first column shows the minimum system size included, the second the number of correction terms included, and the third column whether (y) or not (n) numerically exact results (for $L \le 5$) are included. The last column contains the chi-square confidence index [16].

<table><thead><tr><th>L ≥</th><th>n<sub>c</sub></th><th>Exact</th><th>z</th><th>Error</th><th>Q</th></tr></thead><tbody><tr><td>4</td><td>1</td><td>n</td><td>2.1769</td><td>0.0001</td><td>0.00</td></tr><tr><td>5</td><td>1</td><td>n</td><td>2.1705</td><td>0.0002</td><td>0.00</td></tr><tr><td>6</td><td>1</td><td>n</td><td>2.1688</td><td>0.0003</td><td>0.23</td></tr><tr><td>7</td><td>1</td><td>n</td><td>2.1679</td><td>0.0006</td><td>0.43</td></tr><tr><td>8</td><td>1</td><td>n</td><td>2.1672</td><td>0.0010</td><td>0.42</td></tr><tr><td>4</td><td>2</td><td>n</td><td>2.1650</td><td>0.0003</td><td>0.17</td></tr><tr><td>5</td><td>2</td><td>n</td><td>2.1665</td><td>0.0006</td><td>0.70</td></tr><tr><td>6</td><td>2</td><td>n</td><td>2.1662</td><td>0.0013</td><td>0.60</td></tr><tr><td>7</td><td>2</td><td>n</td><td>2.1648</td><td>0.0024</td><td>0.52</td></tr><tr><td>4</td><td>3</td><td>n</td><td>2.1672</td><td>0.0009</td><td>0.64</td></tr><tr><td>5</td><td>3</td><td>n</td><td>2.1656</td><td>0.0020</td><td>0.61</td></tr><tr><td>6</td><td>3</td><td>n</td><td>2.1625</td><td>0.0044</td><td>0.56</td></tr><tr><td>3</td><td>3</td><td>y</td><td>2.1653</td><td>0.0047</td><td>0.34</td></tr><tr><td>4</td><td>3</td><td>y</td><td>2.1678</td><td>…</td><td>…</td></tr></tbody></table>
---PAGE_BREAK---

TABLE III. Comparison of recent results for the dynamic exponent z. Numerical errors are in parentheses.

<table><thead><tr><th>Reference</th><th>Year</th><th>Value</th></tr></thead><tbody><tr><td>Present work</td><td>1996</td><td>2.1665 (12)</td></tr><tr><td>Li et al. [15]</td><td>1995</td><td>2.1337 (41)</td></tr><tr><td>Linke et al. [17]</td><td>1995</td><td>2.160 (5)</td></tr><tr><td>Grassberger [18]</td><td>1995</td><td>2.172 (6)</td></tr><tr><td>Wang et al. [19]</td><td>1995</td><td>2.16 (4)</td></tr><tr><td>Baker and Erpenbeck [20]</td><td>1994</td><td>2.17 (1)</td></tr><tr><td>Ito [21]</td><td>1993</td><td>2.165 (10)</td></tr><tr><td>Dammann and Reger [22]</td><td>1993</td><td>2.183 (5)</td></tr><tr><td>Matz et al. [23]</td><td>1993</td><td>2.35 (5)</td></tr><tr><td>Münkel et al. [24]</td><td>1993</td><td>2.21 (3)</td></tr><tr><td>Stauffer [25]</td><td>1993</td><td>2.06 (2)</td></tr></tbody></table>
when the minimum system size is increased and the consistency between the results for different $n_c$ suggests that Eq. (14) captures the essential scaling behavior of $\tau_L$. From these results we chose the entry for $L \ge 5$ and $n_c = 2$ as our best estimate: $z = 2.1665 \pm 0.0012$, where we conservatively quote a $2\sigma$ error. To our knowledge, this is the most precise estimate of $z$ obtained to date, as evidenced by recent results summarized in Table III. The table shows that the mutual consistency of the results for $z$ has tended to improve in recent years. The only recent result that appears inconsistent with ours is due to Li, Schulke, and Zheng [15]. Its error is copied from Table I of Li et al. The data in that table display finite-size dependences that exceed the quoted errors, which may explain the discrepancy with our result.
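The scaling analysis can be reproduced in outline from the Table I data alone; the sketch below is a simple leading-order estimate under the assumption that the eigenvalues are defined per single-spin update, so that $\tau_L = -1/(L^2 \ln \lambda_L)$ in units of sweeps. It ignores the $1/L^2$ correction terms and the statistical weighting used for Table II, so it only approximates the refined value $z = 2.1665$.

```python
import numpy as np

# Monte Carlo eigenvalues from Table I for the largest systems
lam = {10: 0.9999843577, 11: 0.9999895056, 12: 0.9999927107,
       13: 0.999994784, 14: 0.9999961736, 15: 0.9999971314}
L = np.array(sorted(lam), dtype=float)
# correlation time in sweeps, assuming lambda is per single-spin update
tau = -1.0 / (L**2 * np.log(np.array([lam[k] for k in sorted(lam)])))
# leading-order estimate: slope of log(tau) versus log(L)
z_eff, _ = np.polyfit(np.log(L), np.log(tau), 1)
```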
This research was supported by the (U.S.) National Science Foundation through Grant No. DMR-9214669, by the Office of Naval Research and by the NATO through Grant No. CRG 910152. This research was conducted in part using the resources of the Cornell Theory Center, which receives major funding from the National Science Foundation (NSF) and New York State, with additional support from the Advanced Research Projects Agency (ARPA), the National Center for Research Resources at the National Institutes of Health (NIH), IBM Corporation, and other members of the center's Corporate Research Institute.
[1] See, e.g., references in G.F. Mazenko and O.T. Valls, Phys. Rev. B **24**, 1419 (1981), and in M.-D. Lacasse, J. Viñals, and M. Grant, Phys. Rev. B **47**, 5646 (1993).

[2] S. Zhang, N. Kawashima, J. Carlson, and J.E. Gubernatis, Phys. Rev. Lett. **74**, 1500 (1995), and references therein.

[3] M. Takahashi, Phys. Rev. Lett. **62**, 2313 (1989).

[4] M.P. Nightingale, E. Granato, and J.M. Kosterlitz, Phys. Rev. B **52**, 7402 (1995); M.P. Nightingale and H.W.J. Blöte, Phys. Rev. B (to be published), and references therein.

[5] D.M. Ceperley and B. Bernu, J. Chem. Phys. **89**, 6316 (1988); B. Bernu, D.M. Ceperley, and W.A. Lester, J. Chem. Phys. **93**, 552 (1990).

[6] See, e.g., W. Feller, *An Introduction to Probability Theory and its Applications* (John Wiley and Sons, New York, 1968), Vol. 1; H. Haken, *Synergetics: an Introduction: Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology* (Springer-Verlag, Berlin, 1978).

[7] M.P. Nightingale, V.S. Viswanath, and G. Müller, Phys. Rev. B **48**, 7696 (1993).

[8] M.P. Nightingale and H.W.J. Blöte, Physica (Amsterdam) **104A**, 352 (1980).

[9] C.P. Yang, Proc. Symp. Appl. Math. **15**, 351 (1963).

[10] C.J. Umrigar, K.G. Wilson, and J.W. Wilkins, Phys. Rev. Lett. **60**, 1719 (1988); in *Computer Simulation Studies in Condensed Matter Physics: Recent Developments*, edited by D.P. Landau, K.K. Mon, and H.B. Schüttler, Springer Proceedings in Physics Vol. 33 (Springer, Berlin, 1988); C.J. Umrigar, Int. J. Quant. Chem. Symp. **23**, 217 (1989).

[11] H.W.J. Blöte, E. Luijten, and J.R. Heringa, J. Phys. A **28**, 6289 (1995).

[12] S. Kirkpatrick and E.P. Stoll, J. Comput. Phys. **40**, 517 (1981).

[13] J.R. Heringa, H.W.J. Blöte, and A. Compagner, Int. J. Mod. Phys. C **3**, 561 (1992).

[14] See, e.g., H.W.J. Blöte and M.P.M. den Nijs, Phys. Rev. B **37**, 1766 (1988), and references therein.

[15] Z.B. Li, L. Schulke, and B. Zheng, Phys. Rev. Lett. **74**, 3396 (1995).

[16] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, *Numerical Recipes* (Cambridge University Press, Cambridge, 1986 and 1992), Sec. 6.2.

[17] A. Linke, D.W. Heermann, P. Altevogt, and M. Siegert, Physica (Amsterdam) **222A**, 205 (1995).

[18] P. Grassberger, Physica (Amsterdam) **214A**, 547 (1995).

[19] F. Wang, N. Hatano, and M. Suzuki, J. Phys. A **28**, 4543 (1995).

[20] G.A. Baker, Jr. and J.J. Erpenbeck, in *Computer Simulation Studies in Condensed Matter Physics*, edited by D.P. Landau, K.K. Mon, and H.B. Schüttler, Springer Proceedings in Physics Vol. 78 (Springer-Verlag, Berlin, 1994), Vol. VII.

[21] N. Ito, Physica (Amsterdam) **196A**, 591 (1993).

[22] B. Dammann and J.D. Reger, Europhys. Lett. **21**, 157 (1993).

[23] R. Matz, D.L. Hunter, and N. Jan, J. Stat. Phys. **74**, 903 (1993).

[24] C. Münkel, D.W. Heermann, J. Adler, M. Gofman, and D. Stauffer, Physica (Amsterdam) **193A**, 540 (1993).

[25] D. Stauffer, J. Phys. A **26**, L599 (1993).
samples/texts_merged/1323410.md
ADDED
|
@@ -0,0 +1,232 @@
---PAGE_BREAK---
Ridge Regression for the Functional Concurrent Model

Tito Manrique*¹,², Nadine Hilgert¹, Christophe Crambes², André Mas²

¹ UMR 729 MISTEA, INRA, Montpellier, France

² UMR 5149 I3M, Montpellier University, Montpellier, France

*Corresponding author: Tito Manrique

E-mails: tito.manrique@supagro.inra.fr; Nadine.Hilgert@supagro.inra.fr; christophe.crambes@univ-montp2.fr; andre.mas@univ-montp2.fr
Abstract

The aim of the paper is to propose an estimator of the unknown function in the functional concurrent model. This is a general model to which all functional linear models can be reduced. We follow a strictly functional approach and extend the ridge regression method developed in the classical linear case to the functional data framework. We establish asymptotic statistical properties of the proposed estimator and present some simulations which show its high accuracy in fitting the unknown function, despite a low signal-to-noise ratio.

**Keywords:** ridge regression; functional data; concurrent model; varying coefficient model.
# 1 Introduction

Functional Data Analysis (FDA) proposes very good tools to handle data that are functions of some covariate (e.g. time, when dealing with longitudinal data). These tools may allow a better modelling of complex relationships than classical multivariate data analysis would do, as noticed by Ramsay and Silverman [2005, Ch. 1], Yao et al. [2005a,b], among others.
There are several models in FDA to study the relationship between two variables. In particular in this paper we are interested in the Functional Concurrent Model (FCM) because, as stated by Ramsay and Silverman [2005, p. 220], all functional linear models can be reduced to this form. This model can be defined as follows

$$
Y(t) = \beta(t) X(t) + \epsilon(t), \tag{1}
$$
where $t \in \mathbb{R}$, $\beta$ is an unknown function to be estimated, $X, Y$ are random functions and $\epsilon$ is a noise random function.

Some related models have already been discussed by several authors. For instance, West et al. [1985] defined a similar model over time, called the 'dynamic generalized linear model', written as

$$
\eta_t = \beta_0(t) + X_1(t) \beta_1(t) + \dots + X_p(t) \beta_p(t).
$$
Hastie and Tibshirani [1993] proposed a generalization of the FCM called the 'varying coefficient model'. Many people have since studied this model, trying to estimate the unknown smooth regression functions $\beta_i$, for instance by local maximum likelihood estimation (Dreesman and Tutz [2001]; Cai et al. [2000a,b]), by kernel smoothing (Wu et al. [1998]), or by local polynomial smoothing (Zhang et al. [2000]; Fan et al. [2003]; Zhang et al. [2002]).

As far as we know, despite the abundant literature related to the FCM, there is no paper providing a strictly functional approach (i.e. with random functions defined inside normed functional spaces), as noticed by Ramsay and Silverman [2005, p. 259], who said that all these methods come more from a multivariate data analysis approach than from a functional one. This may cause a loss of information because these approaches, as noticed by Müller and Sentürk [2010, p. 1257], "do not take full advantage of the functional nature of the underlying data".
The goal of this paper is to extend the ridge regression method developed in the classical linear case to the functional data framework. We establish asymptotic statistical properties of the proposed estimator and present some simulation trials which show its high accuracy in fitting the unknown function, despite a low signal-to-noise ratio.

---PAGE_BREAK---
## 2 General Hypotheses and Estimator

The space of the real valued continuous functions vanishing at infinity is denoted $C_0(\mathbb{R})$. In this space we use the supremum norm, that is $\|f\|_{C_0} := \sup_{x \in \mathbb{R}} |f(x)|$ for some $f \in C_0(\mathbb{R})$. In the same way, for a compact $K \subset \mathbb{R}$, $C^0(K)$ is the space of real valued continuous functions defined on $K$, with the supremum norm $\|f\|_{C^0(K)} := \sup_{x \in K} |f(x)|$. Here are the general hypotheses made on the FCM (1) throughout this paper.

### General Hypotheses of FCM

(H1$_{FCM}$) $X, \epsilon$ are independent $C_0(\mathbb{R})$ valued random functions, $\mathbb{E}(\epsilon) = \mathbb{E}(X) = 0$, $\mathbb{E}[\|\epsilon\|_{C_0}] < +\infty$ and $\mathbb{E}[\|X\|_{C_0}] < +\infty$.

(H2$_{FCM}$) $\beta \in C_0(\mathbb{R})$.

(H3$_{FCM}$) $\mathbb{E}[\|X\|_{C_0}^2] < +\infty$.
### The Estimator

The definition of the estimator of $\beta$ is inspired by the estimator introduced by Hoerl [1962], used in the ridge regularization method to deal with ill-posed problems in classical linear regression.
Let $(X_i, Y_i)_{i=1,\dots,n}$ be an i.i.d. sample of FCM (1) and $\lambda_n > 0$. We define the estimator of $\beta$ as follows

$$ \hat{\beta}_n = \frac{\frac{1}{n} \sum_{i=1}^{n} Y_i X_i}{\frac{1}{n} \sum_{i=1}^{n} |X_i|^2 + \frac{\lambda_n}{n}}. \quad (2) $$
In the classical linear regression case, Hoerl and Kennard [1970, p. 62] proved that there is always a regularization parameter for which the ridge estimator is better than the Ordinary Linear Squares (OLS) estimator. Huh and Olkin [1995] made a study of some asymptotic properties of the ridge estimator in this context.
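On a common discretization grid, the estimator (2) is a pointwise ratio of empirical means and can be sketched as follows (the array layout, curves as rows, is an assumption of this sketch):

```python
import numpy as np

def beta_hat(X, Y, lam):
    """Pointwise ridge estimator of Eq. (2) on a common time grid.

    X, Y: (n, T) arrays whose rows are the n observed curves on T grid
    points; lam is the regularization parameter lambda_n > 0.
    """
    n = X.shape[0]
    num = np.mean(Y * X, axis=0)               # (1/n) sum_i Y_i(t) X_i(t)
    den = np.mean(X ** 2, axis=0) + lam / n    # (1/n) sum_i |X_i(t)|^2 + lam/n
    return num / den
```

With synthetic data generated from model (1), the estimator recovers $\beta$ up to the regularization bias and the noise term of the decomposition (4).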

## 3 Asymptotic Properties

**Theorem 3.1.** Let $(X_i, Y_i)_{i=1,\dots,n}$ be an i.i.d. sample of FCM (1). Then under the following hypotheses

(A1) The sequence of positive numbers $(\lambda_n)_{n \ge 1} \subset \mathbb{R}^+$ is such that $\frac{\lambda_n}{n} \to 0$ and $\frac{\sqrt{n}}{\lambda_n} \to 0$,
(A2) $0 < \mathbb{E}[|X(t)|^2] < +\infty$, for all $t \in \mathbb{R}$,

(A3) There exists a sequence of positive numbers $(D_k)_{k \ge 1} \subset \mathbb{R}^+$ such that

$$ \lim_{k \to \infty} \left[ \frac{\lambda_k}{k} \cdot \frac{1}{\inf_{t \in [-D_k, D_k]} \mathbb{E}[|X(t)|^2]} \right] = 0, $$

and for every $t \in \mathbb{R}$, if $|t| > D_k$ then $|\beta(t)| \le \frac{1}{k}$,

we obtain
$$ \|\hat{\beta}_n - \beta\|_{C_0} \xrightarrow{P} 0. \quad (3) $$

*Proof (sketch).* Given that $Y_i = \beta X_i + \epsilon_i$, for each $i = 1, \dots, n$ we can decompose $\hat{\beta}_n$ as follows

$$ \hat{\beta}_n = \beta - \frac{\lambda_n}{n} \left( \frac{\beta}{\frac{1}{n} \sum_{i=1}^{n} |X_i|^2 + \frac{\lambda_n}{n}} \right) + \left( \frac{\frac{1}{n} \sum_{j=1}^{n} \epsilon_j X_j}{\frac{1}{n} \sum_{i=1}^{n} |X_i|^2 + \frac{\lambda_n}{n}} \right). \quad (4) $$

Then the hypothesis (A2) and the Strong Law of Large Numbers (SLLN) in the separable Banach space $C_0(\mathbb{R})$ are used to show that
$$ \left\| \frac{\frac{1}{n} \sum_{j=1}^{n} \epsilon_j X_j}{\frac{1}{n} \sum_{i=1}^{n} |X_i|^2 + \frac{\lambda_n}{n}} \right\|_{C_0} \xrightarrow{P} 0. $$

---PAGE_BREAK---

Finally (A3) and SLLN are used to prove that

$$
\left\| \frac{\lambda_n}{n} \left( \frac{\beta}{\frac{1}{n} \sum_{i=1}^{n} |X_i|^2 + \frac{\lambda_n}{n}} \right) \right\|_{C_0} \xrightarrow{\text{a.s.}} 0,
$$
which implies (3) by the triangle inequality in (4). □

### 3.1 Comments about the Hypotheses

**Hypothesis (A1):** This hypothesis specifies how fast $\lambda_n$ must grow to infinity: more slowly than $n$ but faster than $\sqrt{n}$.
**Hypothesis (A2):** We use (A2) because, if for some $t \in \mathbb{R}$ we have $\mathbb{E}[|X(t)|^2] = 0$, then almost surely $X(t) = 0$ and thus $\hat{\beta}_n(t) = 0$ also. Therefore when $\beta(t) \neq 0$ and $\mathbb{E}[|X(t)|^2] = 0$, $\hat{\beta}_n$ cannot estimate $\beta$ at the point $t$.

**Hypothesis (A3):** Finally (A3) says that $\beta$ must decrease faster than $\mathbb{E}[|X(t)|^2]$. In this sense this hypothesis may be interpreted as an assumption about the decreasing rate of the function $\beta$ with respect to that of $X$, as we can see in the following proposition, where $K_1$ can be understood as the decreasing rate of $\beta$ and $K_2$ that of $\mathbb{E}[|X|^2]$.
**Proposition 3.2.** If $\beta(t) = e^{-K_1 |t|}$ and $\mathbb{E}[|X(t)|^2] = e^{-K_2 |t|}$ in such a way that $K_1 > 2K_2 > 0$, then the hypothesis (A3) is satisfied when we take $\lambda_n = \sqrt{n} \log n$, which satisfies (A1) in Theorem 3.1.
*Proof.* (sketch) We define $D_k := \frac{\log k}{K_1} > 0$ for each $k \ge 1$ and use the fact that $\beta$ and $E[|X|^2]$ are strictly decreasing functions. $\square$
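The decay of the expression in (A3) under the assumptions of Proposition 3.2 can also be checked numerically; a sketch with illustrative constants $K_1 = 3$, $K_2 = 1$ (any pair with $K_1 > 2K_2$ behaves the same way):

```python
import math

K1, K2 = 3.0, 1.0  # illustrative constants with K1 > 2*K2

def a3_term(k):
    """(lambda_k / k) / inf_{|t| <= D_k} E[|X(t)|^2] for lambda_k = sqrt(k) log k.

    With D_k = log(k) / K1, the infimum of exp(-K2 |t|) on [-D_k, D_k]
    is k**(-K2/K1), so the term equals log(k) * k**(K2/K1 - 1/2),
    which tends to 0 precisely because K1 > 2*K2.
    """
    lam_k = math.sqrt(k) * math.log(k)
    return (lam_k / k) * k ** (K2 / K1)
```

Note also that $|\beta(t)| = e^{-K_1 |t|} \le 1/k$ holds exactly for $|t| \ge D_k = \log(k)/K_1$, which is the second requirement of (A3).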

It is possible to get a similar proposition for polynomial decreasing rates.

**Proposition 3.3.** If $\beta(t) = \frac{1}{|t|^r}$ and $\mathbb{E}[|X(t)|^2] = \frac{1}{|t|^s}$ in such a way that $r > 2s > 0$ with $r, s \in \mathbb{N}$, then the hypothesis (A3) is satisfied when we take $\lambda_n = \sqrt{n} \log n$, which satisfies (A1) in Theorem 3.1.
### 3.2 Further Results

We can prove the next corollaries by using similar ideas.

**Corollary 3.4.** Let $(X_i, Y_i)_{i=1,\dots,n}$ be an i.i.d. sample of FCM (1), then under (A1) and the following hypotheses

(A2bis) $\inf_{t \in \text{supp}(\beta)} \mathbb{E}[|X(t)|^2] > 0$,

(A3bis) $\overline{\text{supp}}(\beta)$ is bounded, where $\overline{\text{supp}}(\beta)$ is the closure of the support of $\beta$, $\text{supp}(\beta)$,
we obtain

$$
\|\hat{\beta}_n - \beta\|_{C_0} \xrightarrow{P} 0. \tag{5}
$$
In the following corollary we establish a similar result as that of Theorem 3.1 in the space $C^0(K)$.

**Corollary 3.5.** Let $(X_i, Y_i)_{i=1,\dots,n}$ be an i.i.d. sample of FCM (1), then under (A1), (A2bis) and the following hypothesis

(A3ter) There exists a compact $K \subset \mathbb{R}$ such that $\text{supp}(\beta) \subset K$, and almost surely $\text{supp}(X)$, $\text{supp}(\epsilon) \subset K$,

we obtain

$$
\|\hat{\beta}_n - \beta\|_{C^0(K)} \xrightarrow{P} 0. \tag{6}
$$

---PAGE_BREAK---
# 4 Simulations

The accuracy of the estimator $\hat{\beta}_n$ is illustrated for two choices of the function $\beta$. In all experiments, $X$ is a Brownian Motion (BM) on the interval $[0, 1]$ and $\epsilon$ is a BM on $[0, 1]$ too, which is independent of $X$. To simulate the BM we used the Karhunen-Loève decomposition with the first 100 eigenfunctions. All the functions are observed on 100 evenly spaced points in $[0, 1]$.
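The truncated Karhunen-Loève expansion of Brownian motion used here can be sketched as follows:

```python
import numpy as np

def brownian_kl(t, n_terms=100, rng=None):
    """Brownian motion on [0, 1] via its Karhunen-Loeve expansion,
    truncated at n_terms eigenfunctions:

        W(t) = sum_k Z_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi),

    with Z_k independent standard normals.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, n_terms + 1)
    Z = rng.standard_normal(n_terms)
    freq = (k - 0.5) * np.pi
    return (np.sqrt(2) * np.sin(np.outer(t, freq))) @ (Z / freq)
```

With 100 terms the truncation error in the variance is about one percent, so $\mathrm{Var}\,W(1) \approx 1$ as it should be.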
We calculate the signal-to-noise ratio (SNR) as $\text{SNR} = \frac{\text{Var}[\|Y\|_{C_0}]}{\text{Var}[\|\epsilon\|_{C_0}]}$.

We set two sample sizes, $n = 100$ and $n = 200$, and fix three values for $\lambda$. For each parameter set $(n, \lambda)$, 100 trials were run to estimate the mean and the standard deviation of the relative estimation error $\frac{\|\hat{\beta}_n - \beta\|_{C_0}}{\|\beta\|_{C_0}}$.
**Simulation 1** $\beta$ is defined as follows $\beta(t) = \sqrt{2}\sin((8 - 1/2)\pi t)$. The accuracy of the estimation of $\beta$ shall be appreciated on the example given in the Figure 1,

Figure 1: $\beta$ and its estimator $\hat{\beta}$ with an SNR $\approx 2.937$ (around 33% noise).

as well as in the following tables:

$$ n = 100 $$

<table><thead><tr><td>λ</td><td>mean</td><td>sd</td></tr></thead><tbody><tr><td>10<sup>-1</sup></td><td>0.1386390</td><td>0.04017871</td></tr><tr><td>10<sup>-2</sup></td><td>0.1391908</td><td>0.03942906</td></tr><tr><td>10<sup>-3</sup></td><td>0.1394522</td><td>0.03932101</td></tr></tbody></table>

$$ n = 200 $$

<table><thead><tr><td>λ</td><td>mean</td><td>sd</td></tr></thead><tbody><tr><td>10<sup>-1</sup></td><td>0.1003225</td><td>0.02600884</td></tr><tr><td>10<sup>-2</sup></td><td>0.1000697</td><td>0.02628544</td></tr><tr><td>10<sup>-3</sup></td><td>0.1000806</td><td>0.02634147</td></tr></tbody></table>

**Simulation 2** $\beta$ is defined by $\beta(t) = 4(t^6 - 1/2t^7 + 2t^3 + 2t^2 - 4t + 1) \sin(22\pi t)$. One example of the estimation is given in Figure 2, and the relative estimation error is illustrated in the following tables:

$$ n = 100 $$

<table><thead><tr><td>λ</td><td>mean</td><td>sd</td></tr></thead><tbody><tr><td>10<sup>-1</sup></td><td>0.05175795</td><td>0.01389744</td></tr><tr><td>10<sup>-2</sup></td><td>0.03762299</td><td>0.01079769</td></tr><tr><td>10<sup>-3</sup></td><td>0.03760215</td><td>0.01061630</td></tr></tbody></table>

$$ n = 200 $$

<table><thead><tr><td>λ</td><td>mean</td><td>sd</td></tr></thead><tbody><tr><td>10<sup>-1</sup></td><td>0.03233332</td><td>0.009219947</td></tr><tr><td>10<sup>-2</sup></td><td>0.02701885</td><td>0.007106225</td></tr><tr><td>10<sup>-3</sup></td><td>0.02698448</td><td>0.007102699</td></tr></tbody></table>

---PAGE_BREAK---

Figure 2: $\beta$ and its estimator $\hat{\beta}$ with an SNR $\approx 32.743$ (around 3% noise).

# 5 Conclusions

We established the asymptotic convergence of the functional ridge estimator. The simulations showed good accuracy of the estimator even for a low signal-to-noise ratio (around 3 in Simulation 1). For further research, work on the rate of convergence should be considered, as well as a discussion of the choice of the regularization parameter $\lambda_n$.

# 6 Acknowledgments

The authors would like to thank the Labex NUMEV (convention ANR-10-LABX-20) for partly funding the PhD thesis of Tito Manrique (under project 2013-1-007).
samples/texts_merged/1623821.md
ADDED
---PAGE_BREAK---

# PRELIMINARY EXAM IN ALGEBRA

## FALL, 2016

All problems are worth the same amount. You should give full proofs or explanations of your solutions. Remember to state or cite theorems that you use in your solutions.

Important: Please use a different sheet for the solution to each problem.

1. Let $F_n$ be the free group on $n$ generators with $n \ge 2$. Prove that the center $Z(F_n)$ of $F_n$ is trivial.

2. Let $G$ be a finite group that acts transitively on a set $X$ of cardinality $\ge 2$. Show that there exists an element $g \in G$ which acts on $X$ without any fixed points. Is the same true if $G$ is infinite?

3. Show that every linear map $A : \mathbb{R}^3 \rightarrow \mathbb{R}^3$ has both a 1-dimensional invariant subspace and a 2-dimensional invariant subspace.

4. Let $I, J \subseteq R$ be ideals in a principal ideal domain $R$. Prove that $I+J = R$ if and only if $IJ = I \cap J$.

5. Let $F$ be a finite field and let $L$ be the subfield of $F$ generated by elements of the form $x^3$ for all $x \in F$. Prove that if $L \neq F$, then $F$ has exactly 4 elements.

6. Show that the $\mathbb{R}$-modules $L = \mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}$ and $M = \mathbb{C} \otimes_{\mathbb{C}} \mathbb{C}$ are not isomorphic.
samples/texts_merged/1834803.md
ADDED
---PAGE_BREAK---

Math 431, Assignment #10: Solutions

*(due 5/3/01)*

1. Chapter 6, problem 34: Let X and Y denote, respectively, the number of males and females in the sample that never eat breakfast. Since

$$E(X) = 200 \times .252 = 50.4,$$

$$\operatorname{Var}(X) = 200 \times .252 \times (1 - .252) = 37.6992,$$

$$E(Y) = 200 \times .236 = 47.2,$$

$$\operatorname{Var}(Y) = 200 \times .236 \times (1 - .236) = 36.0608,$$

it follows from the normal approximation to the binomial that X is approximately distributed as a normal random variable with mean 50.4 and variance 37.6992, and that Y is approximately distributed as a normal random variable with mean 47.2 and variance 36.0608. By Proposition 3.2, $X + Y$ is approximately distributed as a normal random variable with mean 97.6 and variance 73.7600, and $Y - X$ is approximately distributed as a normal random variable with mean $-3.2$ and variance 73.7600. Let Z be a standard normal random variable.

(a)

$$
\begin{align*}
P(X + Y \geq 110) &= P(X + Y \geq 109.5) \\
&= P\left(\frac{X + Y - 97.6}{\sqrt{73.76}} \geq \frac{109.5 - 97.6}{\sqrt{73.76}}\right) \\
&= P(Z > 1.3856) \approx .083.
\end{align*}
$$

(b)

$$
\begin{align*}
P(Y \ge X) &= P(Y - X \ge -.5) \\
&= P\left(\frac{Y - X - (-3.2)}{\sqrt{73.76}} \ge \frac{-.5 - (-3.2)}{\sqrt{73.76}}\right) \\
&= P(Z > .3144) \approx .377.
\end{align*}
$$

---PAGE_BREAK---

2. Chapter 6, problem 42:

(a)

$$f_{X|Y}(x|y) = \frac{xe^{-x(y+1)}}{\int xe^{-x(y+1)}dx} = (y+1)^2 xe^{-x(y+1)} \text{ for } x > 0;$$

$$f_{Y|X}(y|x) = \frac{xe^{-x(y+1)}}{\int xe^{-x(y+1)}dy} = xe^{-xy} \text{ for } y > 0.$$

(b)

$$\begin{align*}
P(XY < a) &= \int_0^\infty \int_0^{a/x} xe^{-x(y+1)} dy \, dx \\
&= \int_0^\infty (1 - e^{-a})e^{-x} dx \\
&= 1 - e^{-a},
\end{align*}$$

so $f_{XY}(a) = e^{-a}$ for $a > 0$. That is, $XY$ is an exponential r.v. of rate 1.

3. Chapter 6, problem 48: Let $X_1, X_2, X_3, X_4, X_5$ be the 5 numbers chosen. With probability 1, they are all distinct. There are 5 equally likely possibilities for which of them is the largest, then 4 remaining equally likely possibilities for which of them is the next largest, etc., for a total of $5 \times 4 \times 3 \times 2 \times 1 = 5! = 120$ different situations, each of which has the same probability. In each of the 120 situations, the probability of having the median lie between 1/4 and 3/4 is the same as for each of the others. For simplicity, let's focus on the case in which $X_1 < X_2 < X_3 < X_4 < X_5$. The event $\{X_1 < X_2 < X_3 < X_4 < X_5 \text{ and } 1/4 < X_3 < 3/4\}$ can be broken down into events of the form $\{X_1 < X_2 < X_3 < X_4 < X_5 \text{ and } x < X_3 < x + dx\}$ where $x$ lies between 1/4 and 3/4, so its probability can be written as the integral

$$\int_{1/4}^{3/4} P(X_1 < X_2 < x < X_4 < X_5 \text{ and } x < X_3 < x + dx).$$

Since $X_1, \dots, X_5$ are independent,

$$P(X_1 < X_2 < x < X_4 < X_5 \text{ and } x < X_3 < x + dx)$$

---PAGE_BREAK---

splits up as the product

$$P(X_1 < X_2 < x)P(x < X_4 < X_5)P(x < X_3 < x + dx).$$

$$P(X_1 < X_2 < x) = \int_0^x \int_0^t 1 \, ds \, dt = \int_0^x t \, dt = \frac{x^2}{2}.$$ Likewise, $P(x < X_4 < X_5) = \frac{(1-x)^2}{2}$. Also, $P(x < X_3 < x+dx) = dx$. So the integral is $\int_{1/4}^{3/4} \frac{x^2(1-x)^2}{4} dx$, and the desired probability is $120 \int_{1/4}^{3/4} \frac{x^2(1-x)^2}{4} dx$. (Section 6.6 contains a formula that gives the equivalent answer $\frac{5!}{2!\,2!} \int_{1/4}^{3/4} x^2(1-x)^2 \, dx$.) This evaluates to approximately .79297.
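The closing arithmetic can be confirmed with exact rational arithmetic (a quick check we add here; `F` is just shorthand for `fractions.Fraction`):

```python
from fractions import Fraction as F

# Antiderivative of x^2 (1-x)^2 = x^2 - 2x^3 + x^4.
def G(x):
    return x**3 / 3 - x**4 / 2 + x**5 / 5

# Desired probability: 120 * Integral_{1/4}^{3/4} x^2 (1-x)^2 / 4 dx
#                    = 30 * (G(3/4) - G(1/4)).
prob = 30 * (G(F(3, 4)) - G(F(1, 4)))
print(prob, float(prob))  # 203/256 = 0.79296875
```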

4. Chapter 6, theoretical exercise 18: For $a < s < 1$, $P(U > s \mid U > a) = P(U > s)/P(U > a) = \frac{1-s}{1-a}$, whence $U \mid U > a$ is uniform on $(a, 1)$. For $0 < s < a$, $P(U < s \mid U < a) = P(U < s)/P(U < a) = \frac{s}{a}$, whence $U \mid U < a$ is uniform on $(0, a)$.

5. Chapter 7, problem 6 (also find the variance):

$$E\left(\sum_{i=1}^{10} X_i\right) = \sum_{i=1}^{10} E(X_i) = 10(7/2) = 35$$

$$\text{Var}\left(\sum_{i=1}^{10} X_i\right) = \sum_{i=1}^{10} \text{Var}(X_i) = 10(35/12) = 175/6$$

6. Chapter 7, problem 11 (also find the variance when $p = \frac{1}{2}$): For $i$ between 2 and $n$, let $X_i$ equal 1 if a changeover occurs on the $i$th flip and 0 otherwise. Then $E(X_i) = P(i-1 \text{ is } H, i \text{ is } T) + P(i-1 \text{ is } T, i \text{ is } H) = 2p(1-p)$. Hence the expected number of changeovers is $E(\sum_{i=2}^n X_i) = \sum_{i=2}^n E(X_i) = 2(n-1)p(1-p)$.

In general, the $X_i$'s are not independent of each other. For instance, take $n=3$. The expected value of $X_2X_3$ is the probability that $X_2$ and $X_3$ both equal 1, which is $P(1 \text{ is } H, 2 \text{ is } T, 3 \text{ is } H) + P(1 \text{ is } T, 2 \text{ is } H, 3 \text{ is } T) = p^2(1-p) + p(1-p)^2 = p - p^2$, which in general is not equal to $E(X_2)E(X_3) = 4p^2(1-p)^2$.

However, when $p = \frac{1}{2}$, the probability of a changeover occurring at any stage is $\frac{1}{2}$, independently of everything that has happened before, up to and including the preceding toss. So in this case the $X_i$'s are indeed independent. Each $X_i$ has variance $1/4$, and $\text{Var}(\sum_{i=2}^n X_i) = \sum_{i=2}^n \text{Var}(X_i) = (n-1)/4$.

---PAGE_BREAK---

7. Chapter 7, problem 15 (also find the variance): Let $X_i$ denote the number of white balls taken from urn $i$, and $X$ the total number of white balls taken. Then $E(X) = \sum E(X_i) = \frac{1}{6} + \frac{3}{6} + \frac{6}{10} + \frac{2}{8} + \frac{3}{10} = 109/60$. Also, the $X_i$'s are independent of each other, so $\text{Var}(X) = \sum \text{Var}(X_i) = \frac{1}{6}(1-\frac{1}{6}) + \frac{3}{6}(1-\frac{3}{6}) + \frac{6}{10}(1-\frac{6}{10}) + \frac{2}{8}(1-\frac{2}{8}) + \frac{3}{10}(1-\frac{3}{10}) = 739/720$.

8. Chapter 7, problem 16:

$$E(X) = \int_{y>x} y \frac{1}{\sqrt{2\pi}} e^{-y^2/2} dy = e^{-x^2/2} / \sqrt{2\pi}.$$

9. Chapter 7, problem 22 (also find the variance): For $i = 1$ to 6, let $X_i$ denote the number of rolls after we've seen $i-1$ distinct numbers until we've seen $i$ distinct numbers. The $X_i$'s are independent geometric random variables with probability of success $p_i = (7-i)/6$, expected value $E(X_i) = 1/p_i = 6/(7-i)$, and variance $\text{Var}(X_i) = (1-p_i)/p_i^2 = 6(i-1)/(7-i)^2$. Hence $E(\sum X_i) = \sum E(X_i) = 6/6+6/5+6/4+6/3+6/2+6/1 = 14.7$ and $\text{Var}(\sum X_i) = \sum \text{Var}(X_i) = 0/6^2+6/5^2+12/4^2+18/3^2+24/2^2+30/1^2 = 38.99$.
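The expected value 14.7 is easy to confirm by simulation (our own quick Monte Carlo check, not part of the assignment):

```python
import random

def rolls_until_all_faces(rng):
    # Roll a fair die until all six faces have appeared; return the number of rolls.
    seen, n = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        n += 1
    return n

rng = random.Random(0)
trials = [rolls_until_all_faces(rng) for _ in range(100000)]
mean = sum(trials) / len(trials)
print(round(mean, 2))  # close to the exact value 14.7
```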

10. Chapter 7, problem 25: $P(N > n) = P(X_1 \ge X_2 \ge \cdots \ge X_n) = 1/n!$, so $E(N) = \sum_{n=0}^{\infty} P(N > n) = \sum_{n=0}^{\infty} 1/n! = e$.
samples/texts_merged/1922832.md
ADDED
---PAGE_BREAK---

# Sensor Placement for Fault Detection and Isolation based on Structural Models*

A. Rosich*

* Institut de Robòtica i Informàtica Industrial (CSIC-UPC),
Llorens i Artigas 4-6, 08028 Barcelona, Spain
(e-mail: albert.rosich@upc.edu).

**Abstract:** This paper presents a new algorithm for solving the sensor placement problem in order to achieve the required fault detectability and isolability. The method is based on structural models, which means that non-linear differential systems can be handled efficiently. The developed algorithm does not need to compute any residual, since it relies on structural model properties; the residual computation burden is thus avoided. The method can also handle faults in the extra placed sensors, as well as the possibility of installing repeated sensors.

**Keywords:** Sensor placement, Structural model, Minimal Set Cover, Fault detection and isolation.

## 1. INTRODUCTION

The diagnosis task is often made difficult by insufficient, incomplete or useless process observations. The problem of how to obtain the required process observations is known as the *sensor placement problem*. Several works have tackled this problem, and most of them present a graph-based approach (Bagajewicz, 2000; Travé-Massuyès et al., 2006; Commault et al., 2008; Krysander and Frisk, 2008; Rosich et al., 2009). Graph-based model representations are suitable for the sensor placement problem since they allow one to dispense with the analytical expressions, which are not always available at a first stage of the design. Also, graph-based tools are free of numerical problems and in general have better computational efficiency. However, only best-case results can be computed from graph-based methods. A graph model representation widely used in the area of model-based diagnosis is the *structural model* representation (Blanke et al., 2006), which will be used in this paper.

This paper specifically focuses on analysing which sensors should be installed in a process in order to achieve predefined fault detectability and isolability properties. The method presented here is rather close to Krysander and Frisk (2008), since the same diagnosis framework is used and exactly the same problem is solved. Furthermore, the sensor placement problem is also approached by first adding sensors for detectability and then, in a second step, adding sensors for isolability. Nevertheless, the strategy adopted here differs from the work of Krysander and Frisk (2008) in the sense that the contribution of each individual sensor to the detectability and isolability of the system is now sought.

A Matlab implementation of all the algorithms presented here can be downloaded from http://www.iri.upc.edu/people/arosich/Software.html.

## 2. STRUCTURAL ANALYSIS REVIEW

An analytical model of a process typically consists of a set of equations describing the interaction among process variables. A structural model is then an abstraction of the model where the analytical expressions are neglected and only the structure of the model is preserved. More formally, given an analytical model, the corresponding structural model is represented by a bipartite graph with two vertex sets: the set $M$ of model equations and the set $X$ of unknown variables. An edge $(e, x)$, for $e \in M$ and $x \in X$, means that variable $x$ is involved in equation $e$. It should be noted that known process variables are not included in the structural model since they will not be used throughout the paper. Furthermore, $\text{var}(M)$ is introduced to denote the subset of unknown variables adjacent to the equations in $M$, that is, $\text{var}(M)$ are the unknown variables involved in $M$.

Structural models have been widely studied in the fault diagnosis field. As a result, several model decompositions have been developed in order to exploit model properties. Next, the Dulmage-Mendelsohn decomposition (Dulmage and Mendelsohn, 1958) is reviewed. A thorough description of this decomposition and its properties can be found in Murota (2000).

Given a structural model $M$, the following function, called the *surplus function* (Lovász and Plummer, 1986), is defined as

$$p_0(E) = |\text{var}(E)| - |E| \quad (1)$$

for any $E \subseteq M$, with $|\cdot|$ denoting the cardinality of a set. It is worth noting that the surplus function $p_0$ is a submodular function. Therefore, it holds that

$$p_0(E_1 \cup E_2) + p_0(E_1 \cap E_2) \leq p_0(E_1) + p_0(E_2) \quad (2)$$

for any two sets of equations $E_1$ and $E_2$.

* This work has been funded by the Spanish Ministry of Science and Technology through the CICYT project WATMAN (ref. DPI2009-13744), and by the European Commission through contract i-Sense (ref. FP7-ICT-2009-6-270428).

---PAGE_BREAK---

From the function $p_0$, the family of all minimizers of the surplus can be defined as

$$ \mathcal{L}_{\min} = \{E \subseteq M \mid p_0(E) \le p_0(E'), \forall E' \subseteq M\}. \quad (3) $$

This family of minimizers, $\mathcal{L}_{\min}$, forms a lattice. Hence, it can be stated that $E_i \cup E_j \in \mathcal{L}_{\min}$ and $E_i \cap E_j \in \mathcal{L}_{\min}$, for any $E_i, E_j \in \mathcal{L}_{\min}$.

Let $E_0 \subset E_1 \subset \dots \subset E_{b-1} \subset E_b$ be any maximal ascending chain of $\mathcal{L}_{\min}$. Then, the partition of the Dulmage-Mendelsohn decomposition on the equation set is computed from the sets in the maximal ascending chain according to

$$
\begin{align*}
M_0 &= E_0, \\
M_k &= E_k \setminus E_{k-1}, && (k = 1, \dots, b), \\
M_\infty &= M \setminus E_b.
\end{align*}
$$

In the diagnosis field, a coarser partition into three main parts is usually defined from the above partition as

* the over-determined part $M^+ = M_0$,

* the just-determined part $M^0 = \bigcup_{k=1}^b M_k$,

* the under-determined part $M^- = M_\infty$.

From the diagnosis point of view, a key property of the over-determined part is that it contains more equations than unknown variables, i.e., $|M^+| > |\text{var}(M^+)|$, and hence redundancy exists in $M^+$. Indeed, the $M^+$ part of the model is the only part useful for performing diagnosis, see Blanke et al. (2006).
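On small examples these definitions can be checked directly by brute force. The sketch below (our illustration, with a made-up three-equation model) enumerates all equation subsets, finds the minimizers of $p_0$, and confirms that they form a lattice whose smallest element is $M^+$:

```python
from itertools import combinations

# Toy structural model: equation -> unknown variables involved (illustrative only).
model = {
    "e1": {"x1"},
    "e2": {"x1"},          # e1 and e2 both involve only x1 -> redundancy
    "e3": {"x1", "x2"},    # x2 appears only in e3 -> just-determined
}

def p0(E):
    # Surplus function (1): p0(E) = |var(E)| - |E|.
    variables = set().union(*(model[e] for e in E)) if E else set()
    return len(variables) - len(E)

subsets = [frozenset(c) for r in range(len(model) + 1)
           for c in combinations(model, r)]
p_min = min(p0(E) for E in subsets)
minimizers = [E for E in subsets if p0(E) == p_min]

# Lattice property: minimizers are closed under union and intersection.
assert all(a | b in minimizers and a & b in minimizers
           for a in minimizers for b in minimizers)

# The smallest minimizer E0 is the over-determined part M+.
M_plus = frozenset.intersection(*minimizers)
print(sorted(M_plus))  # ['e1', 'e2']
```

Here the minimizers are $\{e_1, e_2\}$ and $\{e_1, e_2, e_3\}$, both with surplus $-1$, so $M^+ = \{e_1, e_2\}$ and $M^0 = \{e_3\}$.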

# 3. FAULT DIAGNOSIS FRAMEWORK FOR SENSOR PLACEMENT

The fault diagnosis framework on which this paper is based is presented in this section. First, some theoretical background and standard definitions are reviewed. Then, the characterisation of sensors in the framework, as well as the formalization of the sensor placement problem, is introduced.

## 3.1 Fault detectability and isolability analysis

Usually, to perform fault diagnosis based on structural models, it is assumed that different subsets of model equations describe the expected behaviour of process components. Then, when the component equations become inconsistent with the process observations, it may be suspected that the corresponding component is not working properly, i.e., the component is faulty. Here, it will be assumed that a fault can only cause inconsistency in one model equation. Thus, let $F$ be the set of faults; then there exists a fault equation $e_f \in M$ for each fault $f \in F$.

The consistency of a fault equation, together with other model equations, can be checked if there are more equations than unknown variables. This fact motivates the following fault detectability definition (Krysander and Frisk, 2008).

**Definition 1.** (Fault detectability). A fault $f \in F$ is detectable in a model $M$ as long as $e_f \in M^+$. ◇

A fault $f_i \in F$ can be isolated from another fault $f_j \in F$ when the inconsistent set of equations involves the equation of fault $f_i$ but not the equation of fault $f_j$. Next, the formal fault isolability definition (Krysander and Frisk, 2008) is introduced.

**Definition 2.** (Fault isolability). A fault $f_i \in F$ is isolable from fault $f_j$ in a model $M$ as long as $e_{f_i} \in (M \setminus \{e_{f_j}\})^+$. ◇

Given a structural model $M$ and a set of predefined faults $F$, diagnosability analysis can be performed from Definitions 1 and 2. The class of detectable faults is therefore computed as

$$ D = \{f \in F \mid e_f \in M^+\}. \qquad (4) $$

Fault isolability can be characterised by means of pairs of isolable faults. Here, it is assumed that all faults in the isolability analysis are detectable, which implies that the isolability relation is symmetric (Krysander and Frisk, 2008), i.e., if $f_i$ is isolable from $f_j$ then $f_j$ is isolable from $f_i$. Let $D$ be the ordered set of detectable faults; then fault isolability is characterised as

$$ I = \{(f_i, f_j) \in D \times D \mid e_{f_i} \in (M \setminus \{e_{f_j}\})^+, \text{ for } i < j\}. \quad (5) $$

For convenience, the diagnosability analysis is introduced as a procedure $(D, I) = \text{Diagnosability}(M, F)$ which, given a structural model $M$ and a set of faults $F$, returns the set of detectable faults $D$ and the set of all isolable fault pairs $I$, computed according to (4) and (5), respectively.
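A brute-force sketch of such a procedure is given below (our illustration, not the paper's Matlab implementation; the over-determined part is computed by exhaustive surplus minimization, which is only viable for tiny models, and the toy model and fault names are made up):

```python
from itertools import combinations

def overdetermined(model):
    # M+ = smallest minimizer of the surplus p0(E) = |var(E)| - |E| (brute force).
    subsets = [frozenset(c) for r in range(len(model) + 1)
               for c in combinations(model, r)]
    def p0(E):
        variables = set().union(*(model[e] for e in E)) if E else set()
        return len(variables) - len(E)
    p_min = min(p0(E) for E in subsets)
    return frozenset.intersection(*(E for E in subsets if p0(E) == p_min))

def diagnosability(model, faults):
    # faults maps each fault f to its fault equation e_f; implements (4) and (5).
    m_plus = overdetermined(model)
    D = sorted(f for f, ef in faults.items() if ef in m_plus)
    I = {(fi, fj)
         for i, fi in enumerate(D) for fj in D[i + 1:]
         if faults[fi] in overdetermined(
             {e: v for e, v in model.items() if e != faults[fj]})}
    return D, I

# Toy model: two redundant fault equations on x1, plus a chain determining x1.
model = {"e1": {"x1"}, "e2": {"x1"}, "e3": {"x1", "x2"}, "e4": {"x2"}}
faults = {"f1": "e1", "f2": "e2"}
D, I = diagnosability(model, faults)
print(D, I)  # ['f1', 'f2'] {('f1', 'f2')}
```

Removing either fault equation still leaves the remaining one in the over-determined part, so the two faults are mutually isolable, matching the symmetry noted above.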

## 3.2 Sensor placement problem formulation

Sensors are regarded as system components that may or may not be installed. Thus, installing sensors implies extending the model by adding the equations describing the sensor behaviour. Here, for the sake of simplicity, it is assumed that a sensor can be modelled by means of a single sensor equation of the form $y = x$, where $x \in X$ is the measured process variable and $y$ represents the reading of the sensor, i.e., a known variable or observation. Therefore, the set of candidate sensors is characterised as a subset of unknown variables $S \subseteq X$.

It is worth noting that installing sensors increases the number of model equations whereas the number of unknown variables remains unaltered. This means that, in general, the more sensors are installed, the better the diagnosis performance that can be expected. Indeed, maximum diagnosis performance can be determined by performing the diagnosability analysis with all possible sensors installed in the system. Let $M_S$ be the sensor equations of all candidate sensors in $S$; the maximum fault detectability and isolability attainable by placing sensors is determined as

$$ (D_{max}, I_{max}) = \text{Diagnosability}(M \cup M_S, F). $$

This establishes an upper bound on the required fault detectability and isolability specifications in the sensor placement problem. However, diagnosis specifications other than the maximum ones may be desired. The diagnosis specifications, $D$ and $I$, are feasible as long as $D \subseteq D_{max}$ and $I \subseteq I_{max}$.

Now, the sensor placement problem to be solved in this paper can be formulated as follows (Krysander and Frisk, 2008).

**Problem 3.** (Sensor placement problem). Given a structural model $M$ of the process, a set of candidate sensors

---PAGE_BREAK---

$S$ to be installed in the process and the required diagnosis specifications, $D$ and $I$, defined from the set of process faults $F$, find all minimal sensor configurations $S' \subseteq S$ such that the diagnosis specifications are fulfilled within the model $M \cup M_{S'}$. $\diamond$

The diagnosis specifications are fulfilled as long as $D \subseteq D'$ and $I \subseteq I'$, where $D'$ and $I'$ represent the detectability and isolability derived from a sensor configuration $S' \subseteq S$, i.e., $(D', I') = \text{Diagnosability}(M \cup M_{S'}, F)$. Moreover, note that the minimal sensor configurations characterise all possible solutions, since any combination of these minimal configurations is also a solution of the sensor placement problem.
|
| 115 |
+
|
| 116 |
+
## 4. SENSOR PLACEMENT FOR FAULT DIAGNOSIS
|
| 117 |
+
|
| 118 |
+
First, the theoretical concepts on which the present approach is based will be introduced. This provides the basis for solving the sensor placement problem for fault detectability and isolability.

The sensor placement problem is then solved in two main steps. The first step deals with those sensors that solve the problem only for fault detectability, while in the second step the sensors solving the fault isolability problem are computed. The *detectability problem* and the *isolability problem* are presented separately. Then, they are combined to finally solve the problem for both detectability and isolability.

### 4.1 Preliminary concepts
The basic idea of this approach is to study the detectability and isolability achieved by installing every candidate sensor individually. Then, the complete solution is derived from the result obtained when each individual sensor is considered. In order to do so, it is assumed that the model has no under-determined part, i.e., $M^- = \emptyset$. Lemma 4 shows that the overdetermined equations remain in the overdetermined part when new equations are taken into account.
Lemma 4. Let $M_1$ and $M_2$ be two arbitrary sets of equations such that $M_1 \subseteq M_2$, then it holds that
$$M_1^+ \subseteq M_2^+. \quad (6)$$
**Proof.** See Lemma 7 in Krysander et al. (2008). $\square$
Theorem 5 shows that the overdetermined part of the model with a sensor set $S$ installed can be deduced from the overdetermined parts of the models obtained by placing each single sensor $s \in S$ one at a time. Thus, from the diagnosability analysis of every single sensor, the diagnosability properties of the process for any combination of sensors can be straightforwardly obtained.

Theorem 5. Let M be a structural model with no under-determined part and S a set of sensors. Then, it holds that
$$\bigcup_{s \in S} (M \cup \{e_s\})^+ = (M \cup M_S)^+, \quad (7)$$
where $M_S$ is the sensor equation set of $S$ and $e_s$ is the equation of the sensor $s \in S$.

**Proof.** Let $\alpha$ be the minimal surplus of all subsets of $M$; then $p_0(M') \ge \alpha$ for any $M' \subseteq M$. Since $\text{var}(M_S) \subseteq \text{var}(M)$, it holds that $p_0(M' \cup M_S) \ge \alpha - n$ for any $M' \subseteq M$, where $n$ is the number of equations in $M_S$. This implies that any subset of $M \cup M_S$ with minimal surplus contains the equation set $M_S$. Hence, the set $(M \cup M_S)^+$ can be rewritten as $E \cup M_S$ where $E \subseteq M$. Then $(M \cup M_S)^+$ is the minimal set with minimum surplus only if $E$ is the minimal set in $M$ with $p_0(E) = \alpha$ and $\text{var}(M_S) \subseteq \text{var}(E)$.

Since $M_S$ is an arbitrary set of sensor equations, the above reasoning also holds for single sensor equations. Therefore, it follows for any $e_s \in M_S$ that
$$ (M \cup \{e_s\})^+ = K \cup \{e_s\}, \qquad (8) $$
where $K \subseteq M$ is the minimal set in M with $p_0(K) = \alpha$ and $\text{var}(\{e_s\}) \subseteq \text{var}(K)$.
Now, let $\mathcal{K}$ be the family of K sets obtained for every $e_s \in M_S$, i.e.,
$$\mathcal{K} = \{K \subseteq M \mid K \cup \{e_s\} = (M \cup \{e_s\})^+,\; e_s \in M_S\}, \quad (9)$$

and also let $\mathcal{L}_\alpha$ be the family of the surplus minimizers, i.e.,
$$\mathcal{L}_\alpha = \{U \subseteq M \mid p_0(U) = \alpha\}. \qquad (10)$$
Then, it holds that the set $E$ and all the sets $K \in \mathcal{K}$ are contained in $\mathcal{L}_\alpha$. Recall that the minimizers in $\mathcal{L}_\alpha$ define a lattice, which means that $\cup_{K \in \mathcal{K}} K \in \mathcal{L}_\alpha$. Note also that $\text{var}(M_S) \subseteq \text{var}(\cup_{K \in \mathcal{K}} K)$. Hence, since $E$ is the minimal set in $\mathcal{L}_\alpha$ with $\text{var}(M_S) \subseteq \text{var}(E)$, it follows that $\cup_{K \in \mathcal{K}} K \supseteq E$. From this, it can be deduced that

$$
\begin{align}
\bigcup_{s \in S} (M \cup \{e_s\})^+ &= \Big(\bigcup_{K \in \mathcal{K}} K\Big) \cup \Big(\bigcup_{s \in S} \{e_s\}\Big) \nonumber \\
&= \Big(\bigcup_{K \in \mathcal{K}} K\Big) \cup M_S \supseteq E \cup M_S = (M \cup M_S)^+. \tag{11}
\end{align}
$$

Finally, from Lemma 4 it directly holds that $(M \cup \{e_s\})^+ \subseteq (M \cup M_S)^+$ since $e_s \in M_S$. Then, it can be straightforwardly stated that
$$\bigcup_{s \in S} (M \cup \{e_s\})^+ \subseteq (M \cup M_S)^+. \quad (12)$$
Thus, the proof is concluded from (11) and (12). $\square$
From Theorem 5 and the detectability and isolability definitions, it can be seen that the detectability and isolability properties of the process with a sensor configuration installed can be deduced by gathering the properties achieved by installing each individual sensor one at a time.

### 4.2 Sensor placement for detectability
In this subsection, the sensor placement sub-problem for fault detectability is addressed. Recall that $M^- = \emptyset$ by assumption and that, according to Definition 1, a fault is detectable as long as its corresponding fault equation is in $M^+$. Therefore, to fulfil detectability it suffices to focus on the fault equations in the just-determined part of the model.

Now, let $F_D(s)$ be the class of detectable faults when sensor s is chosen for installation, i.e.,
$$F_D(s) = \{f \in D | e_f \in (M \cup \{e_s\})^+\}. \quad (13)$$
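The classes in (13) can be computed structurally: $M^+$ is obtained from a maximum matching between equations and unknowns as the set of equations reachable by alternating paths from unmatched equations, a standard Dulmage-Mendelsohn-style computation. The sketch below is our own illustration, not the authors' implementation; it uses the five-equation example of Section 6 and treats $x_i$ and $\dot{x}_i$ as one structural variable.

```python
def overdetermined_part(model):
    """model: dict equation -> set of unknown variables; returns M^+."""
    match_eq, match_var = {}, {}

    def augment(e, seen):
        # Kuhn's augmenting-path step for bipartite maximum matching.
        for v in model[e]:
            if v not in seen:
                seen.add(v)
                if v not in match_var or augment(match_var[v], seen):
                    match_eq[e] = v
                    match_var[v] = e
                    return True
        return False

    for e in model:
        augment(e, set())

    # M^+ = equations reachable by alternating paths from unmatched equations.
    frontier = [e for e in model if e not in match_eq]
    plus = set(frontier)
    while frontier:
        e = frontier.pop()
        for v in model[e]:
            matched = match_var.get(v)
            if matched is not None and matched not in plus:
                plus.add(matched)
                frontier.append(matched)
    return plus

# Structural incidence of the five-equation example of Section 6.
M = {"e1": {"x1", "x2", "x5"}, "e2": {"x2", "x3", "x4"},
     "e3": {"x3", "x5"}, "e4": {"x4", "x5"}, "e5": {"x5"}}
fault_eq = {"f1": "e3", "f2": "e4", "f3": "e5"}  # e_{f_i}

def detectable_faults(sensor_var):
    # F_D(s) of (13) for a sensor measuring sensor_var.
    plus = overdetermined_part({**M, "es": {sensor_var}})
    return {f for f, e in fault_eq.items() if e in plus}

print(sorted(detectable_faults("x1")))  # -> ['f1', 'f2', 'f3']
print(sorted(detectable_faults("x5")))  # -> ['f3']
```

The outputs reproduce the classes $F_D(s_1)$ and $F_D(s_5)$ listed in Section 6.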
Based on Theorem 5, it can be stated that a sensor configuration $S' \subseteq S$ is a solution for the detectability problem as long as
$$ \bigcup_{s \in S'} F_D(s) = D. \quad (14) $$
Hence, any set of $F_D(s)$ classes that covers $D$ characterises a possible solution. Therefore, given the set $\mathcal{F}_D = \{F_D(s_1), F_D(s_2), \dots, F_D(s_n)\}$ (for $n = |S|$) of all detectability fault classes, the sensor placement problem for fault detectability is solved once all minimal subsets $C \subseteq \mathcal{F}_D$ that cover $D$ are found.

Algorithm 1 is proposed for solving Problem 3 when only the detectability specification is sought. First, the detectability fault classes $F_D(s)$ are generated. Then, all minimal covers are computed by means of the procedure **MinimalSetCovers**, which is introduced in Section 5. Finally, in the last step, the minimal sensor sets $S_D = \{S_1, S_2, \dots, S_m\}$ that solve the problem are obtained from the family of all minimal covers $\Gamma = \{C_1, C_2, \dots, C_m\}$.

**Algorithm 1** $S_D = \text{SPforDetectability}(M, S, D)$
$$ \begin{array}{l} \mathcal{F}_D = \emptyset \\ \quad \text{for all } s \in S \text{ do} \\ \quad \quad F_D(s) = \{f \in D \mid e_f \in (M \cup \{e_s\})^+\} \\ \quad \quad \mathcal{F}_D = \mathcal{F}_D \cup \{F_D(s)\} \\ \quad \text{end for} \\ \Gamma = \text{MinimalSetCovers}(\mathcal{F}_D, D) \\ S_D = \{S_i \subseteq S \mid \forall s \in S_i \Rightarrow F_D(s) \in C_i, C_i \in \Gamma\} \end{array} $$
### 4.3 Sensor placement for isolability
This subsection explains how to place sensors for fault isolability. Theorem 5 can be easily extended to the isolability case by replacing $M$ with $M \setminus \{e\}$ in (7). Then, the isolability performance achieved by installing a set of sensors can be determined by studying the isolability achieved with each individual sensor. Similarly to the detectability case, a class of isolable fault pairs $F_I(s)$ is defined for each sensor $s \in S$,

$$ F_I(s) = \{(f_i, f_j) \in I \mid e_{f_i} \in ((M \setminus \{e_{f_j}\}) \cup \{e_s\})^+\}. \quad (15) $$
Therefore, a sensor configuration $S' \subseteq S$ is a solution for the isolability problem if its corresponding isolability fault classes $F_I(s)$ cover all the pairs in $I$, i.e.,
$$ \bigcup_{s \in S'} F_I(s) = I. \quad (16) $$
This means that the same approach used for the detectability case in Subsection 4.2 can be applied here. Let $\mathcal{F}_I = \{F_I(s_1), F_I(s_2), \dots, F_I(s_n)\}$ be the set of all isolability fault classes; then all minimal subsets $C \subseteq \mathcal{F}_I$ that cover $I$ characterise the solution of the sensor placement problem for fault isolability. Note, however, that now the elements to be covered are pairs of faults instead of single faults. Algorithm 2 summarizes the sensor placement approach for the fault isolability case.

### 4.4 Sensor placement for fault detectability and isolability
As mentioned, the problem of placing sensors for fault detectability and isolability is directly solved in two steps.
**Algorithm 2** $S_I = \text{SPforIsolability}(M, S, I)$
$$
\begin{array}{l}
\mathcal{F}_I = \emptyset \\
\quad \textbf{for all } s \in S \textbf{ do} \\
\quad \quad F_I(s) = \{(f_i, f_j) \in I \mid e_{f_i} \in ((M \setminus \{e_{f_j}\}) \cup \{e_s\})^+\} \\
\quad \quad \mathcal{F}_I = \mathcal{F}_I \cup \{F_I(s)\} \\
\quad \textbf{end for} \\
\Gamma = \textbf{MinimalSetCovers}(\mathcal{F}_I, I) \\
S_I = \{S_i \subseteq S \mid \forall s \in S_i \Rightarrow F_I(s) \in C_i, C_i \in \Gamma\}
\end{array}
$$

First, the detectability problem is solved. Then, based on the obtained results, the isolability problem is solved. Before solving the isolability problem, the sensor subset $S' \subseteq S$ computed in the detectability step is added to the model through its corresponding sensor equation set, i.e., $M \cup M_{S'}$, whereas the sensors in $S'$ are removed from the candidate sensor set, i.e., $S \setminus S'$. This procedure is performed for each sensor set $S' \in S_D$.

Algorithm 3 is introduced to solve the sensor placement problem for fault detection and isolation. All minimal sensor subsets of $S$ solving Problem 3 are computed.
**Algorithm 3** $S = \text{SPforFDI}(M, S, D, I)$
$$
\begin{array}{l}
S = \emptyset \\
S_D = \text{SPforDetectability}(M, S, D) \\
\quad \textbf{for all } S' \in S_D \textbf{ do} \\
\quad \quad \text{Construct } M_{S'} \\
\quad \quad M_i = M \cup M_{S'} \\
\quad \quad S_i = S \setminus S' \\
\quad \quad S_I = \text{SPforIsolability}(M_i, S_i, I) \\
\quad \quad S = S \cup \{S' \cup S_j \mid S_j \in S_I\} \\
\quad \textbf{end for} \\
\textbf{return } S
\end{array}
$$

**Sensor faults** As mentioned before, sensors are system components and can thus be faulty. However, faults in sensors are not taken into account in Algorithm 3. In order to handle these faults, the sensor equation $e_s$ will here be used to represent the corresponding sensor fault.

The presence of a sensor fault in the analysis depends on whether the sensor is regarded as installed, which hinders the characterization of the diagnosis specifications. Nevertheless, this characterization is relaxed by the following statements (Krysander and Frisk, 2008):

(a) A fault of a sensor placed for solving the detectability problem is always detectable.
(b) Faults of two sensors placed for solving the detectability problem are always isolable from each other.

(c) A fault of a sensor placed for solving the isolability problem is always isolable from any other fault.
Therefore, according to these statements, only isolability between faults of sensors placed for the detectability problem needs to be specified in order to handle sensor faults. Let $S_f = \{s_1, s_2, \dots, s_m\} \subseteq S$ be a subset of sensors, let $F_{S_f} = \{f_{s_1}, f_{s_2}, \dots, f_{s_m}\}$ be the set of corresponding sensor faults that need to be isolable from any other fault, and let $S' \in S_D$ be a solution of the detectability problem; then the isolability specification when sensor faults are regarded is defined as

$$ I_{S_f} = \{(f_i, f_{s_j}) \in D \times F_{S_f} \mid s_j \in S' \cap S_f\}. \quad (17) $$

These new isolability specifications should be computed for each sensor set $S' \in S_D$ and inserted into the $I$ set before the **SPforIsolability** call in Algorithm 3.

*Repeated sensors* It may be possible to install the same sensor more than once. From Theorem 5, it follows that repeated sensors are only useful to improve sensor fault isolability. Furthermore, let $s'$ denote the repeated sensor of $s \in S$; then it holds that $(\{e_s, e_{s'}\})^+ = \{e_s, e_{s'}\}$, which implies that the fault of $s$ becomes completely isolable from any other fault if the repeated sensor $s'$ is installed in the process. This means that installing more than one repeated sensor per variable does not improve any diagnosis specification.

Based on this discussion and statement (c), it can be concluded that only the sensors placed to solve the detectability problem need to be repeated, and at most once. Let $S_r \subseteq S$ be the subset of sensors that can be repeated and $S' \in S_D$ a solution of the detectability problem; then the set of candidate sensors to solve the isolability problem is $(S \setminus S') \cup (S_r \cap S')$.

Algorithm 3 is modified in order to handle both sensor faults and repeated sensors. The resulting procedure is summarized in Algorithm 4 where now the set $S_f$ of faulty sensors and the set $S_r$ of repeated sensors are specified.
**Algorithm 4** $S = \text{SPforFDI}(M, S, D, I, S_f, S_r)$

$$
\begin{array}{l}
S = \emptyset \\
S_D = \text{SPforDetectability}(M, S, D) \\
\quad \textbf{for all } S' \in S_D \textbf{ do} \\
\quad \quad \text{Construct } M_{S'} \\
\quad \quad I_{S_f} = \{(f_i, f_{s_j}) \mid f_i \in D, s_j \in (S' \cap S_f)\} \\
\quad \quad M_i = M \cup M_{S'} \\
\quad \quad S_i = (S \setminus S') \cup (S_r \cap S') \\
\quad \quad I_i = I \cup I_{S_f} \\
\quad \quad S_I = \text{SPforIsolability}(M_i, S_i, I_i) \\
\quad \quad S = S \cup \{S' \uplus S_j \mid S_j \in S_I\} \\
\quad \textbf{end for} \\
\textbf{return } S
\end{array}
$$

Note that the operator $\uplus$ is now used for the union of $S' \in S_D$ and $S_j \in S_I$ to indicate that multiple occurrences of the same element are allowed in the solution.

## 5. MINIMAL SET COVERS ALGORITHM

There exists a duality between the *hitting set problem* and the *set cover problem*. Hence, algorithms used to solve hitting set problems can be used to solve set cover problems, and vice versa.

Let $\mathcal{A}$ be a set of elements and $\mathcal{B}$ be a family of subsets of $\mathcal{A}$. A subset $H \subseteq \mathcal{A}$ is a *hitting set* if it has non-empty intersection with every set $B \in \mathcal{B}$; it is then said that $H$ hits each and every set in $\mathcal{B}$. On the other hand, a subfamily of subsets $\mathcal{C} \subseteq \mathcal{B}$ is a *cover* if the union of its sets is $\mathcal{A}$; it is then said that the sets in $\mathcal{C}$ cover $\mathcal{A}$. The duality of both problems can be seen by representing both sets in a bipartite graph $G(\mathcal{A}, \mathcal{B}; E)$ where the set of edges is defined as follows: $(a, B) \in E$ if the element-node $a \in \mathcal{A}$ is contained in the set-node $B \in \mathcal{B}$. Then, a hitting set is any subset of nodes in $\mathcal{A}$ whose adjacent nodes include all the nodes in $\mathcal{B}$. Analogously, a cover is any subset of nodes in $\mathcal{B}$ whose adjacent nodes include all the nodes in $\mathcal{A}$.

In this paper, finding all minimal covers is of interest in order to solve the sensor placement problem. A cover $\mathcal{C} \subseteq \mathcal{B}$ is minimal if there is no subfamily $\mathcal{C}' \subset \mathcal{C}$ that is a cover. The algorithm presented in De Kleer and Williams (1987) to compute minimal hitting sets is used here to find all minimal covers. This is done by first determining the set of nodes in $\mathcal{B}$ adjacent to each node in $\mathcal{A}$ and then solving the minimal hitting set problem.

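As a concrete illustration of this duality, here is a brute-force sketch of ours (not De Kleer and Williams' algorithm): a subfamily of $\mathcal{B}$ covers $\mathcal{A}$ exactly when it hits every adjacency set $\{B \in \mathcal{B} \mid a \in B\}$, so the minimal covers are the minimal hitting sets of those adjacency sets.

```python
from itertools import combinations

def minimal_set_covers(family, universe):
    """All minimal subfamilies of `family` (dict: name -> set) covering
    `universe`, found as minimal hitting sets of the adjacency sets."""
    # Adjacent set-nodes of each element-node (the sets built in Algorithm 5).
    adjacency = [frozenset(n for n, s in family.items() if a in s)
                 for a in universe]
    if any(not adj for adj in adjacency):
        return []  # some element lies in no set, so no cover exists
    covers = []
    for r in range(1, len(family) + 1):
        for combo in map(frozenset, combinations(sorted(family), r)):
            # keep combo only if it hits every adjacency set and is minimal
            if all(adj & combo for adj in adjacency) and \
                    not any(c < combo for c in covers):
                covers.append(combo)
    return covers

# Toy check: three sets over {1, 2, 3}; every minimal cover needs two sets.
family = {"B1": {1, 2}, "B2": {2, 3}, "B3": {1, 3}}
covers = minimal_set_covers(family, {1, 2, 3})
print(sorted(sorted(c) for c in covers))
# -> [['B1', 'B2'], ['B1', 'B3'], ['B2', 'B3']]
```

Enumerating subfamilies by increasing size keeps the minimality test to a simple proper-subset check against covers already found; as noted below, the exact problem is non-polynomial, so this sketch is only practical for small families.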
**Algorithm 5** $\Gamma = \text{MinimalSetCovers}(\mathcal{B}, \mathcal{A})$

$$
\begin{array}{l}
\mathcal{E} = \emptyset \\
\quad \textbf{for all } a \in \mathcal{A} \textbf{ do} \\
\quad \quad \mathcal{E} = \mathcal{E} \cup \{\{B \in \mathcal{B} \mid a \in B\}\} \quad \{\text{Find the adjacent nodes}\} \\
\quad \textbf{end for} \\
\text{Use iteratively the De Kleer and Williams (1987)} \\
\quad \text{algorithm for each set in } \mathcal{E} \\
\text{Store the minimal hitting sets in } \Gamma \\
\textbf{return } \Gamma
\end{array}
$$

Algorithm 5 returns, by means of $\Gamma$, all minimal subfamilies of $\mathcal{B}$ that cover $\mathcal{A}$. It is important to point out that this algorithm is non-polynomial, and hence computational issues may be expected for a large number of sets in $\mathcal{B}$.

## 6. EXAMPLE

In this section, an illustrative example is shown in order to clarify the procedure steps of Algorithm 4. The same example introduced in Krysander and Frisk (2008) is used here. The analytical model consists of five equations, $M = \{e_1, e_2, e_3, e_4, e_5\}$, of the following form:

$$
\begin{align*}
e_1: & \ \dot{x}_1 = -x_1 + x_2 + x_5 \\
e_2: & \ \dot{x}_2 = -2x_2 + x_3 + x_4 \\
e_3: & \ \dot{x}_3 = -3x_3 + x_5 + f_1 \\
e_4: & \ \dot{x}_4 = -x_4 + x_5 + f_2 \\
e_5: & \ \dot{x}_5 = -5x_5 + u + f_3
\end{align*}
$$

where $x_i$ (for $i = 1, \dots, 5$) are the unknown process variables. Three faults $F = \{f_1, f_2, f_3\}$ are defined, which correspond to inconsistencies in equations $e_3$, $e_4$ and $e_5$, respectively (i.e., $e_{f_1} = e_3$, $e_{f_2} = e_4$ and $e_{f_3} = e_5$). Moreover, all unknown variables can be measured by introducing the set of candidate sensors $S = \{s_1, s_2, s_3, s_4, s_5\}$, where sensor $s_i$ measures variable $x_i$.

The required diagnosis specifications are maximum fault detectability and isolability. Therefore, after performing the diagnosability analysis with all sensors installed, the following detectability and isolability sets are defined:
$$
D = D_{max} = \{f_1, f_2, f_3\},
$$

$$
I = I_{max} = \{(f_1, f_2), (f_1, f_3), (f_2, f_3)\}.
$$

Furthermore, sensor faults and repeated sensors for all $s \in S$ will be taken into account, i.e., $S_f = \{s_1, s_2, s_3, s_4, s_5\}$ and $S_r = \{s_1, s_2, s_3, s_4, s_5\}$.
First, sensors for fault detectability are computed, i.e., $S_D = \text{SPforDetectability}(M, S, D)$. Algorithm 1 generates the following fault detectability classes for each sensor:

$$
\begin{align*}
F_D(s_1) &= \{f_1, f_2, f_3\} \\
F_D(s_2) &= \{f_1, f_2, f_3\} \\
F_D(s_3) &= \{f_1, f_3\} \\
F_D(s_4) &= \{f_2, f_3\} \\
F_D(s_5) &= \{f_3\}
\end{align*}
$$

Then, the minimal sets that cover D are
$$
\Gamma = \{\{F_D(s_1)\}, \{F_D(s_2)\}, \{F_D(s_3), F_D(s_4)\}\},
$$

which characterizes the solution
$$
S_D = \{\{s_1\}, \{s_2\}, \{s_3, s_4\}\}.
$$

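This detectability step can be reproduced with a small brute-force search over sensor subsets (our own sketch; the $F_D$ classes are the ones listed above):

```python
from itertools import combinations

# Fault detectability classes F_D(s) from the example.
F_D = {"s1": {"f1", "f2", "f3"}, "s2": {"f1", "f2", "f3"},
       "s3": {"f1", "f3"}, "s4": {"f2", "f3"}, "s5": {"f3"}}
D = {"f1", "f2", "f3"}

S_D = []
for r in range(1, len(F_D) + 1):
    for combo in map(frozenset, combinations(sorted(F_D), r)):
        covers_D = set().union(*(F_D[s] for s in combo)) == D
        # keep only minimal covers: no already-found cover may be a subset
        if covers_D and not any(c < combo for c in S_D):
            S_D.append(combo)

print(sorted(sorted(c) for c in S_D))  # -> [['s1'], ['s2'], ['s3', 's4']]
```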
After solving the detectability problem, the isolability problem is solved for each set in $S_D$. First, $S' = \{s_1\}$ is chosen, with

$$
\begin{align*}
M_i &= M \cup \{e_{s_1}\}, && \text{for } e_{s_1} : x_1 = y_1, \\
S_i &= S, \\
I_i &= I \cup \{(f_1, f_{s_1}), (f_2, f_{s_1}), (f_3, f_{s_1})\}.
\end{align*}
$$

Algorithm 2 produces for each sensor in $S_i$, the following classes of isolable fault pairs:
$$
\begin{align*}
F_I(s_1) &= \{(f_1, f_{s_1}), (f_2, f_{s_1}), (f_3, f_{s_1})\} \\
F_I(s_2) &= \{(f_1, f_3), (f_2, f_3), (f_1, f_{s_1}), (f_2, f_{s_1}), (f_3, f_{s_1})\} \\
F_I(s_3) &= \{(f_1, f_2), (f_1, f_3), (f_2, f_3), (f_1, f_{s_1}), (f_3, f_{s_1})\} \\
F_I(s_4) &= \{(f_1, f_2), (f_1, f_3), (f_2, f_3), (f_2, f_{s_1}), (f_3, f_{s_1})\} \\
F_I(s_5) &= \{(f_1, f_3), (f_2, f_3), (f_3, f_{s_1})\}
\end{align*}
$$

The minimal covers of $I_i$ are
$$
\Gamma = \left\{
\begin{array}{@{}l@{}}
\{F_I(s_1), F_I(s_4)\}, \{F_I(s_2), F_I(s_4)\}, \{F_I(s_1), F_I(s_3)\}, \\
\quad \{F_I(s_2), F_I(s_3)\}, \{F_I(s_3), F_I(s_4)\}
\end{array}
\right\}.
$$

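These isolability covers can be checked by the same brute force (our own sketch; the $F_I$ classes and $I_i$ are those listed above):

```python
from itertools import combinations

# Isolable-pair classes F_I(s) for S' = {s1} already installed.
F_I = {
    "s1": {("f1", "fs1"), ("f2", "fs1"), ("f3", "fs1")},
    "s2": {("f1", "f3"), ("f2", "f3"),
           ("f1", "fs1"), ("f2", "fs1"), ("f3", "fs1")},
    "s3": {("f1", "f2"), ("f1", "f3"), ("f2", "f3"),
           ("f1", "fs1"), ("f3", "fs1")},
    "s4": {("f1", "f2"), ("f1", "f3"), ("f2", "f3"),
           ("f2", "fs1"), ("f3", "fs1")},
    "s5": {("f1", "f3"), ("f2", "f3"), ("f3", "fs1")},
}
I_i = {("f1", "f2"), ("f1", "f3"), ("f2", "f3"),
       ("f1", "fs1"), ("f2", "fs1"), ("f3", "fs1")}

covers = []
for r in range(1, len(F_I) + 1):
    for combo in map(frozenset, combinations(sorted(F_I), r)):
        if set().union(*(F_I[s] for s in combo)) >= I_i \
                and not any(c < combo for c in covers):
            covers.append(combo)

print(sorted(sorted(c) for c in covers))
# -> [['s1', 's3'], ['s1', 's4'], ['s2', 's3'], ['s2', 's4'], ['s3', 's4']]
```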
These covers, together with the already installed sensors in $S'$, characterize the following partial solution:

$$
S = \left\{
\begin{aligned}
& \{s_1, s_1, s_4\}, \{s_1, s_2, s_4\}, \{s_1, s_1, s_3\}, \\
& \quad \{s_1, s_2, s_3\}, \{s_1, s_3, s_4\}
\end{aligned}
\right\}.
$$

The same procedure is repeated for $S' = \{s_2\}$ and $S' = \{s_3, s_4\}$. The new computed minimal sensor sets are added to the solution, which finally is
$$
S = \left\{
\begin{array}{@{}l@{}}
\{s_1, s_1, s_4\}, \{s_1, s_2, s_4\}, \{s_1, s_1, s_3\}, \\
\quad \{s_1, s_2, s_3\}, \{s_1, s_3, s_4\}, \{s_2, s_2, s_4\}, \\
\quad \{s_2, s_2, s_3\}, \{s_2, s_3, s_4\}, \{s_3, s_3, s_4, s_4\}
\end{array}
\right\}.
$$

Any of these sensor configurations is suitable to achieve the specified fault detectability $D$ and isolability $I$, as well as detectability and isolability of all sensor faults, according to the chosen sensor configuration.

## 7. CONCLUSIONS

A novel approach to the sensor placement problem has been presented in this paper. The computed solution characterises all possible sensor configurations that meet the required diagnosis specifications. Typically, maximum fault detectability and isolability are the most relevant specifications. However, since the required specifications are explicitly expressed, Algorithm 4 can handle any other detectability and isolability specifications. Furthermore, faults in the extra sensors as well as repeated sensors are also taken into consideration.

A key step to characterise all possible solutions is to formalize the problem as a minimal set cover problem where all the minimal covers need to be found. However, it should be noted that solving the minimal set cover problem may entail computational problems for a large number of classes.

The method presented in this paper has some similarities with the method presented in Krysander and Frisk (2008). Both methods first solve the detectability problem and then, based on the obtained results, solve the isolability problem in order to finally compute all possible solutions. The main difference is the classes used to compute the solution. Here a class of faults induced by each sensor is used, whereas in Krysander and Frisk (2008) a class of sensors induced by each fault is required. This entails that the procedure to compute the solution for the detectability problem (and thereby also the isolability problem) is different in the two works. However, the classes used in one paper can be easily derived from the classes used in the other, which means that, in terms of computational complexity, both approaches can be implemented with equivalent efficiency.

Finally, the main contribution of this paper is Theorem 5, which makes it possible to verify that what can be achieved in terms of detectability and isolability with a set of sensors is determined by the union of what can be achieved by each of them. This result can be useful in more sophisticated search algorithms devoted to the sensor placement problem for fault detection and isolation.

## REFERENCES

Bagajewicz, M. (2000). *Design and Upgrade of Process Plant Instrumentation*. Technomic Publishers, Lancaster, PA.
Blanke, M., Kinnaert, M., Lunze, J., and Staroswiecki, M. (2006). *Diagnosis and Fault-Tolerant Control*. Springer, 2nd edition.
Commault, C., Dion, J.M., and Agha, S.Y. (2008). Structural analysis for the sensor location problem in fault detection and isolation. *Automatica*, **44**(8), 2074–2080.
De Kleer, J. and Williams, B.C. (1987). Diagnosing multiple faults. *Artificial Intelligence*, **32**(1), 97–130.
Dulmage, A.L. and Mendelsohn, N.S. (1958). Coverings of bipartite graphs. *Canad. J. Math.*, **10**, 527–534.

Krysander, M., Åslund, J., and Nyberg, M. (2008). An efficient algorithm for finding minimal over-constrained sub-systems for model-based diagnosis. *IEEE Transactions on Systems, Man, and Cybernetics-Part A*, **38**(1).
Krysander, M. and Frisk, E. (2008). Sensor placement for fault diagnosis. *IEEE Transactions on Systems, Man, and Cybernetics-Part A*, **38**(6), 1398–1410.
Lovász, L. and Plummer, M. (1986). *Matching Theory*. North-Holland.
Murota, K. (2000). *Matrices and Matroids for Systems Analysis*. Springer-Verlag.
Rosich, A., Sarrate, R., and Nejjari, F. (2009). Optimal sensor placement for FDI using binary integer linear programming. In *20th International Workshop on Principles of Diagnosis (DX-09)*. Stockholm, Sweden.

Travé-Massuyès, L., Escobet, T., and Olive, X. (2006). Diagnosability analysis based on component supported analytical redundancy relations. *IEEE Transactions on Systems, Man, and Cybernetics-Part A*, **36**(6), 1146–1160.

samples/texts_merged/203609.md ADDED

samples/texts_merged/2126836.md ADDED

samples/texts_merged/2177428.md ADDED

|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Information Spreading on Almost Torus Networks
Antonia Maria Masucci, Alonso Silva
► To cite this version:
Antonia Maria Masucci, Alonso Silva. Information Spreading on Almost Torus Networks. 52nd IEEE Conference on Decision and Control (CDC), Dec 2013, Florence, Italy. hal-00922262
HAL Id: hal-00922262
https://hal.inria.fr/hal-00922262
Submitted on 25 Dec 2013
**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Antonia Maria Masucci *¹ and Alonso Silva †²
¹ETIS/ENSEA - Université de Cergy Pontoise - CNRS, 6, avenue du Ponceau, 95014 Cergy-Pontoise, France
²Alcatel-Lucent Bell Labs France, Centre de Villarceaux, Route de Villejust, 91620 Nozay, France
Abstract

Epidemic modeling has been extensively used in recent years in the field of telecommunications and computer networks. We consider the popular Susceptible-Infected-Susceptible spreading model as the metric for information spreading. In this work, we analyze information spreading on a particular class of networks, denoted almost torus networks, and over the lattice, which can be considered as the limit when the torus length goes to infinity. Almost torus networks consist of a torus network topology from which some nodes or edges have been removed. We find explicit expressions for the characteristic polynomial of these graphs and tight lower bounds for its computation. These expressions allow us to estimate their spectral radius and thus how the information spreads on these networks.

## 1 Introduction

There exists an abundant literature on epidemic modeling and in particular on epidemics on networks (see, e.g., the books [1, 2, 3] and references therein). In the last decades, epidemic modeling has been extensively used in the field of telecommunications and computer networks. For example, epidemic models have been applied to analyze the spread of computer viruses and worms [4], and to epidemic routing for delay-tolerant networks [5]. The structure of a network plays a critical role in the spread of a viral message, and the authors of [6, 7, 8] have identified some conditions for successful performance of viral marketing. In particular, the authors of [7] have shown that the spectral radius of the graph determines the epidemic lifetime and its coverage. More recently, in [9], the authors have studied the intertwined propagation of two competing “memes” (or viruses, rumors, etc.) in a composite network (individual agents are represented across two planes, e.g., Facebook and Twitter). The authors have shown that the meme persistence $\delta$ and the meme strength $\beta$ on each plane, together with the spectral radius of the graph, completely determine which of the two competing “memes” prevails.

The topologies of focus in this work, torus network topologies, are commonly used in the production of high-end computing systems [10]. A number of supercomputers on the TOP500 list use three-dimensional torus networks [11]. For instance, IBM's Blue Gene/L [12, 13] and Blue Gene/P [14], and Cray's XT and XT3 [15] systems use three-dimensional torus networks for node communication. IBM Blue Gene/Q uses a five-dimensional torus network [16]. Fujitsu's K computer and the PRIMEHPC FX10 use a proprietary six-dimensional torus interconnect called Tofu [17]. Torus networks are used because of a combination of their linear per-node cost scaling and their competitive overall performance.
*Email: antonia-maria.masucci@ensea.fr
†Email: alonso.silva@alcatel-lucent.com To whom correspondence should be addressed.
In this work, we analyze information spreading on a particular class of networks denoted almost torus networks, where we assume the popular Susceptible-Infected-Susceptible model as the model of information spreading. Almost torus networks consist of a torus network topology from which some nodes or edges have been removed. This situation can model the failure of some computer nodes or of connections between computer nodes. As we will see, in those graphs, the spectral radius is the determining quantity for analyzing the information spreading. We find explicit expressions for the characteristic polynomial of these graphs and very tight lower bounds for its computation. These expressions allow us to estimate their spectral radius and thus how the information spreads on these networks.
The outline of the work is as follows. In Section 2, we recall some preliminary notions of graph theory. In Section 3, we present the Susceptible-Infected-Susceptible model of information spreading. Then, in Section 4, we analyze the information spreading in the almost torus network where one node has been removed and in Section 5 we extend our results to the cases when a set of nodes has been removed and when an edge has been removed from the torus network. In Section 6, we provide lower bounds for the two-dimensional torus network. In Section 7, we present numerical results that validate our analysis. Finally, in Section 8 we conclude.
## 2 Preliminaries
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ denote an undirected graph with no self-loops. We denote by $\mathcal{V} = \mathcal{V}(\mathcal{G}) = \{v_1, \dots, v_n\}$ the set of nodes and by $\mathcal{E} = \mathcal{E}(\mathcal{G}) \subseteq \mathcal{V} \times \mathcal{V}$ the set of undirected edges of $\mathcal{G}$. If $\{v_i, v_j\} \in \mathcal{E}(\mathcal{G})$ we call nodes $v_i$ and $v_j$ *adjacent* (or neighbors), which we denote by $v_i \sim v_j$. We define the set of neighbors of node $v$ as $\mathcal{N}_v = \{w \in \mathcal{V} : \{v, w\} \in \mathcal{E}\}$. The *degree* of a node $v$, denoted by $\deg_v$, corresponds to the number of neighbors of $v$, i.e. the cardinality of the set $\mathcal{N}_v$. We define a *walk* of length $k$ from $v_0$ to $v_k$ to be an ordered sequence of nodes ($v_0, v_1, \dots, v_k$) such that $v_i \sim v_{i+1}$ for $i = 0, 1, \dots, k-1$. If $v_0 = v_k$, then the walk is closed.
Graphs can be algebraically represented via matrices. The *adjacency matrix* of an undirected graph $\mathcal{G}$, denoted by $A = A(\mathcal{G})$, is an $n \times n$ symmetric matrix defined entry-wise as
$$A_{ij} = \begin{cases} 1 & \text{if } v_i \text{ and } v_j \text{ are adjacent,} \\ 0 & \text{otherwise.} \end{cases}$$
We recall the well-known result that for $k \in \mathbb{N}$, $(A^k)_{ij}$ is the number of walks of length $k$ connecting the $i$-th and $j$-th vertices (proof by induction). Since $A^0$ is the identity matrix, we thus accept the existence of walks of length zero. We use $I$ to denote the identity matrix, where its order is determined by the context.
We define the *Laplacian matrix* $L$ for graphs without loops or multiple edges, as follows:
$$L_{ij} = \begin{cases} \deg_{v_i} & \text{if } v_i = v_j, \\ -1 & \text{if } v_i \text{ and } v_j \text{ are adjacent,} \\ 0 & \text{otherwise.} \end{cases}$$
We notice that the Laplacian of a graph can be written as $L = D - A$ where $D$ is a diagonal matrix whose diagonal entries correspond to the degree of each node and $A$ is the adjacency matrix.
The *spectral radius* of a graph $\mathcal{G}$, denoted $\rho(A)$, is the size of the largest eigenvalue (in absolute value) of the adjacency matrix of the graph, i.e. $\rho(A) = \max_i(|\lambda_i|)$. Since $A$ is a symmetric matrix with non-negative entries, all its eigenvalues are real. The characteristic polynomial of $\mathcal{G}$, denoted $\phi(\mathcal{G}, x)$ is defined as $\det(xI - A)$, that corresponds to the characteristic polynomial of the adjacency matrix $A$. The *walk generating function*¹ $W(\mathcal{G}, x)$ is defined to be $(I - xA)^{-1}$. The *ij*-entry of $W(\mathcal{G}, x)$ will be written as $W_{ij}(\mathcal{G}, x)$. If $\mathcal{S}$ is a subset of $\mathcal{V}(\mathcal{G})$ then $\mathcal{G} \setminus \mathcal{S}$ is the subgraph of $\mathcal{G}$ induced by the vertices not in $\mathcal{S}$. We normally write $\mathcal{G} \setminus i$ instead of $\mathcal{G} \setminus \{i\}$ and $\mathcal{G} \setminus ij$ instead of $\mathcal{G} \setminus \{i,j\}$.
¹This can be viewed indifferently as a matrix with rational functions as entries, or as a formal power series in $x$ over the ring of all polynomials in the matrix $A$.
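The quantities defined above can be made concrete with a short numerical sketch. The following is an illustrative example (not from the paper), assuming NumPy is available; it builds the adjacency matrix of a 4-cycle, forms the Laplacian $L = D - A$, and evaluates the spectral radius and the characteristic polynomial $\phi(\mathcal{G}, x) = \det(xI - A)$ at a point.

```python
import numpy as np

# Illustrative graph: a 4-cycle, nodes 0-1-2-3-0 (undirected, no self-loops).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1  # symmetric adjacency matrix

D = np.diag(A.sum(axis=1))             # diagonal degree matrix
L = D - A                              # Laplacian L = D - A
rho = max(abs(np.linalg.eigvalsh(A)))  # spectral radius rho(A) = max_i |lambda_i|

# Characteristic polynomial phi(G, x) = det(xI - A), evaluated at x = 3.
x = 3.0
phi = np.linalg.det(x * np.eye(n) - A)
```

For the 4-cycle the adjacency eigenvalues are $\{2, 0, 0, -2\}$, so the sketch gives $\rho(A) = 2$ and $\phi(\mathcal{G}, 3) = (3-2)(3-0)(3-0)(3+2) = 45$.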
## 3 Model of Information Spreading
We use the popular Susceptible-Infected-Susceptible (SIS) model of viral spreading [18] as the metric for information spreading. We remark that our results can be easily extended to the Susceptible-Infected$_1$-Infected$_2$-Susceptible (SI$_1$I$_2$S) model [9] of spreading over composite networks. We consider that each node can be in two possible states: susceptible (of being infected) or infected. We denote these two states as $\mathcal{S}$ and $\mathcal{I}$, respectively.
The result presented in this section was first obtained in [19] and [20], through mean-field approximations of the Markov chain evolution of the $2^n$ possible states. We believe this alternative proof to be simpler, and we present it here for completeness.
Consider a population of $n$ nodes interconnected via an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$. Time is slotted. In each time slot, infected nodes attempt to contaminate their susceptible neighbors, where each infection attempt is successful with probability $\beta$, independently of other infection attempts. The parameter $\beta$ is called the virus birth rate (or meme strength, as denoted in [9]). Each infected node recovers in time slot $t$ with probability $\delta$. The parameter $\delta$ is called the virus curing rate (or meme persistence). We notice that this parameter captures the meme persistence in an inverse way, i.e., a high $\delta$ means low persistence; a very contagious virus will thus be modeled with a low $\delta$ value.
We define $p_{i,t}$ as the probability that node $i$ is infected at time $t$. We define $\zeta_{i,t+1}$ as the probability that node $i$ will not receive infections from its neighbors in the next time-slot, which is given by
$$\zeta_{i,t+1} = \prod_{j \in \mathcal{N}_i} (1 - \beta p_{j,t}), \quad (1)$$
where $\mathcal{N}_i$ denotes the set of neighbors of node $i$. This expression can be interpreted as the probability that none of the infection attempts is successful.
The probability for a node $i$ of not being infected at time $t+1$ depends on whether node $i$ was infected at time $t$:
1. For the first case, if the node $i$ was infected at time $t$, then the probability for not being infected at time $t+1$ is equal to $\delta\zeta_{i,t+1}$,
2. For the second case, if the node $i$ was not infected at time $t$, then the probability for not being infected at time $t+1$ is equal to $\zeta_{i,t+1}$.
From 1) and 2), the probability for a node $i$ of not being infected at time $t+1$ is equal to
$$1 - p_{i,t+1} = \zeta_{i,t+1}(1 - p_{i,t}) + \delta\zeta_{i,t+1}p_{i,t}.$$
Replacing from eq. (1), we obtain that
$$1-p_{i,t+1} = (1-p_{i,t}) \prod_{j \in \mathcal{N}_i} (1-\beta p_{j,t}) + \delta p_{i,t} \prod_{j \in \mathcal{N}_i} (1-\beta p_{j,t}). \quad (2)$$
We focus on the criterion based on the asymptotic stability of the disease-free equilibrium $p_i^*(t) = 0$ for all $i$. For doing this, we will make use of the following theorem.
Figure 1: Number of infected nodes vs time.
**Theorem 1** (Hirsch and Smale, 1974 [21]). The system given by $\mathbf{p}_{t+1} = g(\mathbf{p}_t)$ is asymptotically stable at an equilibrium point $\mathbf{p}^*$ if the eigenvalues of the Jacobian $J = \nabla g(\mathbf{p}^*)$ are less than 1 in absolute value, where
$$J_{k,l} = [\nabla g(\mathbf{p}^*)]_{k,l} = \frac{\partial p_{k,t+1}}{\partial p_{l,t}} \bigg|_{\mathbf{p}_t=\mathbf{p}^*}.$$
Figure 2: A two-dimensional grid of length 4.
Figure 3: A two-dimensional torus of length 4.
We rewrite eq. (2) as follows:
$$p_{i,t+1} = 1 - (1-p_{i,t}) \prod_{j \in \mathcal{N}_i} (1-\beta p_{j,t}) - \delta p_{i,t} \prod_{j \in \mathcal{N}_i} (1-\beta p_{j,t}).$$

Then

$$J_{k,l} = \begin{cases} 1-\delta & \text{if } k=l, \\ \beta & \text{if } l \text{ is a neighbor of } k, \\ 0 & \text{otherwise,} \end{cases} \quad (3)$$

where we recall that the evaluating point is $\mathbf{p}^* = 0$.

Eq. (3) can be written in a more compact way as $J = (1-\delta)I + \beta A$. Using Theorem 1, we obtain that for asymptotic stability of the disease-free equilibrium we need to impose that the eigenvalues of $(1-\delta)I + \beta A$ are in absolute value smaller than 1, or equivalently,

$$\rho(A) < \frac{\delta}{\beta}. \quad (4)$$

In Fig. 1, we present two scenarios where we consider a total population of 900 nodes and an initial seeding of 20 nodes with the same probability of infection $\beta_1 = \beta_2 = 0.1$ but with different probabilities of recovery, $\delta_1 = 0.2$ for the solid curve and $\delta_2 = 0.6$ for the dashed curve. We consider a graph with spectral radius $\rho(A) = 4$. In the second scenario, as predicted by the analysis and equation (4), the virus or information spreading dies out; however, for the first scenario the virus or information spreading may continue as time increases.
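The threshold in eq. (4) can be checked numerically by iterating the mean-field recursion (2). The sketch below is an illustrative toy setup (a small ring with $\rho(A) = 2$, not the paper's 900-node experiment), assuming NumPy: with $\delta/\beta > \rho(A)$ the infection probabilities decay to zero, while with $\delta/\beta < \rho(A)$ they settle at an endemic level.

```python
import numpy as np

def sis_step(p, A, beta, delta):
    # zeta_i = prod over neighbors j of (1 - beta * p_j), eq. (1);
    # non-neighbors contribute a factor of 1.
    zeta = np.prod(np.where(A > 0, 1.0 - beta * p[None, :], 1.0), axis=1)
    # eq. (2): 1 - p' = (1 - p) * zeta + delta * p * zeta
    return 1.0 - (1.0 - p) * zeta - delta * p * zeta

# Ring of n nodes: 2-regular, so rho(A) = 2.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

beta = 0.1
p0 = np.full(n, 0.1)  # initial infection probability at every node

p_die = p0.copy()
p_live = p0.copy()
for _ in range(500):
    p_die = sis_step(p_die, A, beta, delta=0.6)     # delta/beta = 6 > 2: dies out
    p_live = sis_step(p_live, A, beta, delta=0.05)  # delta/beta = 0.5 < 2: persists
```

Near the disease-free equilibrium the per-step growth factor is $1 - \delta + \beta\rho(A)$, which is $0.6$ in the first run and $1.15$ in the second, matching the two observed behaviors.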
## 4 Information Spreading over the Torus with one removed node
In a regular grid topology, each node in the network is connected with at most two neighbors along each of one or more dimensions (see, e.g., Fig. 2). If the network is one-dimensional and we connect the first and last nodes, the resulting topology consists of a chain of nodes connected in a circular loop, which is known as a ring. In general, when an $n$-dimensional grid network is connected circularly in more than one dimension, the resulting network topology is a torus (see, e.g., Fig. 3). In this work, we consider torus networks with the same number of nodes in every direction. From Fig. 2, by connecting each first node to the last node in each direction we obtain Fig. 3. For example, if we connect node 1 with node 4 in the horizontal direction and node 1 with node 13 in the vertical direction, we obtain the neighbors of node 1 on the torus (nodes 2, 4, 5, 13) as shown in Fig. 3.
From the previous section, we obtained that the spectral radius is an important quantity to study if our interest is the spreading of information (or virus spreading) through the network. The following lemma gives us a relationship between the spectral radius and the degrees of the nodes.
**Lemma 1.** [22] Let $\deg_{\min}$ denote the minimum degree of $\mathcal{G}$, let $\overline{\deg}$ be the average degree of $\mathcal{G}$, and let $\deg_{\max}$ be the maximum degree of $\mathcal{G}$. For every graph $\mathcal{G}$,

$$ \max\{\overline{\deg}, \sqrt{\deg_{\max}}\} \le \rho(A) \le \deg_{\max}. $$
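The bounds in Lemma 1 can be checked on a small example. The sketch below uses a star graph $K_{1,4}$ (an illustrative choice, not from the paper; $\rho(K_{1,n}) = \sqrt{n}$), assuming NumPy, where the lower bound $\sqrt{\deg_{\max}}$ is attained exactly.

```python
import numpy as np

# Star graph K_{1,4}: center 0 joined to leaves 1..4.
n = 5
A = np.zeros((n, n))
for leaf in range(1, n):
    A[0, leaf] = A[leaf, 0] = 1

deg = A.sum(axis=1)                       # degrees: [4, 1, 1, 1, 1]
rho = max(abs(np.linalg.eigvalsh(A)))     # spectral radius, sqrt(4) = 2 here

lower = max(deg.mean(), np.sqrt(deg.max()))  # max{avg degree, sqrt(max degree)}
upper = deg.max()                            # max degree
```

Here $\overline{\deg} = 8/5 = 1.6$ and $\sqrt{\deg_{\max}} = 2$, so the lower bound is $2 = \rho(A)$, while the upper bound $\deg_{\max} = 4$ is loose.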
We observe that for a *k*-regular graph, its average degree and its maximum degree are equal to *k*, and thus *k* corresponds to its spectral radius. This means that for a *d*-dimensional torus, $2d$ corresponds to its spectral radius.
The previous lemma seems to suggest that there is no interest in studying the spectral radius over the torus, since it is a well-known quantity. However, if the torus network is modified (removing some of the nodes or edges of the graph) then the propagation of information will change (see Fig. 4 and Fig. 5). In the following analysis, we will give an explicit closed-form expression of these changes on the spreading.
First of all, we recall a well-known result in linear algebra. Cramer's rule [23] states that a system of $n$ linear equations with $n$ unknowns, represented in matrix multiplication form $Mx = y$, where the $n \times n$ matrix $M$ has a nonzero determinant and the vector $x = (x_1, \dots, x_n)^T$ is the column vector of the unknown variables, has a unique solution, whose individual values for the unknowns are given by

$$ x_i = \frac{\det(M_i)}{\det(M)}, \quad \forall i \in \{1, \dots, n\}, $$

where $M_i$ is the matrix formed by replacing the $i$-th column of $M$ by the column vector $y$.
We use Cramer's rule in the next lemma, in order to establish the connection between the characteristic polynomials and walk generating functions. This connection will allow us to compute the spectral radius of the modified torus when we remove a node.
**Lemma 2.** [24] For any graph $\mathcal{G}$ we have
$$ x^{-1}W_{ii}(\mathcal{G}, x^{-1}) = \phi(\mathcal{G} \setminus i, x)/\phi(\mathcal{G}, x). $$
*Proof.* We have that $W(\mathcal{G}, x) = (I - xA)^{-1} = x^{-1}(x^{-1}I - A)^{-1}$. The entries of $(x^{-1}I - A)^{-1}$ are given by Cramer's rule (by noting that $(x^{-1}I - A)^{-1}$ corresponds to the matrix $M$ such that $(x^{-1}I - A)M = I$). The $i$-th diagonal entry of $(x^{-1}I - A)^{-1}$ is given by the $i$-th principal diagonal minor of $(x^{-1}I - A)$ divided by $\det(x^{-1}I - A) = \phi(\mathcal{G}, x^{-1})$. We note that the $i$-th principal diagonal minor of $(x^{-1}I - A)$ is $\phi(\mathcal{G} \setminus i, x^{-1})$, and so the lemma follows immediately from this. □
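Since $x^{-1}W_{ii}(\mathcal{G}, x^{-1}) = [(xI - A)^{-1}]_{ii}$, Lemma 2 can be verified numerically at any point $x$ outside the spectrum. The following sketch does so on a random undirected graph (an illustrative example, assuming NumPy), comparing the $i$-th diagonal entry of the resolvent against the ratio of characteristic polynomials.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random undirected graph on 6 nodes (illustrative).
n = 6
A = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)
A = A + A.T

x = 1.0 + max(abs(np.linalg.eigvalsh(A)))  # evaluate outside the spectrum
i = 2

# Left-hand side: x^{-1} W_ii(G, x^{-1}) = [(xI - A)^{-1}]_ii.
lhs = np.linalg.inv(x * np.eye(n) - A)[i, i]

# Right-hand side: phi(G \ i, x) / phi(G, x), with G \ i the induced subgraph.
minor = np.delete(np.delete(A, i, axis=0), i, axis=1)
rhs = np.linalg.det(x * np.eye(n - 1) - minor) / np.linalg.det(x * np.eye(n) - A)
```

The agreement is exact up to floating-point error, since both sides are the same cofactor ratio.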
We are interested in finding the spectral radius of the almost regular torus, in order to analyze the information spreading on this topology. In the case of the removal of one node, this is equivalent to finding $\phi(\mathcal{G} \setminus i, x)$. From Lemma 2, we need to know the characteristic polynomial of the regular torus $\phi(\mathcal{G}, x)$ and the diagonal entries of the walk generating function of the regular torus $W_{ii}(\mathcal{G}, x^{-1})$, since
$$ \phi(\mathcal{G} \setminus i, x) = x^{-1} W_{ii}(\mathcal{G}, x^{-1}) \phi(\mathcal{G}, x). \quad (5) $$
In the next proposition, we give an explicit expression for the characteristic polynomial of the two-dimensional torus network of length $m$.
**Proposition 1.** The characteristic polynomial of the two-dimensional torus network, denoted $T_m^2$, is given by
$$ \phi(T_m^2, x) = \prod_{1 \le i,j \le m} (x - 2 \cos(2\pi i/m) - 2 \cos(2\pi j/m)). $$
*Proof.* Consider $R_m$ the ring graph which has edge set $\{(u, u+1) : 1 \le u < m\} \cup \{(m, 1)\}$. The Laplacian of $R_m$ has eigenvectors [25]:
$$ x_k(u) = \sin(2\pi ku/m) \text{ and } y_k(u) = \cos(2\pi ku/m). \tag{6} $$
for $k \le m/2$. Both of these eigenvectors have eigenvalue $\lambda_k = 2 - 2 \cos(2\pi k/m)$. We notice that $x_0$ should be ignored and $y_0$ is the all-1s vector. If $m$ is even, then also $x_{m/2}$ should be ignored.

Figure 4: Spectral radius reduction by removing one node vs torus length.

Figure 5: Spectral radius reduction by removing two nodes from a torus network of length 7. One of the removed nodes is the central node. We remove another node, compute the spectral radius reduction, and assign the value of this reduction to the removed node's position.
We recall that the product of two graphs $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ and $\mathcal{H} = (\mathcal{W}, \mathcal{F})$, denoted by $\mathcal{G} \times \mathcal{H}$, corresponds to the graph with vertex set $\mathcal{V} \times \mathcal{W}$ and edge set consisting of the pairs $((v_1, w_1), (v_2, w_2))$ such that either $v_1 = v_2$ and $(w_1, w_2) \in \mathcal{F}$, or $(v_1, v_2) \in \mathcal{E}$ and $w_1 = w_2$. If $\mathcal{G}$ has Laplacian eigenvalues $\lambda_1, \dots, \lambda_m$ and eigenvectors $p_1, \dots, p_m$, and $\mathcal{H}$ has Laplacian eigenvalues $\mu_1, \dots, \mu_m$ and eigenvectors $q_1, \dots, q_m$, then for each $1 \le i \le m$ and $1 \le j \le m$, $\mathcal{G} \times \mathcal{H}$ has an eigenvector $z$ of eigenvalue $\lambda_i + \mu_j$ given by $z(v, w) = p_i(v)q_j(w)$.
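The eigenvalue-sum property of the product can be checked with a Kronecker-product construction: the Laplacian of the product of two graphs is $L \otimes I + I \otimes L$. A minimal sketch, assuming NumPy, for the product of two rings:

```python
import numpy as np

m = 5
# Ring R_m: adjacency and Laplacian.
A = np.zeros((m, m))
for u in range(m):
    A[u, (u + 1) % m] = A[(u + 1) % m, u] = 1
L = np.diag(A.sum(axis=1)) - A

# Laplacian of the product graph (the two-dimensional torus): Kronecker sum.
I = np.eye(m)
L2 = np.kron(L, I) + np.kron(I, L)

lam = np.linalg.eigvalsh(L)  # ring Laplacian eigenvalues 2 - 2cos(2*pi*k/m)
expected = np.sort([li + lj for li in lam for lj in lam])
got = np.sort(np.linalg.eigvalsh(L2))
```

The spectrum of `L2` is exactly the set of pairwise sums $\lambda_i + \mu_j$, as stated above.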
In our case, the two-dimensional torus can be written as the product of two rings, $G = \mathcal{H} = R_m$, and then $\lambda_i = \mu_i = 2 - 2\cos(2\pi i/m) \quad \forall 1 \le i \le m$. Thus the eigenvalues of the Laplacian of the two-dimensional torus network of length $m$ are given by
$$\lambda_i+\mu_j = 4-2\cos(2\pi i/m)-2\cos(2\pi j/m) \quad \forall 1 \le i,j \le m.$$
We recall that the eigenvalues of a matrix $M$ are the solutions $\lambda$ to the equation $\det(M - \lambda I) = 0$. For a $k$-regular graph of $n$ nodes we have
$$
\begin{align*}
\det(L - \lambda I) &= \det(D - A - \lambda I) \\
&= \det(-[A - (k - \lambda)I]) = (-1)^n \det(A - (k - \lambda)I),
\end{align*}
$$
which means that $\lambda$ is an eigenvalue of $L$ if and only if $k - \lambda$ is an eigenvalue of $A$.
In our case, the two-dimensional torus network is a 4-regular graph, and thus the eigenvalues of the two-dimensional torus network of length $m$ are given by $2\cos(2\pi i/m) + 2\cos(2\pi j/m)$ for all $1 \le i, j \le m$. Thus we conclude that the characteristic polynomial is equal to
$$\phi(\mathcal{T}_m^2, x) = \prod_{1 \le i,j \le m} (x - 2 \cos(2\pi i/m) - 2 \cos(2\pi j/m)).$$
We notice from the proof that the eigenvectors of the Laplacian coincide with the eigenvectors of the adjacency matrix and are given by the Kronecker product of the eigenvectors given by (6). $\square$
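Proposition 1 can be verified numerically: the torus adjacency matrix is the Kronecker sum of two ring adjacency matrices, and its spectrum should match $2\cos(2\pi i/m) + 2\cos(2\pi j/m)$. A short illustrative check, assuming NumPy:

```python
import numpy as np

m = 6
# Ring R_m adjacency matrix.
A1 = np.zeros((m, m))
for u in range(m):
    A1[u, (u + 1) % m] = A1[(u + 1) % m, u] = 1

# Adjacency matrix of the two-dimensional torus T_m^2 (Kronecker sum).
I = np.eye(m)
A = np.kron(A1, I) + np.kron(I, A1)

got = np.sort(np.linalg.eigvalsh(A))
expected = np.sort([2 * np.cos(2 * np.pi * i / m) + 2 * np.cos(2 * np.pi * j / m)
                    for i in range(1, m + 1) for j in range(1, m + 1)])
```

The largest eigenvalue, attained at $i = j = m$, is $4$, consistent with the torus being 4-regular.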
In the following proposition, we find an explicit expression for the diagonal entries of the walk generating function.
**Proposition 2.** The diagonal entries of the walk generating function of the two-dimensional torus of length m are:

$$
W_{ii}(\mathcal{T}_m^2, x) = \sum_{\ell \ge 0} \frac{x^{\ell}}{m^2} \sum_{1 \le i,j \le m} 4^{\ell} \left( \cos\left(\frac{\pi(i+j)}{m}\right) \right)^{\ell} \left( \cos\left(\frac{\pi(i-j)}{m}\right) \right)^{\ell}. \tag{7}
$$
*Proof.* We recall from Section 2 that $A_{ii}^{\ell}$ counts the closed walks of length $\ell$ at node $i$. Since the torus network is vertex-transitive (each node is indistinguishable), every diagonal entry of $A^{\ell}$ is the same, so $\operatorname{tr}(A^{\ell}) = n \cdot A_{ii}^{\ell}$, where $n = m^2$ is the total number of nodes. But we also have that
$$
\begin{align*}
\operatorname{tr}(A^{\ell}) &= \sum_{k=1}^{n} (\lambda_k(A))^{\ell} \\
&= \sum_{1 \le i,j \le m} (2 \cos(2\pi i/m) + 2 \cos(2\pi j/m))^{\ell} \\
&= \sum_{1 \le i,j \le m} (4 \cos(\pi(i+j)/m) \cos(\pi(i-j)/m))^{\ell}.
\end{align*}
$$
Thus
$$
A_{ii}^{\ell} = \frac{1}{m^2} \sum_{1 \le i,j \le m} (4 \cos(\pi(i+j)/m) \cos(\pi(i-j)/m))^{\ell}. \quad (8)
$$
Let us notice that

$$
W_{ii}(\mathcal{G}, x) = [(I - xA)^{-1}]_{ii} = \left[ \sum_{n \ge 0} (xA)^n \right]_{ii} = \sum_{\ell \ge 0} x^{\ell} A_{ii}^{\ell}. \quad (9)
$$
From eq. (8) and eq. (9) we conclude eq. (7). $\square$
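Eq. (8) can be checked directly: the diagonal entries of $A^{\ell}$ on the torus should equal the normalized spectral sum. A short illustrative check, assuming NumPy (for $m = 5$ and $\ell = 4$, no walk can wrap around the torus, so the count also equals the well-known $\binom{4}{2}^2 = 36$ closed walks of length 4 on the infinite lattice $\mathbb{Z}^2$):

```python
import numpy as np

m = 5
A1 = np.zeros((m, m))
for u in range(m):
    A1[u, (u + 1) % m] = A1[(u + 1) % m, u] = 1
A = np.kron(A1, np.eye(m)) + np.kron(np.eye(m), A1)  # torus T_m^2

ell = 4
Al = np.linalg.matrix_power(A, ell)
direct = Al[0, 0]  # closed walks of length ell at a node (same for all nodes)

# Right-hand side of eq. (8): spectral sum over torus eigenvalues.
spectral = sum((2 * np.cos(2 * np.pi * i / m) + 2 * np.cos(2 * np.pi * j / m)) ** ell
               for i in range(1, m + 1) for j in range(1, m + 1)) / m**2
```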
From Proposition 1 and Proposition 2, we obtain the following theorem.

**Theorem 2.** *The characteristic polynomial of the two-dimensional torus network of length $m$ where one node has been removed is given by*

$$
\phi(\mathcal{T}_m^2 \setminus i, x) = x^{-1} W_{ii}(\mathcal{T}_m^2, x^{-1}) \phi(\mathcal{T}_m^2, x),
$$

*where $W_{ii}(\mathcal{T}_m^2, x^{-1})$ is given by eq. (7) evaluated at $x^{-1}$ and $\phi(\mathcal{T}_m^2, x)$ is given by Proposition 1.*
We observe that all the previous calculations do not depend on the particular removed node $i$. This means that the spreading of information over the modified torus, if we remove one node, is not affected by the position of the removed node. What matters is that one and only one node is removed. In the next section, we will see that this is very different from the case of the removal of two nodes.
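The position-independence of the one-node removal can be confirmed numerically: removing any single node from the torus yields the same spectral radius, strictly below 4. An illustrative check, assuming NumPy:

```python
import numpy as np

m = 5
A1 = np.zeros((m, m))
for u in range(m):
    A1[u, (u + 1) % m] = A1[(u + 1) % m, u] = 1
A = np.kron(A1, np.eye(m)) + np.kron(np.eye(m), A1)  # torus T_m^2, rho(A) = 4

def rho_minus(A, i):
    """Spectral radius of the graph with node i removed (delete row and column i)."""
    B = np.delete(np.delete(A, i, axis=0), i, axis=1)
    return max(abs(np.linalg.eigvalsh(B)))

rhos = [rho_minus(A, i) for i in range(m * m)]
```

All entries of `rhos` coincide (by vertex-transitivity of the torus) and sit strictly below the unperturbed value 4, which is the spectral radius reduction plotted in Fig. 4.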
The previous results can also be derived for the $d$-dimensional torus network. Since the proofs are similar to the proofs of Proposition 1 and Proposition 2, we do not include them here.
**Theorem 3.** *The characteristic polynomial of the d-dimensional torus network of length m where one node has been removed is given by:*

$$
\phi(\mathcal{T}_m^d \setminus i, x) = x^{-1} W_{ii}(\mathcal{T}_m^d, x^{-1}) \phi(\mathcal{T}_m^d, x),
$$
where the diagonal entries of the walk generating function are

$$
W_{ii}(\mathcal{T}_m^d, x^{-1}) = \sum_{\ell \ge 0} \frac{x^{-\ell}}{m^d} \sum_{1 \le i_1, \dots, i_d \le m} (2 \cos(2\pi i_1 / m) + \dots + 2 \cos(2\pi i_d / m))^{\ell},
$$

and the characteristic polynomial of the $d$-dimensional torus network is given by

$$
\phi(\mathcal{T}_m^d, x) = \prod_{1 \le i_1, \dots, i_d \le m} (x - 2 \cos(2\pi i_1 / m) - \dots - 2 \cos(2\pi i_d / m)).
$$
## 5 Information Spreading over the Torus with a set of removed nodes
Following a similar approach to Lemma 2, Godsil [24] is able to prove the following theorem.
**Theorem 4.** [Node removal] [24] Let $\mathcal{S}$ be a subset of $s$ nodes of the graph $\mathcal{G}$. Then
$$x^{-s}\det W_{\mathcal{S},\mathcal{S}}(\mathcal{G}, x^{-1}) = \phi(\mathcal{G} \setminus \mathcal{S}, x)/\phi(\mathcal{G}, x),$$
where $W_{\mathcal{S},\mathcal{S}}(\mathcal{G}, x)$ denotes the submatrix of $W(\mathcal{G}, x)$ with rows and columns indexed by the elements of $\mathcal{S}$.
We observe that if $\mathcal{S}$ consists of two nodes $i$ and $j$ then
$$\det W_{\mathcal{S},\mathcal{S}}(\mathcal{G}, x) = W_{ii}(\mathcal{G}, x)W_{jj}(\mathcal{G}, x) - W_{ij}(\mathcal{G}, x)W_{ji}(\mathcal{G}, x), \quad (10)$$
and since $\mathcal{G}$ is an undirected graph, $W_{ij}(\mathcal{G}, x) = W_{ji}(\mathcal{G}, x)$.
For the case of the removal of two nodes, from Theorem 4 and eq. (10), we obtain the following corollary.
**Corollary 1.** [24] For any graph $\mathcal{G}$ we have:
$$(x^{-1}W_{ij}(\mathcal{G}, x^{-1}))^2 = \frac{\phi(\mathcal{G} \setminus i, x)\phi(\mathcal{G} \setminus j, x) - \phi(\mathcal{G}, x)\phi(\mathcal{G} \setminus ij, x)}{\phi(\mathcal{G}, x)^2}. \quad (11)$$
The next corollary is extremely important since it guarantees that independently of the number of nodes we remove from the torus graph, we can restrict our study of the characteristic polynomials to the case of the removal of two nodes.
**Corollary 2.** [24] If $\mathcal{C}$ is a subset of $\mathcal{V}(\mathcal{G})$ then $\phi(\mathcal{G}\setminus\mathcal{C}, x)$ is determined by the polynomials $\phi(\mathcal{G}\setminus\mathcal{S}, x)$ where $\mathcal{S}$ ranges over all subsets of $\mathcal{C}$ with at most two vertices.
|
| 416 |
+
|
| 417 |
+
From Corollary 1, we obtain that
|
| 418 |
+
|
| 419 |
+
$$\phi(\mathcal{G} \setminus i, j, x) = \frac{\phi(\mathcal{G} \setminus i, x)\phi(\mathcal{G} \setminus j, x)}{\phi(\mathcal{G}, x)} - \phi(\mathcal{G}, x)(x^{-1}W_{ij}(\mathcal{G}, x^{-1}))^2. \quad (12)$$
|
| 420 |
+
|
| 421 |
+
This implies that the only unknowns to compute the removal of a set of nodes over the torus, in particular for a set of two nodes, are given by $W_{ij}$, which can be represented as the number of walks between node $i$ and node $j$. The following theorem provides us a way to compute the values $W_{ij}$.
|
| 422 |
+
|
| 423 |
+
**Theorem 5.** [26] For any two vertices $i$ and $j$ in the graph $\mathcal{G}$ and any non-negative integer $\ell$, we have that the number of walks of length $\ell$, denoted $W_{ij}(\mathcal{G}, \ell)$, is given by
|
| 424 |
+
|
| 425 |
+
$$W_{ij}(\mathcal{G}, \ell) = \sum_{\theta} u_{\theta}(i)^T \theta^{\ell} u_{\theta}(j),$$
|
| 426 |
+
|
| 427 |
+
where the sum is over all eigenvalues $\theta$ of $\mathcal{G}$ and $u_{\theta}(i)$ denotes the $i$-th row of $U_{\theta}$ where $U_{\theta}$ is the matrix whose columns form an orthonormal basis for the eigenspace belonging to $\theta$.
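
Theorem 5 is the spectral decomposition $A^{\ell}=\sum_{\theta}\theta^{\ell}E_{\theta}$ read entrywise. A small sketch comparing it against direct matrix powers on the ring of length 5 (the node pair and walk length are illustrative choices of ours):

```python
import numpy as np

# Adjacency matrix of the 5-cycle (ring of length m = 5).
m = 5
A = np.zeros((m, m))
for u in range(m):
    A[u, (u + 1) % m] = A[u, (u - 1) % m] = 1

# Orthonormal eigendecomposition; columns of U are eigenvectors.
theta, U = np.linalg.eigh(A)

i, j, ell = 0, 2, 6
# Theorem 5: sum theta^ell over the (orthonormal) eigenvectors,
# weighted by the i-th and j-th eigenvector entries.
walks_spectral = sum(U[i, k] * theta[k] ** ell * U[j, k] for k in range(m))

walks_power = np.linalg.matrix_power(A, ell)[i, j]
assert np.isclose(walks_spectral, walks_power)
print(round(walks_spectral))  # prints 15
```

Summing over individual orthonormal eigenvectors is equivalent to grouping them by eigenspace as in the theorem statement.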

From the previous theorem, we conclude that the walk generating function can be written as

$$W_{ij}(\mathcal{G}, x^{-1}) = \sum_{\ell \ge 0} x^{-\ell} \sum_{\theta} u_{\theta}(i)^T \theta^{\ell} u_{\theta}(j).$$

In order to compute the walk generating function of the two-dimensional torus, we recall that the ring of length $m$ has eigenvectors

$$x_k(u) = \sin(2\pi k u/m) \quad \text{and} \quad y_k(u) = \cos(2\pi k u/m)$$

for $k \le m/2$. Both of these eigenvectors have eigenvalue $\lambda_k = 2\cos(2\pi k/m)$. Here $x_0$ should be ignored and $y_0$ is the all-ones vector. If $m$ is even, then $x_{m/2}$ should also be ignored. We denote the matrix of eigenvectors of the ring of length $m$ by $\mathcal{V}$. Then the two-dimensional torus has matrix of eigenvectors $\mathcal{Z} = \mathcal{V} \otimes \mathcal{V}$, where $\otimes$ denotes the Kronecker product. Since the adjacency matrix is symmetric, the eigenvectors form an orthogonal basis, and we may normalize them to obtain an orthonormal basis, which we call $\hat{\mathcal{Z}}$. The eigenvalues of the two-dimensional torus, collected in the matrix $\Lambda$, are given by

$$\lambda_i + \mu_j = 2 \cos(2\pi i/m) + 2 \cos(2\pi j/m) \quad \forall\, 1 \le i, j \le m.$$

From here we obtain the number of walks over the two-dimensional torus between node $i$ and node $j$.

Following a similar approach to the previous case of the removal of one node, Godsil [24] is able to prove the following theorem, which deals with the removal of one edge of the graph.

**Theorem 6 (Edge removal).** [24] Let $e = \{i, j\}$ be an edge in $\mathcal{G}$. Then

$$\phi(\mathcal{G}, x) = \phi(\mathcal{G} \setminus e, x) - \phi(\mathcal{G} \setminus ij, x) - 2\sqrt{\phi(\mathcal{G} \setminus i, x)\phi(\mathcal{G} \setminus j, x) - \phi(\mathcal{G} \setminus e, x)\phi(\mathcal{G} \setminus ij, x)}. \quad (13)$$
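
Equation (13) can be sanity-checked numerically on a small graph, here the triangle with the edge $e=\{0,1\}$ removed (the vertex labels and the evaluation point $x$ are our own choices):

```python
import numpy as np

def phi(M, x):
    """Characteristic polynomial det(xI - M) evaluated at x."""
    return np.linalg.det(x * np.eye(len(M)) - M)

# Triangle graph on vertices 0, 1, 2, with edge e = {0, 1}.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
Ae = A.copy()
Ae[0, 1] = Ae[1, 0] = 0.  # adjacency matrix of G \ e
x = 5.0

phi_G = phi(A, x)
phi_Ge = phi(Ae, x)
phi_Gi = phi(A[1:, 1:], x)                  # G \ i  (delete vertex 0)
phi_Gj = phi(A[np.ix_([0, 2], [0, 2])], x)  # G \ j  (delete vertex 1)
phi_Gij = phi(A[2:, 2:], x)                 # G \ ij (delete both)

rhs = phi_Ge - phi_Gij - 2 * np.sqrt(phi_Gi * phi_Gj - phi_Ge * phi_Gij)
assert np.isclose(phi_G, rhs)  # eq. (13)
```

Note that deleting an endpoint of $e$ also deletes $e$, so $\phi((\mathcal{G}\setminus e)\setminus i, x) = \phi(\mathcal{G}\setminus i, x)$, which is why only node-removal polynomials appear under the square root.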

We observe that all the terms in eq. (13) except $\phi(\mathcal{G} \setminus e, x)$ are known. This implies that the characteristic polynomial of the two-dimensional torus with one edge removed is completely determined by the previous expressions for the torus with one node removed and two nodes removed.

# 6 Lower bounds for the two-dimensional torus networks

In this section, we compare the previous expressions with the case of the infinite lattice. We notice that this scenario can be seen as the case where the length of the torus goes to infinity.

**Proposition 3.** The diagonal entries of the walk generating function over the lattice $\mathcal{L}^2$ are given by

$$W_{ii}(\mathcal{L}^2, x) = \sum_{\ell \ge 0} x^{2\ell} \binom{2\ell}{\ell}^2. \quad (14)$$

*Proof.* The number of closed walks of length $2\ell$ over the two-dimensional lattice$^2$ $\mathcal{L}^2$ is equal to (see e.g. EIS A002894):

$$\sum_{i=0}^{\ell} \frac{(2\ell)!}{i!\,i!\,(\ell-i)!\,(\ell-i)!} = \binom{2\ell}{\ell}^2. \qquad \square$$
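
The multinomial identity in the proof can be checked directly; a short sketch (the function name is ours) that also reproduces the first terms of EIS A002894:

```python
from math import comb, factorial

def closed_walks_lattice(ell):
    """Closed walks of length 2*ell on Z^2 starting and ending at the origin:
    i steps up, i steps down, ell-i steps right, ell-i steps left."""
    return sum(factorial(2 * ell) // (factorial(i) ** 2 * factorial(ell - i) ** 2)
               for i in range(ell + 1))

# The multinomial sum collapses to the squared central binomial coefficient.
for ell in range(8):
    assert closed_walks_lattice(ell) == comb(2 * ell, ell) ** 2

print([closed_walks_lattice(ell) for ell in range(5)])  # prints [1, 4, 36, 400, 4900]
```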

In the following, we give a lower bound on the walk generating function of the two-dimensional torus with one removed node.

**Proposition 4.** For the two-dimensional torus, we have

$$x^{-1} W_{ii}(\mathcal{T}_m^2, x^{-1}) \geq \frac{1}{x} \frac{4\pi}{e^4} \sum_{\ell \ge 1} \left(\frac{1}{\ell}\right) \left(\frac{4}{x}\right)^{2\ell}.$$

*Proof.* The Stirling approximation formula tells us that

$$\sqrt{2\pi\ell} \left(\frac{\ell}{e}\right)^{\ell} \le \ell! \le e\sqrt{\ell} \left(\frac{\ell}{e}\right)^{\ell}.$$

Applying it to the binomial coefficient, we get the lower bound

$$\binom{2\ell}{\ell} = \frac{(2\ell)!}{\ell!\,\ell!} \geq \frac{\sqrt{2\pi \cdot 2\ell} \left(\frac{2\ell}{e}\right)^{2\ell}}{e\sqrt{\ell} \left(\frac{\ell}{e}\right)^{\ell} \cdot e\sqrt{\ell} \left(\frac{\ell}{e}\right)^{\ell}} = \frac{\sqrt{4\pi}}{e^2} \frac{1}{\sqrt{\ell}}\, 4^{\ell}. \quad (15)$$
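
The bound (15) can be spot-checked numerically; a brief sketch over a range of $\ell$ (the range is arbitrary):

```python
from math import comb, sqrt, pi, e

# Eq. (15): C(2l, l) >= (sqrt(4*pi)/e^2) * 4**l / sqrt(l) for l >= 1.
for ell in range(1, 200):
    bound = sqrt(4 * pi) / e ** 2 * 4 ** ell / sqrt(ell)
    assert comb(2 * ell, ell) >= bound
```

Since $\binom{2\ell}{\ell}\sim 4^{\ell}/\sqrt{\pi\ell}$ and $\sqrt{4\pi}/e^2 \approx 0.48 < 1/\sqrt{\pi} \approx 0.56$, the bound stays strictly below the true value for all $\ell \ge 1$.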

We know that the number of walks over the torus network is greater than the number of walks over the lattice network, since every walk over the lattice network can be mapped into a walk over the torus network. We notice that the converse is not true: for instance, a walk of $m$ steps taken in the same direction is a closed walk over the torus of length $m$ but not a closed walk over the lattice. Therefore, from eq. (14) and eq. (15), we have that

$$x^{-1} W_{ii}(\mathcal{T}_m^2, x^{-1}) \ge x^{-1} W_{ii}(\mathcal{L}^2, x^{-1}) = x^{-1} \sum_{\ell \ge 0} x^{-2\ell} \binom{2\ell}{\ell}^2 \ge \frac{1}{x} \frac{4\pi}{e^4} \sum_{\ell \ge 1} \left(\frac{1}{\ell}\right) \left(\frac{4}{x}\right)^{2\ell}.$$

Proposition 4 also allows us to obtain a lower bound for the characteristic polynomial of the two-dimensional torus with one removed node. The lower bound is given by

$$\phi(\mathcal{T}_m^2 \setminus i, x) \geq \phi(\mathcal{T}_m^2, x)\, \frac{1}{x} \frac{4\pi}{e^4} \sum_{\ell \ge 1} \left( \frac{1}{\ell} \right) \left( \frac{4}{x} \right)^{2\ell},$$

where

$$\phi(\mathcal{T}_m^2, x) = \prod_{1 \le i,j \le m} \left(x - 2 \cos(2\pi i/m) - 2 \cos(2\pi j/m)\right).$$
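
The product formula for $\phi(\mathcal{T}_m^2, x)$ rests on the torus spectrum being the pairwise sums of ring eigenvalues; a short NumPy check against the explicit torus adjacency matrix (with $m=4$, our choice):

```python
import numpy as np

m = 4
# Adjacency matrix of the ring C_m.
A1 = np.zeros((m, m))
for u in range(m):
    A1[u, (u + 1) % m] = A1[u, (u - 1) % m] = 1

# The torus T_m^2 is the Cartesian product C_m x C_m, whose adjacency
# matrix is the Kronecker sum of the two ring adjacency matrices.
I = np.eye(m)
A2 = np.kron(A1, I) + np.kron(I, A1)

# Spectrum predicted by the product formula: 2cos(2*pi*i/m) + 2cos(2*pi*j/m).
predicted = sorted(2 * np.cos(2 * np.pi * i / m) + 2 * np.cos(2 * np.pi * j / m)
                   for i in range(1, m + 1) for j in range(1, m + 1))
actual = sorted(np.linalg.eigvalsh(A2))
assert np.allclose(predicted, actual)
```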

$^2$ We notice that considering only even lengths is not restrictive, since every closed walk over the lattice network must have even length.

Figure 6: Number of closed walks vs closed walk length.

The case of the number of walks between any two nodes over the two-dimensional torus of length $m$ is more involved. We define the following function, which allows us to map each node of the two-dimensional torus of length $m$ to a node of the lattice $\mathbb{Z} \times \mathbb{Z}$. The mapping function $h: \{1, \dots, m^2\} \to \mathbb{Z} \times \mathbb{Z}$ is defined by

$$h(u) = (x_u, y_u), \quad \text{where } x_u \equiv u - 1 \pmod{m} \text{ and } y_u = \left\lfloor \frac{u-1}{m} \right\rfloor.$$
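
The mapping $h$ can be sketched in a few lines; here we assume a row-major labelling with $y_u = \lfloor (u-1)/m \rfloor$, under which $h$ is a bijection onto $\{0,\dots,m-1\}^2$:

```python
def h(u, m):
    """Map node u in {1, ..., m*m} of the torus to lattice coordinates.

    We take y_u = (u - 1) // m (an assumption about the intended
    row-major labelling) so that h is a bijection onto {0..m-1}^2.
    """
    return ((u - 1) % m, (u - 1) // m)

m = 5
coords = {h(u, m) for u in range(1, m * m + 1)}
assert len(coords) == m * m  # h hits every point of the m x m grid exactly once
```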

**Proposition 5.** *The number of walks of length $\ell$ over the two-dimensional lattice from the origin $(0,0)$ to $(a,b)$ (with $\ell + a + b$ even) is equal to*

$$\sum_{i=b}^{b+(\ell-b)/2} \frac{\ell!}{i!\,(i-b)!\left(\frac{\ell+a+b}{2}-i\right)!\left(\frac{\ell+b-a}{2}-i\right)!}. \quad (16)$$

*Proof.* Let $n_U$ be the number of steps going up, $n_D$ the number of steps going down, $n_L$ the number of steps going left, and $n_R$ the number of steps going right. The number of walks of length $\ell$ from $(0,0)$ to $(a,b)$ is equal to the number of ways to choose $\ell$ steps among up, down, left, and right such that $n_U = n_D + b$, $n_R = n_L + a$, and $n_U + n_D + n_L + n_R = \ell$. If we fix $n_U = i$, then $n_D = i - b$, $n_L = \frac{\ell+b-a}{2} - i$, $n_R = \frac{\ell+a+b}{2} - i$, and the number of walks of length $\ell$ with $i$ steps going up is equal to $\frac{\ell!}{i!\,(i-b)!\left(\frac{\ell+a+b}{2}-i\right)!\left(\frac{\ell+b-a}{2}-i\right)!}$. Summing over all possible values of $n_U$, the number of walks is equal to eq. (16). □
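
Equation (16) can be validated against a brute-force dynamic-programming count of lattice walks; a sketch with helper names of our own:

```python
from math import factorial

def walks_formula(ell, a, b):
    """Number of length-ell walks on Z^2 from (0,0) to (a,b), as in eq. (16)."""
    if (ell + a + b) % 2 != 0:
        return 0
    total = 0
    for i in range(ell + 1):          # i = number of "up" steps n_U
        nD = i - b                     # n_D = i - b
        nR = (ell + a + b) // 2 - i    # n_R
        nL = (ell + b - a) // 2 - i    # n_L
        if min(nD, nR, nL) >= 0:
            total += factorial(ell) // (
                factorial(i) * factorial(nD) * factorial(nR) * factorial(nL))
    return total

def walks_dp(ell, a, b):
    """Same count by stepping a distribution over reachable lattice points."""
    cur = {(0, 0): 1}
    for _ in range(ell):
        nxt = {}
        for (x, y), c in cur.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt[(x + dx, y + dy)] = nxt.get((x + dx, y + dy), 0) + c
        cur = nxt
    return cur.get((a, b), 0)

for ell, a, b in [(4, 0, 0), (5, 1, 2), (6, 2, 0), (7, 3, 2)]:
    assert walks_formula(ell, a, b) == walks_dp(ell, a, b)
```

The closed-walk case $a=b=0$ recovers Proposition 3: for example, both functions give 36 walks of length 4.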

From Proposition 5 and the mapping function $h$, we obtain a lower bound on the number of walks of length $\ell$ on the two-dimensional torus $\mathcal{T}_m^2$ between two different nodes $i$ and $j$ by mapping $i$ and $j$ to $h(i) = (x_i, y_i)$ and $h(j) = (x_j, y_j)$ and computing the number of walks from $(0,0)$ to $(x_j - x_i, y_j - y_i)$. We observe that this lower bound is tight only when $\ell + a + b$ is even.

# 7 Numerical Simulations

In Fig. 6, we compute the number of closed walks over the torus of length $m=5$ and over the lattice as a function of the length $\ell$ of the closed walks. We notice that the approximation of the number of closed walks over the torus is more precise for small values of $\ell$ than for large values of $\ell$. This fact is compensated since, in the characteristic polynomial, the important terms are the terms of smaller order. In Fig. 7, we plot the percent error of this approximation vs the length of the closed walks $\ell$.

In Fig. 8, we compute the number of closed walks over the torus and over the lattice by varying the torus length $m$ while keeping constant the length of the closed walks $\ell$. We notice that for small values of the torus length $m$ the approximation is not tight; however, it becomes tighter relatively fast. In Fig. 9, we plot the percent error of this approximation vs the torus length $m$.

Figure 8: Number of closed walks vs torus length.

## 8 Conclusions

In this work, we have analyzed information spreading on almost torus networks, assuming the Susceptible-Infected-Susceptible spreading model as a metric of information spreading. Almost torus networks consist of the torus network topology where some nodes or edges have been removed. We have provided analytical expressions for the characteristic polynomial of these graphs, as well as tight lower bounds for its computation. Using these expressions, we are able to estimate their spectral radius and thus to know how information spreads on these networks. Simulation results that validate our analysis were presented.

## Acknowledgments

The work presented in this paper has been partially carried out at LINCS (http://www.lincs.fr).

Figure 9: Percent difference on the number of closed walks between the torus and the lattice vs torus length.

## References

[1] F. Brauer, P. van den Driessche, and J. Wu, eds., *Mathematical Epidemiology*, vol. 1945 of *Lecture Notes in Mathematics*. Berlin: Springer-Verlag, 2008. Mathematical Biosciences Subseries.

[2] D. J. Daley and J. Gani, *Epidemic Modelling: An Introduction*. Cambridge, UK: Cambridge University Press, 1999.

[3] M. Draief and L. Massoulié, *Epidemics and Rumours in Complex Networks*. Cambridge University Press, 2010.

[4] J. Kephart and S. White, "Directed-graph epidemiological models of computer viruses," in *Proceedings of the 1991 IEEE Computer Society Symposium on Research in Security and Privacy*, pp. 343–359, May 1991.

[5] X. Zhang, G. Neglia, J. Kurose, and D. Towsley, "Performance modeling of epidemic routing," *Computer Networks*, vol. 51, pp. 2867–2891, July 2007.

[6] M. Bampo, M. T. Ewing, D. R. Mather, D. Stewart, and M. Wallace, "The effects of the social structure of digital networks on viral marketing performance," vol. 19, pp. 273–290, 2008.

[7] A. J. Ganesh, L. Massoulié, and D. F. Towsley, "The effect of network topology on the spread of epidemics," in *INFOCOM*, pp. 1455–1466, IEEE, 2005.

[8] J. Leskovec, L. A. Adamic, and B. A. Huberman, "The dynamics of viral marketing," in *ACM Conference on Electronic Commerce*, 2006.

[9] X. Wei, N. Valler, B. A. Prakash, I. Neamtiu, M. Faloutsos, and C. Faloutsos, "Competing memes propagation on networks: a case study of composite networks," *Computer Communication Review*, vol. 42, no. 5, pp. 5–12, 2012.

[10] W. Tang, Z. Lan, N. Desai, D. Buettner, and Y. Yu, "Reducing fragmentation on torus-connected supercomputers," in *IPDPS*, pp. 828–839, IEEE, 2011.

[11] TOP500 Supercomputing website, http://www.top500.org.

[12] A. Gara, M. A. Blumrich, D. Chen, G. L.-T. Chiu, P. Coteus, M. E. Giampapa, R. A. Haring, P. Heidelberger, D. Hoenicke, G. V. Kopcsay, T. A. Liebsch, M. Ohmacht, B. D. Steinmacher-Burow, T. Takken, and P. Vranas, "Overview of the Blue Gene/L system architecture," *IBM Journal of Research and Development*, vol. 49, pp. 195–212, March 2005.

[13] N. R. Adiga, M. A. Blumrich, D. Chen, P. Coteus, A. Gara, M. E. Giampapa, P. Heidelberger, S. Singh, B. D. Steinmacher-Burow, T. Takken, M. Tsao, and P. Vranas, "Blue Gene/L torus interconnection network," *IBM Journal of Research and Development*, vol. 49, pp. 265–276, March/May 2005.

[14] IBM Blue Gene team, "Overview of the IBM Blue Gene/P project," *IBM Journal of Research and Development*, vol. 52, pp. 199–220, January 2008.

[15] J. Brooks and G. Kirschner, "Cray XT3 and Cray XT series of supercomputers," in *Encyclopedia of Parallel Computing* (D. A. Padua, ed.), pp. 457–470, Springer, 2011.

[16] D. Chen, N. Eisley, P. Heidelberger, R. Senger, Y. Sugawara, S. Kumar, V. Salapura, D. Satterfield, B. Steinmacher-Burow, and J. Parker, "The IBM Blue Gene/Q interconnection network and message unit," in *High Performance Computing, Networking, Storage and Analysis (SC), 2011 International Conference for*, pp. 1–10, November 2011.

[17] Y. Ajima, S. Sumimoto, and T. Shimizu, "Tofu: A 6D mesh/torus interconnect for exascale computers," *IEEE Computer*, vol. 42, pp. 36–40, November 2009.

[18] H. W. Hethcote, "The mathematics of infectious diseases," *SIAM Review*, vol. 42, pp. 599–653, December 2000.

[19] Y. Wang, D. Chakrabarti, C. Wang, and C. Faloutsos, "Epidemic spreading in real networks: An eigenvalue viewpoint," in *SRDS*, pp. 25–34, IEEE Computer Society, 2003.

[20] D. Chakrabarti, Y. Wang, C. Wang, J. Leskovec, and C. Faloutsos, "Epidemic thresholds in real networks," *ACM Trans. Inf. Syst. Secur.*, vol. 10, no. 4, 2008.

[21] M. W. Hirsch and S. Smale, *Differential Equations, Dynamical Systems, and Linear Algebra*. New York: Academic Press, 1974.

[22] L. Lovász, "Eigenvalues of graphs."

[23] G. Cramer, *Introduction à l'analyse des lignes courbes algébriques* (in French), pp. 656–659, Geneva: Europeana, 1750.

[24] C. D. Godsil, "Walk generating functions, Christoffel-Darboux identities and the adjacency matrix of a graph," *Combinatorics, Probability and Computing*, vol. 1, no. 1, pp. 13–25, 1992.

[25] D. Spielman, "Spectral graph theory and its applications."

[26] A. Chan and C. D. Godsil, "Symmetry and eigenvectors," in *Graph Symmetry (Montreal, PQ, 1996)*, vol. 497 of *NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.*, pp. 75–106, Dordrecht: Kluwer Acad. Publ., 1997.

# Direct measurement of ultrafast temporal wavefunctions

Kazuhisa Ogawa,¹,* Takumi Okazaki,¹ Hirokazu Kobayashi,² Toshihiro Nakanishi,³ and Akihisa Tomita¹

¹Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
²School of System Engineering, Kochi University of Technology, Kochi 782-8502, Japan
³Department of Electronic Science and Engineering, Kyoto University, Kyoto 615-8510, Japan

(Dated: June 30, 2021)

The large capacity and robustness of information encoding in the temporal mode of photons is important in quantum information processing, in which characterizing temporal quantum states with high usability and time resolution is essential. We propose and demonstrate a direct measurement method of temporal complex wavefunctions for weak light at a single-photon level with subpicosecond time resolution. Our direct measurement is realized by ultrafast metrology of the interference between the light under test and self-generated monochromatic reference light; no external reference light or complicated post-processing algorithms are required. Hence, this method is versatile and potentially widely applicable for temporal state characterization.

# I. INTRODUCTION

The temporal-spectral mode of photons offers an attractive platform for quantum information processing in terms of a large capacity due to its high dimensionality and robustness in fiber and waveguide transmission. To date, many applications using the temporal-spectral mode have been proposed and realized in quantum information processing fields such as quantum computation, quantum cryptography, and quantum metrology [1-10]. In these applications, the full characterization of quantum states, i.e., complex wavefunctions, is crucial for developing reliable quantum operations. In addition, temporal-mode characterization for high-speed and precise processing often requires ultrafast time resolution, such as on the subpicosecond scale.

Several established methods, such as frequency-resolved optical gating (FROG) and spectral phase interferometry for direct electric field reconstruction (SPIDER), are well known for measuring the temporal-spectral mode of classical light [11]. These methods, however, utilize the nonlinear optical processes of the light under test, which are difficult to observe for weak light at the single-photon level. In recent years, various methods for characterizing the temporal-spectral mode of quantum light have been demonstrated, such as single photons and entangled photon pairs [12-23], and some have achieved ultrafast (subpicosecond) time resolution [12, 13, 16, 19-23]. While these methods differ in the details of their measurement procedures, they have a common procedure to reconstruct the form of the wavefunction: projective measurements for the entire temporal (or spectral or other basis) wavefunction have to be performed first, and then the measurement data is post-processed, as shown in Fig. 1(a). In other words, even for acquiring only one part of the wavefunction, measurement of the entire wavefunction is essential. Each set of measurement data before post-processing contains partial information of the wavefunction but is not itself the wavefunction.

As a more suitable measurement method for the form of the wavefunction, direct measurement [24] is the focus of this study. The direct measurement of a wavefunction $\psi(t)$ is defined as the measurement that can reconstruct the complex value $\psi(t_0)$ only using the measurement data at the point $t = t_0$, as shown in Fig. 1(b); that is, the measurement data at $t_0$ directly correspond to the complex value $\psi(t_0)$. Direct measurement was first demonstrated for the transverse spatial wavefunction of single photons [24] using a technique called weak measurement [25], and then for wavefunctions and density matrices in various degrees of freedom [26-30]. While direct measurement was introduced to give the operational meaning of the complex-valued wavefunction, it also provides a practical advantage of requiring only one measurement basis. Although direct measurement using weak measurement has drawbacks in its approximation error and low efficiency due to the nature of weak measurement, in recent years it has been reported that direct measurement can also be realized using strong (projection) measurement both theoretically [31-33] and experimentally [34, 35]. Therefore, applying direct measurement using strong measurement to the temporal wavefunction of photons can provide a practical characterization method for temporal wavefunctions, which avoids the requirement of post-processing the measurement data of the entire wavefunction.

\* ogawak@ist.hokudai.ac.jp

FIG. 1. Comparison of the concepts of the conventional and direct measurement methods. (a) In conventional measurement methods, measurements of the entire wavefunction are usually performed first, and then the measurement data are post-processed to reconstruct the entire wavefunction. (b) In the direct measurement method, the measurement data at the time $t_0$ directly correspond to the complex value $\psi(t_0)$ of the wavefunction $\psi(t)$ at $t_0$.

In this paper, we propose a direct measurement method of temporal complex wavefunctions that can be performed for weak light at a single-photon level with subpicosecond time resolution. Our direct measurement is realized by ultrafast metrology (time gate measurement) of the interference between the light under test and the self-generated monochromatic reference light with several phase differences. This mechanism is simple compared to other measurement methods of the temporal-spectral mode of quantum light; that is, it does not require external reference light or complicated post-processing of the measurement data. We also experimentally demonstrate this direct measurement method of the temporal wavefunction of light at a single-photon level and examine the validity of the measurement results.

# II. THEORY

The proposed method for direct measurement of the temporal wavefunction is based on our previous study [33]. The wavefunction under test $\psi(t)$ is the temporal representation of the pulse-mode state $|\psi\rangle$, and its spectral representation $\tilde{\psi}(\omega)$ is given by the Fourier transform of $\psi(t)$. $\psi(t)$ can be represented by the product of the complex-valued envelope function $\psi_{\text{env}}(t)$ and the carrier term $e^{-i\omega_0 t}$ as $\psi(t) = \psi_{\text{env}}(t)e^{-i\omega_0 t}$, where $\omega_0$ is the reference carrier frequency. We assume that $\omega_0$ is known and then consider measuring $\psi_{\text{env}}(t)$ instead of $\psi(t)$. The Fourier transform of $\psi_{\text{env}}(t)$, $\tilde{\psi}_{\text{env}}(\omega)$, satisfies the relation $\tilde{\psi}_{\text{env}}(\omega) = \tilde{\psi}(\omega + \omega_0)$.

FIG. 2. Procedure of direct measurement of the temporal wavefunction. (a), (b) Temporal and spectral representations of the pulse-mode state $|\psi\rangle$, respectively. Its polarization mode is set to the diagonal state $|D\rangle$. (c) Wavefunction after applying the polarization-dependent frequency filter at $\omega = 0$ (the actual frequency is $\omega_0$) to the wavefunction of (b). Only the frequency $\omega = 0$ component remains for the horizontally polarized light. (d) Temporal representation of the wavefunction of (c). The horizontally polarized component has an almost uniform distribution, which serves as a reference for the magnitude and phase of the vertically polarized temporal wavefunction $\psi_{\text{env}}(t)$. (e) Real and imaginary parts of $\psi_{\text{env}}(t)$, which are reconstructed by combining the projection probability distributions $P(t, D)$, $P(t, A)$, $P(t, R)$, and $P(t, L)$ of time and polarization measurement for the wavefunction of (d).

The basic mechanism common to most direct measurements [24, 26–35] is the interference between the signal wavefunction under test $\psi(t) = \psi_{\text{env}}(t)e^{-i\omega_0 t}$ and a self-generated uniform reference wave $\psi_0 e^{-i\omega_0 t}$ with four phase differences $0, \pi/2, \pi$, and $3\pi/2$. The probability that their superposition state is projected onto time $t$ and phase difference $\theta$ is given by $P(t, \theta) = |\psi(t) + e^{-i(\omega_0 t+\theta)}\psi_0|^2 = |\psi_{\text{env}}(t) + e^{-i\theta}\psi_0|^2$. The differences between $P(t, 0)$ and $P(t, \pi)$ and between $P(t, \pi/2)$ and $P(t, 3\pi/2)$ give the real and imaginary parts of $\psi_{\text{env}}(t)$, respectively:

$$ P(t, 0) - P(t, \pi) \propto \operatorname{Re}[\psi_{\text{env}}(t)], \quad (1) $$

$$ P(t, \pi/2) - P(t, 3\pi/2) \propto \operatorname{Im}[\psi_{\text{env}}(t)]. \quad (2) $$

Their proportional coefficients are equal and can be determined by the normalization condition of the wavefunction.
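
The readout of Eqs. (1) and (2) can be simulated for a toy envelope. In this sketch we write the reference phase as $e^{+i\theta}$ (a sign convention of ours, chosen so that both differences come out with the same coefficient $4\psi_0$), and the chirped Gaussian test pulse is purely illustrative:

```python
import numpy as np

# Toy complex envelope under test: a chirped Gaussian pulse (illustrative).
t = np.linspace(-5, 5, 1001)
psi_env = np.exp(-t**2 / 2) * np.exp(1j * 0.5 * t**2)

# Uniform self-generated reference with four phase differences theta.
psi0 = 0.5
P = {th: np.abs(psi_env + psi0 * np.exp(1j * th)) ** 2
     for th in (0, np.pi / 2, np.pi, 3 * np.pi / 2)}

# The probability differences are proportional to Re and Im of the envelope,
# here with the common proportionality constant 4*psi0.
re = P[0] - P[np.pi]
im = P[np.pi / 2] - P[3 * np.pi / 2]

assert np.allclose(re, 4 * psi0 * psi_env.real)
assert np.allclose(im, 4 * psi0 * psi_env.imag)
```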

To realize the above mechanism, our direct measurement method [33] uses a qubit (two-state quantum system) probe mode to prepare the four phase differences; we utilize the polarization mode of the photons spanned by the horizontal and vertical states $|H\rangle$ and $|V\rangle$. We define the four polarization states as follows: diagonal $|D\rangle := (|H\rangle + |V\rangle)/\sqrt{2}$, anti-diagonal $|A\rangle := (|H\rangle - |V\rangle)/\sqrt{2}$, right-circular $|R\rangle := (|H\rangle + i|V\rangle)/\sqrt{2}$, and left-circular $|L\rangle := (|H\rangle - i|V\rangle)/\sqrt{2}$. The procedure of our direct measurement of the temporal wavefunction is shown in Fig. 2. Let the initial state be $|\Psi_0\rangle := |\psi\rangle|D\rangle = |\psi\rangle(|H\rangle + |V\rangle)/\sqrt{2}$. The temporal and spectral representations of $|\Psi_0\rangle$ are shown in Figs. 2(a) and (b), respectively. First, we extract the frequency $\omega = 0$ component (the actual frequency is $\omega_0$) from the horizontally polarized light using a polarization-dependent frequency filter. This operation is ideally described by the projection operator $|\omega_0\rangle\langle\omega_0| \otimes |H\rangle\langle H| + \hat{1} \otimes |V\rangle\langle V|$, and the unnormalized state after the projection is given by $|\Psi_1\rangle := (|\omega_0\rangle\langle\omega_0|\psi\rangle|H\rangle + |\psi\rangle|V\rangle)/\sqrt{2}$. Second, we perform projection measurements of time and polarization for $|\Psi_1\rangle$. The projections onto D, A, R, L polarizations correspond to the preparations of the four phase differences $0, \pi, \pi/2$, and $3\pi/2$, respectively. The projection operator onto time $t$ and polarization $\phi$ is described as $|t\rangle\langle t| \otimes |\phi\rangle\langle\phi|$, and its projection probability is given by $P(t, \phi) = \langle\Psi_1|(|t\rangle\langle t| \otimes |\phi\rangle\langle\phi|)|\Psi_1\rangle/\langle\Psi_1|\Psi_1\rangle$. Using $P(t, \phi)$ for $\phi = D, A, R$, and $L$, the real and imaginary parts of $\psi_{\text{env}}(t)$ are obtained as

$$
\begin{align}
P(t, \text{D}) - P(t, \text{A}) &\propto \text{Re}[\langle\psi|\omega_0\rangle\langle\omega_0|t\rangle\langle t|\psi\rangle] \nonumber \\
&\propto \text{Re}[e^{i\omega_0 t}\psi(t)] = \text{Re}[\psi_{\text{env}}(t)], \tag{3}
\end{align}
$$

$$
P(t,\text{R}) - P(t,\text{L}) \propto \mathrm{Im}[\psi_{\mathrm{env}}(t)], \quad (4)
$$

where $\langle\psi|\omega_0\rangle$ is a constant that does not depend on $t$ and $\langle\omega_0|t\rangle = e^{i\omega_0 t}/\sqrt{2\pi}$.
|
| 95 |
+
|
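
As a consistency check of Eqs. (3) and (4), the four projection probabilities can be simulated numerically. The sketch below (all pulse parameters are illustrative, not taken from the experiment) builds the filtered H component and the unfiltered V component of $|\Psi_1\rangle$, forms the D, A, R, and L projection probabilities, and recovers $\psi_{\mathrm{env}}(t)$ from their differences:

```python
import numpy as np

# Numerical sketch of Eqs. (3)-(4). After the polarization-dependent
# frequency filter, the H component is the self-generated reference wave
# <t|omega_0><omega_0|psi> and the V component is the signal <t|psi>.
# All parameters below are illustrative.

t = np.linspace(-10e-12, 10e-12, 2001)                  # time axis [s]
w0 = 2 * np.pi * 192e12                                 # carrier frequency [rad/s]
carrier = np.exp(-1j * w0 * t)

psi_env = np.sinc(t / 2e-12) * np.exp(1j * 0.3e12 * t)  # test envelope (hypothetical)
v = psi_env * carrier                                   # <t|psi>, V component
c = 0.2                                                 # <omega_0|psi>/sqrt(2*pi), real constant
h = c * carrier                                         # filtered H component

# Projection probabilities onto D, A, R, L, with |L> = (|H> - i|V>)/sqrt(2)
P_D = np.abs(h + v) ** 2 / 2
P_A = np.abs(h - v) ** 2 / 2
P_R = np.abs(h - 1j * v) ** 2 / 2
P_L = np.abs(h + 1j * v) ** 2 / 2

re = P_D - P_A                                          # proportional to Re[psi_env(t)], Eq. (3)
im = P_R - P_L                                          # proportional to Im[psi_env(t)], Eq. (4)
recovered = (re + 1j * im) / (2 * c)                    # undo the known constant factor
```

Because the reference amplitude is constant in $t$, the two probability differences reproduce the real and imaginary parts of the envelope up to a single overall factor.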

Here, we emphasize the following two points. First, our measurement method satisfies the definition of direct measurement mentioned previously. Indeed, to obtain the complex value of $\psi_{\text{env}}(t_0)$, this measurement method requires only the four projection probabilities $P(t_0, \phi)$ ($\phi$ = D, A, R, L) at time $t_0$. Second, our direct measurement method is more accurate and efficient than conventional direct measurement methods using weak measurement [24, 26–30]. Our method causes interference between the signal and the self-generated uniform reference wave using the polarization-dependent frequency filter (a projection measurement) instead of weak measurement. Therefore, it avoids the approximation error and the low measurement efficiency associated with weak measurement. We note that the polarization degree of freedom, which provides the four phase differences in the interference, can be replaced by another degree of freedom, such as a path mode, when the polarization mode is already in use or too unstable to use.

### III. EXPERIMENTS

We demonstrate the direct measurement of the temporal wavefunction using the measurement system shown in Fig. 3. The femtosecond fiber laser (Menlo Systems C-Fiber 780) emits two synchronized pulsed light beams with central wavelengths of 1560 nm and 780 nm (repetition rate 100 MHz). The 1560 nm beam is used as the signal under test, and the 780 nm beam [76.5 mW, 79.2 fs full width at half maximum (FWHM)] as the gate pulse for the time gate measurement [36]. We prepare the signal power in the following two conditions using the attenuator: the classical-light (CL) condition, in which the average photon number is 366 photons/pulse (4.69 nW); and the single-photon-level (SPL) condition, in which the average photon number is 0.58 photons/pulse (7.47 pW) and the probability of one or fewer photons per pulse is 0.885. The SPL condition is used to demonstrate that our direct measurement system works even for a signal as weak as the single-photon level.
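
The quoted photon numbers per pulse follow from the average power, the repetition rate, and the 1560 nm photon energy; assuming Poissonian photon statistics (an assumption on our part, since the source statistics are not restated here) also reproduces the quoted probability of one or fewer photons per pulse:

```python
import math

# Back-of-envelope check of the CL and SPL photon numbers per pulse from
# the average power, the 100 MHz repetition rate, and the 1560 nm photon
# energy. The Poisson assumption for p01 is ours.

h, c = 6.62607015e-34, 2.99792458e8   # Planck constant [J s], speed of light [m/s]
rep_rate = 100e6                      # pulses per second
e_photon = h * c / 1560e-9            # photon energy at 1560 nm [J]

def photons_per_pulse(avg_power_w):
    return avg_power_w / rep_rate / e_photon

n_cl = photons_per_pulse(4.69e-9)     # classical-light condition, ~366 photons/pulse
n_spl = photons_per_pulse(7.47e-12)   # single-photon-level condition, ~0.58 photons/pulse

# Probability of one or fewer photons per pulse for a Poisson distribution
p01 = math.exp(-n_spl) * (1 + n_spl)  # ~0.885
```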

The 1560 nm beam then enters the 4-f system composed of the gratings (600 grooves/mm) and lenses (focal length $f = 300$ mm). At the center of the 4-f system, the spectral distribution is mapped onto the transverse spatial distribution, where state preparation followed by polarization-dependent frequency filtering is performed. As seen in Fig. 3(b), two beam displacers (BDs) are set in the 4-f system to divide the optical path according to the polarization; the polarization-dependent frequency filter is realized by inserting a slit (293 µm width) in one of the paths. In contrast, the state preparation before the slit is performed equally for the two beams. After the state preparation followed by polarization-dependent frequency filtering, the polarizations of the two beams are exchanged by the half-wave plate (HWP) and then combined by the second BD so that the two optical path lengths are equal.

In the state preparation, we prepare the three types of states shown in Fig. 3(c). A variable slit with gap width $w$ and displacement $s$ is used to quantitatively evaluate
---PAGE_BREAK---

FIG. 3. Experimental setup. (a) Top view of the whole system. (b) Side view of the 4-f system. HWP: half-wave plate, QWP: quarter-wave plate, BD: beam displacer, PBS: polarizing beam splitter, BBO: β-BaB$_{2}$O$_{4}$ crystal. (c) Details of state preparation. (i) Variable slit ($w$: gap width, $s$: displacement of the gap center from $x_0$, $x_0$: center position of the slit for polarization-dependent frequency filtering). (ii) Slit ($w$ = 2 mm, $s$ = 0 mm) and coverglass (170 ± 5 µm thickness). (iii) Stripe mask (0.5 mm gap) and two coverglasses.

the measured temporal wavefunction. The coverglass is used to cause a phase change. As the magnitude of the phase change depends sensitively on the inclination of the coverglass, we assume that this magnitude is unknown. The combination of the stripe mask and coverglasses is used to demonstrate the direct measurement of a complicated wavefunction.

After the 4-f system, the beam is projected onto one of the D, A, R, or L polarizations by the HWP, quarter-wave plate, and polarizing beam splitter. Subsequently, the beam is projected onto time $t$ by the time gate measurement, which is realized by sum-frequency generation (SFG) of the signal beam and the 780 nm gate pulse with delay $t$. In SFG, these two beams are focused on the β-BaB$_{2}$O$_{4}$ crystal by the lens ($f = 50$ mm), and their sum-frequency light (520 nm wavelength) is emitted at an intensity proportional to the product of the two input temporal intensities. By scanning the delay $t$ of the gate pulse, sum-frequency light with an intensity proportional to the temporal intensity distribution of the signal light is extracted. Finally, the sum-frequency light is spatially and spectrally filtered to remove stray light (not shown in the figure) and then detected by a single-photon counting module (Laser Components COUNT-NIR).

For comparison, we additionally perform intensity (projection) measurements in time and frequency for the state under test in the CL condition. The state under test is extracted from the output light of the 4-f system by a projection measurement onto the V polarization. The intensity measurements in time and frequency are realized by the time gate measurement and by an optical spectrum analyzer (Advantest Q8384), respectively. The obtained temporal and spectral intensity distributions are used to examine the validity of the direct measurement results.

We note that the spectral width $\delta\omega$ extracted by the polarization-dependent frequency filter (1.08 THz FWHM) is not sufficiently small compared with those of the states under test generated by the slit or the stripe mask (~6 THz). In this condition, the spectral wavefunction after the frequency filter should be approximated by the rectangle function $\mathrm{rect}(\omega/\delta\omega)$, which is zero outside the interval $[-\delta\omega/2, \delta\omega/2]$ and unity inside it. In this case, the right sides of Eqs. (3) and (4) are replaced by $\mathrm{sinc}(\delta\omega t/2)\,\mathrm{Re}[\psi_{\mathrm{env}}(t)]$ and $\mathrm{sinc}(\delta\omega t/2)\,\mathrm{Im}[\psi_{\mathrm{env}}(t)]$, respectively, where $\mathrm{sinc}(x) := \sin(x)/x$. To obtain $\mathrm{Re}[\psi_{\mathrm{env}}(t)]$ and $\mathrm{Im}[\psi_{\mathrm{env}}(t)]$, we make a correction by dividing the measured wavefunctions by $\mathrm{sinc}(\delta\omega t/2)$, which is independent of $\psi_{\mathrm{env}}(t)$ and was determined by a prior measurement. On the other hand, the time width of the gate pulses (79.2 fs FWHM) is sufficiently small compared with those of the states under test (~3 ps); hence, we assume here that the effect of the finite width of the time measurement can be ignored. The detailed calculation accounting for the effects of both the finite frequency and time widths is given in Appendix A.
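
The correction step itself is a pointwise division by the known sinc envelope. A minimal sketch (the envelope shape and filter width here are synthetic, chosen only so the sinc has no zeros in the plotted range):

```python
import numpy as np

# Sketch of the finite-bandwidth correction: a measured quadrature equals
# sinc(delta_omega * t / 2) times the true one, so the true quadrature is
# recovered by dividing out the (pre-measured) sinc envelope.

def sinc(x):
    # sinc(x) := sin(x)/x with sinc(0) = 1, as defined in the text
    # (np.sinc uses the normalized convention sin(pi x)/(pi x))
    return np.sinc(x / np.pi)

t = np.linspace(-4e-12, 4e-12, 801)               # time axis [s]
delta_omega = 1.08e12                              # filter width (illustrative)

true_env = np.exp(-(t / 1.5e-12) ** 2) * np.exp(1j * 0.4e12 * t)
measured = sinc(delta_omega * t / 2) * true_env    # what the finite filter yields

corrected = measured / sinc(delta_omega * t / 2)   # the correction step
```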

In the following, we show the experimental results for state preparations (i)–(iii) in Fig. 3(c) in order. First, the spectral wavefunction generated by the variable slit with gap width $w$ and displacement $s$ is given by the rectangle function $\mathrm{rect}[(\omega - \omega_c)/\Delta\omega]$. The spectral width $\Delta\omega$ and central frequency $\omega_c$ are expressed as $\Delta\omega = \alpha w$
---PAGE_BREAK---

FIG. 4. Results of the direct measurement of the wavefunction generated by the variable slit ($w$ = 2.0 mm, $s$ = 0.0 mm). (a) 3D plot of the measured temporal wavefunction (black line and dots). The red, green, and navy lines and dots are its projections on the real, imaginary, and amplitude-phase planes, respectively. The solid lines and the dots are the measurement results in the CL and SPL conditions, respectively. In the SPL condition, photon counting was performed for 25 s per measurement point. The error bars are omitted here. (b) Intensity (upper panel) and phase distribution (lower panel) calculated from the measured temporal wavefunction. The red solid line and blue dots are the measurement results in the CL and SPL conditions, respectively. The error bars were calculated from the square root of the counted photon number (shot noise). The green dotted line in the upper panel is the temporal intensity distribution obtained by the time gate measurement of the wavefunction under test. (c) Intensity (upper panel) and phase distribution (lower panel) of the spectral wavefunction obtained by Fourier-transforming the measured temporal wavefunction. The red solid line and blue dashed line are the distributions in the CL and SPL conditions, respectively. The green dotted line in the upper panel is the spectral intensity distribution obtained using the optical spectrum analyzer for the wavefunction under test.

and $\omega_c = \alpha s$, respectively, where the proportionality constant $\alpha := 2.41$ THz/mm is derived from the geometrical configuration of our 4-f system. The temporal wavefunction obtained by Fourier-transforming $\mathrm{rect}[(\omega - \omega_c)/\Delta\omega]$ is $e^{i\omega_c t}\,\mathrm{sinc}(\Delta\omega t/2)$, and the time width $\Delta t$ between the two central zeros of this sinc function and the phase gradient $\kappa$ are given by $\Delta t = 4\pi/\Delta\omega = 4\pi/(\alpha w)$ and $\kappa = \omega_c = \alpha s$, respectively. Therefore, in this state preparation, the form of the temporal wavefunction can be controlled quantitatively by changing $w$ and $s$.
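
These relations can be checked by integrating the rectangular spectrum numerically. In the sketch below the value of $\alpha$ is taken from the text, but its interpretation in rad/s per mm is our assumption; all other numbers are illustrative:

```python
import numpy as np

# Numerical check: the Fourier transform of rect[(omega - omega_c)/delta_omega]
# is proportional to e^{i omega_c t} sinc(delta_omega t / 2), with central
# zeros separated by delta_t = 4*pi/delta_omega and phase gradient omega_c.

alpha = 2.41e12                      # [rad/s per mm], assumption
w, s = 2.0, 0.4                      # gap width and displacement [mm]
d_omega, omega_c = alpha * w, alpha * s

t = np.linspace(-6e-12, 6e-12, 801)
omega = np.linspace(omega_c - d_omega / 2, omega_c + d_omega / 2, 4001)

# Trapezoidal integration of e^{i omega t} over the rect window
phase = np.exp(1j * np.outer(t, omega))
psi_env = (phase[:, :-1] + phase[:, 1:]).sum(axis=1) * (omega[1] - omega[0]) / 2

# Closed form: delta_omega * e^{i omega_c t} * sinc(delta_omega t / 2)
analytic = d_omega * np.exp(1j * omega_c * t) * np.sinc(d_omega * t / (2 * np.pi))
```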

We display the 3D plot of the result of the direct measurement of the temporal wavefunction generated by the variable slit ($w$ = 2.0 mm, $s$ = 0.0 mm) in Fig. 4(a). There is no significant difference between the measurement results under the CL condition (lines) and those under the SPL condition (dots), although some fluctuation due to shot noise is observed in the SPL results. The intensity (square of the amplitude) and phase distributions of the measured temporal wavefunction are shown in Fig. 4(b), and those in the frequency domain, obtained by Fourier-transforming the measured temporal wavefunction, are shown in Fig. 4(c). Furthermore, the temporal and spectral intensity distributions obtained by the time gate measurement and the optical spectrum analyzer are displayed as green dotted lines in Figs. 4(b) and (c), respectively. The agreement of these intensity-measurement distributions with the intensity distribution reconstructed from the directly measured wavefunction supports the validity of our direct measurement results. A quantitative comparison between them using the classical fidelity is discussed at the end of this section.

Next, we examine the change in the measured temporal wavefunction when the gap width $w$ and displacement $s$ of the variable slit are changed. All these measurements are performed in the CL condition. Figure 5(a) shows the direct measurement results for the magnitude of the temporal wavefunction when $w$ is changed from 1.4 mm to 2.6 mm while $s$ is fixed at $s = 0$ mm. The time widths $\Delta t$ of the measured temporal amplitude, obtained by fitting the sinc function $A|\mathrm{sinc}[2\pi(t - t_c)/\Delta t]|$ to the measured curves, are plotted versus $w$ in Fig. 5(b). The values are in good agreement with the theoretical curve $\Delta t = 4\pi/(\alpha w)$ (black line). Figure 5(c) shows the direct measurement results for the phase of the temporal wavefunction when $s$ is changed from 0.0 mm to 0.8 mm while $w$ is fixed at $w = 2.0$ mm. The phase gradients $\kappa$ of the measured temporal phase, obtained by fitting a linear function to the measured curves in the range $t \in [3.75$ ps, $5.75$ ps$]$, are plotted versus the displacement $s$ in Fig. 5(d). These values are also in good agreement with the theoretical curve $\kappa = \alpha s + \kappa_0$ (black line), where the offset value $\kappa_0 := -0.11$ ps⁻¹ is determined from the phase gradient at $s = 0$ mm.
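
The width-extraction fit can be sketched as follows. For simplicity a grid search over $\Delta t$ stands in for the least-squares fit used in the paper, the amplitude and center are held at their true values, and the data are synthetic and noise-free:

```python
import numpy as np

# Sketch of fitting A|sinc(2*pi*(t - t_c)/delta_t)| to a measured temporal
# amplitude to extract the time width delta_t (here via a 1D grid search).

def sinc(x):
    return np.sinc(x / np.pi)            # sin(x)/x convention of the text

def model(t, amp, delta_t, t_c):
    return amp * np.abs(sinc(2 * np.pi * (t - t_c) / delta_t))

t = np.linspace(-5.0, 5.0, 501)          # time axis [ps]
true_delta_t = 2.61                      # ~4*pi/(alpha*w) for w = 2.0 mm [ps]
data = model(t, 1.0, true_delta_t, 0.0)  # synthetic "measured" amplitude

candidates = np.linspace(1.0, 5.0, 2001)
errors = [np.sum((model(t, 1.0, dt, 0.0) - data) ** 2) for dt in candidates]
delta_t_fit = candidates[int(np.argmin(errors))]
```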
---PAGE_BREAK---
FIG. 5. Results of the direct measurement when the gap width $w$ and displacement of the gap center $s$ of the variable slit are changed. (a) Measurement results of the magnitude of the temporal wavefunction when $w$ is changed from 1.4 mm to 2.6 mm while $s$ is fixed at $s = 0$ mm. (b) Relationship between $w$ and time width $\Delta t$ obtained from the measured curves. The solid black line represents the theoretical curve $\Delta t = 4\pi/(\alpha w)$. (c) Measurement results of the phase of the temporal wavefunction when $s$ is changed from 0.0 mm to 0.8 mm while $w$ is fixed at $w = 2.0$ mm. (d) Relationship between $s$ and phase gradient $\kappa$ obtained from the measured curves. The solid black line represents the theoretical curve $\kappa = \alpha s + \kappa_0$.

We further demonstrate the direct measurement of the temporal wavefunction generated by the slit ($w = 2.0$ mm, $s = 0.0$ mm) with a coverglass and by the stripe mask with two coverglasses. The measurement results for the slit with a coverglass are shown in Figs. 6(a)–(c). It should be noted that the frequency wavefunction derived from the directly measured time wavefunction shows a stepwise phase change due to the phase added by the coverglass. The magnitude of the obtained phase step cannot be evaluated because its true value is not known in advance, as mentioned above. Nevertheless, the agreement of the spectral intensity distributions derived from the directly measured time wavefunction (red and blue lines) with the results of the frequency intensity measurement (green line) indicates that the characterization of the wavefunction by direct measurement is performed properly. Figures 7(a)–(c) show the measurement results for the stripe mask with two coverglasses, which has a more complicated waveform. In this case as well, the frequency wavefunction derived from the directly measured time wavefunction (red and blue lines) shows two stepwise phase changes as a result of the two coverglasses, and the intensity distributions are in agreement with the results of the frequency intensity measurement (green line). These results support the validity of the direct measurement method of the wavefunction.

FIG. 6. Results of the direct measurement of the temporal wavefunction generated by the slit ($w = 2.0$ mm, $s = 0.0$ mm) and coverglass. The notation of this figure is the same as in Fig. 4.

TABLE I. Classical fidelity (Bhattacharyya coefficient) between the intensity distributions calculated from the results of the direct measurement and those obtained by the projection measurements for panels (b) and (c) in Figs. 4, 6, and 7. CL and SPL indicate the signal power condition under which the direct measurements were performed.

<table><thead><tr><th rowspan="2"></th><th colspan="2">Time domain<br>[Panel (b)]</th><th colspan="2">Frequency domain<br>[Panel (c)]</th></tr><tr><th>CL</th><th>SPL</th><th>CL</th><th>SPL</th></tr></thead><tbody><tr><td>Fig. 4</td><td>0.999</td><td>0.993</td><td>0.995</td><td>0.990</td></tr><tr><td>Fig. 6</td><td>0.998</td><td>0.973</td><td>0.985</td><td>0.974</td></tr><tr><td>Fig. 7</td><td>0.999</td><td>0.976</td><td>0.987</td><td>0.970</td></tr></tbody></table>

---PAGE_BREAK---

FIG. 7. Results of the direct measurement of the temporal wavefunction generated by the stripe mask and coverglasses. The notation of this figure is the same as in Figs. 4 and 6.

Finally, we evaluate the closeness of the intensity distributions of the wavefunctions obtained by the direct measurement to those obtained by the intensity (projection) measurement using the classical fidelity (Bhattacharyya coefficient). The classical fidelity is defined as $\sum_j \sqrt{p_j q_j}$ for two probability distributions $\{p_j\}$ and $\{q_j\}$. Table I shows the classical fidelity between the intensity distributions obtained by the direct measurement and those obtained by the projection measurements for panels (b) and (c) in Figs. 4, 6, and 7. We can see that these fidelities show high values close to 1.
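
The classical fidelity is straightforward to compute from two normalized intensity distributions; a minimal sketch (the example distributions are synthetic, not the measured data):

```python
import numpy as np

# Classical fidelity (Bhattacharyya coefficient) sum_j sqrt(p_j * q_j)
# between two intensity distributions, normalized to probabilities first.

def classical_fidelity(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()   # normalize to probability distributions
    return float(np.sum(np.sqrt(p * q)))

x = np.linspace(-3, 3, 200)
p = np.exp(-x ** 2)                   # reference intensity distribution
q = np.exp(-(x - 0.05) ** 2)          # slightly shifted copy

f = classical_fidelity(p, q)          # close to, and at most, 1
```

By the Cauchy–Schwarz inequality the coefficient is 1 exactly when the two normalized distributions coincide, which is why values near 1 in Table I indicate agreement.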

### IV. DISCUSSION

First, we describe the performance of the direct measurement system used in our experiment. The time resolution is determined by the time width of the gate pulse and the phase-matching bandwidth of SFG. In our case, the latter effect is negligible, and the time resolution is 79.2 fs FWHM, which gives subpicosecond resolution. On the other hand, the measurable range in the time domain is determined by the time width of the self-generated reference light, which has the shape of a sinc function; the time width between the two central zeros of this sinc function is 11.7 ps. Therefore, the dynamic range of our direct measurement system is evaluated to be 11.7 ps/79.2 fs = 148.

Next, we remark on previous studies related to the direct measurement of the temporal wavefunction. A recently reported experiment on δ-quench measurement [37] demonstrated measurement of the temporal mode of light by applying instantaneous phase modulation followed by projection onto a specific frequency. Although this method differs both from direct measurement using weak measurement [24] and from our direct measurement method, it satisfies the definition of direct measurement of the temporal wavefunction. In that measurement, the time resolution did not reach the subpicosecond scale, and classical light much stronger than the single-photon level was used as the light under test.

In addition, a temporal-mode measurement method reported over 30 years ago [38] also satisfies the definition of direct measurement. Although it was devised independently of the context of direct measurement, its configuration is similar to that of our direct measurement system. In that measurement, the time resolution reached the subpicosecond scale, but classical light was used as the light under test. As a characterization method for the temporal mode of classical light, it is currently rarely used, in contrast to other sophisticated methods such as FROG and SPIDER. However, its simple configuration makes it suitable for the measurement of single photons, and the significance of our experiment is that it demonstrates this capability.

### V. CONCLUSION

We proposed a direct measurement method for characterizing the temporal wavefunction of single photons and experimentally demonstrated it for several test wavefunctions. The experimental results showed that the direct measurement method works at the single-photon level and can achieve subpicosecond time resolution. We confirmed the validity of the direct measurement by quantitatively evaluating the measurement results obtained with the variable slit for state preparation and by calculating the fidelities between the results of the direct measurement and the intensity distributions obtained by the projection measurement.
---PAGE_BREAK---
This direct measurement method can be applied not only to the temporal-spectral mode but also to other degrees of freedom. In addition, it is expected that the direct measurement method can be extended not only to pure states but also to mixed states and processes; such an expansion of the scope of application of direct measurement is a subject for future research.

### ACKNOWLEDGMENTS

This research was supported by JSPS KAKENHI Grant Number 19K14606, the Matsuo Foundation, and the Research Foundation for Opto-Science and Technology.

### Appendix A: Calculation for the direct measurement method when the resolutions of the frequency filter and the time measurement are finite
Here, we describe the calculation of our direct measurement method when the effects of the finite resolution of the frequency filter and the time measurement are considered. The projection operator of the frequency filter with spectral width $\delta\omega$ is given by $\int_{-\infty}^{\infty} d\omega \operatorname{rect}[(\omega - \omega_0)/\delta\omega] |\omega\rangle\langle\omega|$, where $\operatorname{rect}[(\omega - \omega_0)/\delta\omega]$ is zero outside the interval $[\omega_0 - \delta\omega/2, \omega_0 + \delta\omega/2]$ and unity inside it. The unnormalized resultant state after the polarization-dependent frequency filter is described as
$$|\Psi'_1\rangle = \frac{1}{\sqrt{2}} \left[ \int_{-\infty}^{\infty} d\omega \operatorname{rect}\left(\frac{\omega - \omega_0}{\delta\omega}\right) |\omega\rangle\langle\omega|\psi\rangle|H\rangle + |\psi\rangle|V\rangle \right]. \quad (A1)$$
The time measurement implemented by optical gating is characterized by the positive-operator-valued measure $\int_{-\infty}^{\infty} dt' g_t(t') |t'\rangle\langle t'|$, where $g_t(t')$ is the non-negative gate function centered at $t' = t$. The probability $P'(t, \phi)$ that the results of the time and polarization measurements are $t$ and $\phi$, respectively, is described as
$$P'(t, \phi) = \frac{\langle\Psi'_1| \left[ \int_{-\infty}^{\infty} dt' g_t(t') |t'\rangle\langle t'| \otimes |\phi\rangle\langle\phi| \right] |\Psi'_1\rangle}{\langle\Psi'_1|\Psi'_1\rangle} = \int_{-\infty}^{\infty} dt' g_t(t') \frac{\langle\Psi'_1|(|t'\rangle\langle t'| \otimes |\phi\rangle\langle\phi|)|\Psi'_1\rangle}{\langle\Psi'_1|\Psi'_1\rangle}. \quad (A2)$$
Therefore, we obtain the following results:
$$P(t,\mathrm{D}) - P(t,\mathrm{A}) \propto \int_{-\infty}^{\infty} dt'\, g_t(t') \operatorname{Re} \left[ \int_{-\infty}^{\infty} d\omega \operatorname{rect} \left( \frac{\omega - \omega_0}{\delta\omega} \right) \langle \psi | \omega \rangle \langle \omega | t' \rangle \langle t' | \psi \rangle \right], \quad (A3)$$
$$P(t,\mathrm{R}) - P(t,\mathrm{L}) \propto \int_{-\infty}^{\infty} dt'\, g_t(t') \operatorname{Im} \left[ \int_{-\infty}^{\infty} d\omega \operatorname{rect} \left( \frac{\omega - \omega_0}{\delta\omega} \right) \langle \psi | \omega \rangle \langle \omega | t' \rangle \langle t' | \psi \rangle \right]. \quad (A4)$$
Assuming that $\langle\omega|\psi\rangle$ is the constant value $\langle\omega_0|\psi\rangle$ in the interval $[\omega_0 - \delta\omega/2, \omega_0 + \delta\omega/2]$, the integral with respect to $\omega$ can be calculated as
$$\int_{-\infty}^{\infty} d\omega \operatorname{rect}\left(\frac{\omega - \omega_0}{\delta\omega}\right) \langle\psi|\omega\rangle\langle\omega|t'\rangle = \frac{\langle\psi|\omega_0\rangle}{\sqrt{2\pi}}\, e^{i\omega_0 t'}\, \delta\omega \operatorname{sinc}\left(\frac{\delta\omega t'}{2}\right), \quad (A5)$$
and then we obtain
$$P(t, \mathrm{D}) - P(t, \mathrm{A}) \propto \int_{-\infty}^{\infty} dt'\, g_t(t')\, \operatorname{sinc}\left(\frac{\delta\omega t'}{2}\right) \operatorname{Re}[\psi_{\mathrm{env}}(t')], \quad (A6)$$
$$P(t, \mathrm{R}) - P(t, \mathrm{L}) \propto \int_{-\infty}^{\infty} dt'\, g_t(t')\, \operatorname{sinc}\left(\frac{\delta\omega t'}{2}\right) \operatorname{Im}[\psi_{\mathrm{env}}(t')]. \quad (A7)$$
Furthermore, when the temporal width of the optical gate is sufficiently small compared with that of $\psi_{\mathrm{env}}(t)$, we can
---PAGE_BREAK---
approximate $g_t(t') = \delta(t - t')$ and thus obtain
$$P(t, D) - P(t, A) \propto \text{sinc}\left(\frac{\delta\omega t}{2}\right) \text{Re}[\psi_{\text{env}}(t)], \quad P(t, R) - P(t, L) \propto \text{sinc}\left(\frac{\delta\omega t}{2}\right) \text{Im}[\psi_{\text{env}}(t)]. \qquad (\text{A8})$$
We adopt these approximated results in the main text.
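
The δ-gate approximation leading to the result adopted in the main text can be checked numerically: convolving a sinc-modulated quadrature of ~3 ps width with a 79.2 fs FWHM Gaussian gate changes it only marginally. A sketch, with an illustrative signal shape and filter width:

```python
import numpy as np

# Check of the gate-width approximation: when g_t(t') is much narrower
# than psi_env, the convolutions in Eqs. (A6)-(A7) are well approximated
# by the integrand evaluated at t' = t, as in Eq. (A8).

t = np.linspace(-10, 10, 2001)                    # time axis [ps]
dt = t[1] - t[0]
d_omega = 1.08                                    # filter width [rad/ps], illustrative

# Stand-in for sinc(d_omega t/2) * Re[psi_env(t)], ~3 ps wide
signal = np.sinc(t * d_omega / (2 * np.pi)) * np.exp(-(t / 3.0) ** 2)

sigma = 0.0792 / 2.355                            # 79.2 fs FWHM Gaussian gate [ps]
gate = np.exp(-(t / sigma) ** 2 / 2)
gate /= gate.sum() * dt                           # normalized gate function

blurred = np.convolve(signal, gate, mode="same") * dt
max_error = np.max(np.abs(blurred - signal))      # tiny compared with the signal
```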
|
| 389 |
+
|
| 390 |
+
[1] P. C. Humphreys, W. S. Kolthammer, J. Nunn, M. Barbieri, A. Datta, and I. A. Walmsley, “Continuous-variable quantum computing in optical time-frequency modes using quantum memories,” Phys. Rev. Lett. **113**, 130502 (2014).
|
| 391 |
+
|
| 392 |
+
[2] J. Nunn, L. J. Wright, C. Söller, L. Zhang, I. A. Walmsley, and B. J. Smith, “Large-alphabet time-frequency entangled quantum key distribution by means of time-to-frequency conversion,” Opt. Express **21**, 15959–15973 (2013).
|
| 393 |
+
|
| 394 |
+
[3] J. Mower, Z. Zhang, P. Desjardins, C. Lee, J. H. Shapiro, and D. Englund, “High-dimensional quantum key distribution using dispersive optics,” Phys. Rev. A **87**, 062322 (2013).
|
| 395 |
+
|
| 396 |
+
[4] J. M. Lukens, A. Dezfooliyan, C. Langrock, M. M. Fejer, D. E. Leaird, and A. M. Weiner, “Orthogonal spectral coding of entangled photons,” Phys. Rev. Lett. **112**, 133602 (2014).
|
| 397 |
+
|
| 398 |
+
[5] J. Roslund, R. M. De Araujo, S. Jiang, C. Fabre, and N. Treps, “Wavelength-multiplexed quantum networks with ultrafast frequency combs,” Nat. Photon. **8**, 109–112 (2014).
|
| 399 |
+
|
| 400 |
+
[6] B. Brecht, D. V. Reddy, C. Silberhorn, and M. G. Raymer, “Photon temporal modes: A complete framework for quantum information science,” Phys. Rev. X **5**, 041017 (2015).
|
| 401 |
+
|
| 402 |
+
[7] B. Lamine, C. Fabre, and N. Treps, “Quantum improvement of time transfer between remote clocks,” Phys. Rev. Lett. **101**, 123601 (2008).
|
| 403 |
+
|
| 404 |
+
[8] P. Jian, O. Pinel, C. Fabre, B. Lamine, and N. Treps, “Real-time displacement measurement immune from atmospheric parameters using optical frequency combs,” Opt. Express **20**, 27133–27146 (2012).
|
| 405 |
+
|
| 406 |
+
[9] P. C. Humphreys, B. J. Metcalf, J. B. Spring, M. Moore, X.-M. Jin, M. Barbieri, W. S. Kolthammer, and I. A. Walmsley, “Linear optical quantum computing in a single spatial mode,” Phys. Rev. Lett. **111**, 150501 (2013).
|
| 407 |
+
|
| 408 |
+
[10] P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photon. **10**, 167–170 (2016).
|
| 409 |
+
|
| 410 |
+
[11] I. A. Walmsley and C. Dorrer, “Characterization of ultrashort electromagnetic pulses,” Adv. Opt. Photonics **1**, 308–437 (2009).
|
| 411 |
+
|
| 412 |
+
[12] V. Ansari, J. M. Donohue, M. Allgaier, L. Sansoni, B. Brecht, J. Roslund, N. Treps, G. Harder, and C. Silberhorn, “Tomography and purification of the temporal-mode structure of quantum light,” Phys. Rev. Lett. **120**, 213601 (2018).
|
| 413 |
+
|
| 414 |
+
[13] C. Polycarpou, K. N. Cassemiro, G. Venturi, A. Zavatta, and M. Bellini, “Adaptive detection of arbitrarily shaped ultrashort quantum light states,” Phys. Rev. Lett. **109**, 053602 (2012).
|
| 415 |
+
|
| 416 |
+
|
| 417 |
+
|
| 418 |
+
[14] Z. Qin, A. S. Prasad, T. Brannan, A. MacRae, A. Lezama, and A. I. Lvovsky, “Complete temporal characterization of a single photon,” Light Sci. Appl. **4**, e298–e298 (2015).
|
| 419 |
+
|
| 420 |
+
[15] C. Yang, Z. Gu, P. Chen, Z. Qin, J. F. Chen, and W. Zhang, “Tomography of the temporal-spectral state of subnatural-linewidth single photons from atomic ensembles,” Phys. Rev. Applied **10**, 054011 (2018).
|
| 421 |
+
|
| 422 |
+
[16] W. Wasilewski, P. Kolenderski, and R. Frankowski, “Spectral density matrix of a single photon measured,” Phys. Rev. Lett. **99**, 123601 (2007).
|
| 423 |
+
|
| 424 |
+
[17] Y.-K. Xu, S.-H. Sun, W.-T. Liu, J.-Y. Liu, and P.-X. Chen, “Robust holography of the temporal wave function via second-order interference,” Phys. Rev. A **100**, 042317 (2019).
|
| 425 |
+
|
| 426 |
+
[18] P. Chen, C. Shu, X. Guo, M. M. T. Loy, and S. Du, “Measuring the biphoton temporal wave function with polarization-dependent and time-resolved two-photon interference,” Phys. Rev. Lett. **114**, 010401 (2015).
|
| 427 |
+
|
| 428 |
+
[19] A. O. C. Davis, V. Thiel, M. Karpiński, and B. J. Smith, “Measuring the single-photon temporal-spectral wave function,” Phys. Rev. Lett. **121**, 083602 (2018).
|
| 429 |
+
|
| 430 |
+
[20] A. O. C. Davis, V. Thiel, and B. J. Smith, “Measuring the quantum state of a photon pair entangled in frequency and time,” Optica **7**, 1317–1322 (2020).
|
| 431 |
+
|
| 432 |
+
[21] A. O. C. Davis, V. Thiel, M. Karpiński, and B. J. Smith, “Experimental single-photon pulse characterization by electro-optic shearing interferometry,” Phys. Rev. A **98**, 023840 (2018).
|
| 433 |
+
|
| 434 |
+
[22] J.-P. W. MacLean, S. Schwarz, and K. J. Resch, “Constructing ultrafast energy-time-entangled two-photon pulses,” Phys. Rev. A **100**, 033834 (2019).
|
| 435 |
+
|
| 436 |
+
[23] V. Thiel, A. O. C. Davis, K. Sun, P. D'Ornellas, X.-M. Jin, and B. J. Smith, “Single-photon characterization by two-photon spectral interferometry,” Optics Express **28**, 19315–19324 (2020).
|
| 437 |
+
|
| 438 |
+
[24] J. S. Lundeen, B. Sutherland, A. Patel, C. Stewart, and C. Bamber, “Direct measurement of the quantum wavefunction,” Nature **474**, 188–191 (2011).
|
| 439 |
+
|
| 440 |
+
[25] Y. Aharonov, D. Z. Albert, and L. Vaidman, “How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100,” Phys. Rev. Lett. **60**, 1351–1354 (1988).
|
| 441 |
+
|
| 442 |
+
[26] J. S. Lundeen and C. Bamber, “Procedure for direct measurement of general quantum states using weak measurement,” Phys. Rev. Lett. **108**, 070402 (2012).
|
| 443 |
+
|
| 444 |
+
[27] M. Malik, M. Mirhosseini, M. P. J. Lavery, J. Leach, M. J. Padgett, and R. W. Boyd, “Direct measurement of a 27-dimensional orbital-angular-momentum state vector,” Nat. Commun. **5**, 1–7 (2014).
|
| 445 |
+
---PAGE_BREAK---
|
| 446 |
+
|
| 447 |
+
[28] J. Z. Salvail, M. Agnew, A. S. Johnson, E. Bolduc, J. Leach, and R. W. Boyd, “Full characterization of polarization states of light via direct measurement,” Nat. Photon. **7**, 316–321 (2013).
|
| 448 |
+
|
| 449 |
+
[29] Z. Shi, M. Mirhosseini, J. Margiewicz, M. Malik, F. Rivera, Z. Zhu, and R. W. Boyd, “Scan-free direct measurement of an extremely high-dimensional photonic state,” Optica **2**, 388–392 (2015).
|
| 450 |
+
|
| 451 |
+
[30] G. S. Thekkadath, L. Giner, Y. Chalich, M. J. Horton, J. Banker, and J. S. Lundeen, “Direct measurement of the density matrix of a quantum system,” Phys. Rev. Lett. **117**, 120401 (2016).
|
| 455 |
+
|
| 456 |
+
[31] P. Zou, Z.-M. Zhang, and W. Song, “Direct measurement of general quantum states using strong measurement,” Phys. Rev. A **91**, 052109 (2015).
|
| 457 |
+
|
| 458 |
+
[32] G. Vallone and D. Dequal, “Strong measurements give a better direct measurement of the quantum wave function,” Phys. Rev. Lett. **116**, 040502 (2016).
|
| 459 |
+
|
| 460 |
+
[33] K. Ogawa, O. Yasuhiko, H. Kobayashi, T. Nakanishi, and A. Tomita, “A framework for measuring weak values without weak interactions and its diagrammatic representation,” New J. Phys. **21**, 043013 (2019).
|
| 461 |
+
|
| 462 |
+
|
| 463 |
+
|
| 464 |
+
[34] T. Denkmayr, H. Geppert, H. Lemmel, M. Waegell, J. Dressel, Y. Hasegawa, and S. Sponar, “Experimental demonstration of direct path state characterization by strongly measuring weak values in a matter-wave interferometer,” Phys. Rev. Lett. **118**, 010402 (2017).
|
| 465 |
+
|
| 466 |
+
[35] L. Calderaro, G. Foletto, D. Dequal, P. Villoresi, and G. Vallone, “Direct reconstruction of the quantum density matrix by strong measurements,” Phys. Rev. Lett. **121**, 230501 (2018).
|
| 467 |
+
|
| 468 |
+
[36] While the 780 nm beam is not only synchronized with, but also coherent with, the 1560 nm beam, this coherence is not necessary for the time-gate measurement.
|
| 469 |
+
|
| 470 |
+
[37] S. Zhang, Y. Zhou, Y. Mei, K. Liao, Y.-L. Wen, J. Li, X.-D. Zhang, S. Du, H. Yan, and S.-L. Zhu, “$\delta$-quench measurement of a pure quantum-state wave function,” Phys. Rev. Lett. **123**, 190402 (2019).
|
| 471 |
+
|
| 472 |
+
[38] J. E. Rothenberg and D. Grischkowsky, “Measurement of optical phase with subpicosecond resolution by time-domain interferometry,” Opt. Lett. **12**, 99–101 (1987).
|
samples/texts_merged/2251660.md
ADDED
|
@@ -0,0 +1,389 @@
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
GEOMETRIC GRAPH MANIFOLDS
|
| 5 |
+
WITH NON-NEGATIVE SCALAR CURVATURE
|
| 6 |
+
|
| 7 |
+
LUIS A. FLORIT AND WOLFGANG ZILLER
|
| 8 |
+
|
| 9 |
+
**ABSTRACT.** We classify $n$-dimensional geometric graph manifolds with nonnegative scalar curvature by first showing that if $n > 3$, the universal cover splits off a codimension 3 Euclidean factor. We then proceed with the classification of the 3-dimensional case, where the condition is equivalent to the eigenvalues of the Ricci tensor being $(\lambda, \lambda, 0)$ with $\lambda \ge 0$. In this case we prove that such a manifold is either a lens space or a prism manifold with a very rigid metric. This allows us to also classify the moduli space of such metrics: it has infinitely many connected components for lens spaces, while it is connected for prism manifolds.
|
| 10 |
+
|
| 11 |
+
A geometric graph manifold $M^n$ is a Riemannian manifold which is the union of twisted cylinders $C^n = (L^2 \times \mathbb{R}^{n-2})/G$, where $G \subset \text{Iso}(L^2 \times \mathbb{R}^{n-2})$ acts properly discontinuously and freely on the Riemannian product of a connected surface $L^2$ with the Euclidean space $\mathbb{R}^{n-2}$. In addition, the boundary of each twisted cylinder is a union of compact totally geodesic flat hypersurfaces, each of which is isometric to a boundary component of another twisted cylinder. In their simplest form, as first discussed in [Gr], such manifolds are unions of building blocks of the form $L^2 \times S^1$, where $L^2$ is a surface, not diffeomorphic to a disk or an annulus, whose boundary is a union of closed geodesics. The building blocks are glued along common totally geodesic flat boundary tori by switching the role of the circles. Such graph manifolds have been studied frequently in the context of manifolds with nonpositive sectional curvature. In fact, they were the first examples of such metrics with geometric rank one. Furthermore, in [Sch] it was shown that if a complete 3-manifold with nonpositive sectional curvature has the fundamental group of a graph manifold, then it is isometric to a geometric graph manifold.
|
| 12 |
+
|
| 13 |
+
One of the most basic features of geometric graph manifolds is that their curvature tensor has nullity space of dimension at least $n-2$ everywhere. This property by itself already guarantees that each finite volume connected component of the set of non-flat points is a twisted cylinder, and under some further weak assumptions, the manifold is isometric to a geometric graph manifold in the above sense; see [FZ2]. See also [BKV] and references therein for extensive literature on manifolds with nullity equal to $n-2$.
|
| 14 |
+
|
| 15 |
+
In dimension 3, the nullity condition is equivalent to saying that the eigenvalues of the Ricci tensor are $(\lambda, \lambda, 0)$, or to the assumption, called cvc(0), that every tangent vector is contained in a flat plane; see [SW]. Notice that this is in fact the only choice for the eigenvalues of the Ricci tensor where the metric is allowed to be locally reducible.
|
| 16 |
+
|
| 17 |
+
The first author was supported by CNPq-Brazil, and the second author by a grant from the National Science Foundation, by IMPA, and CAPES-Brazil.
|
| 18 |
+
---PAGE_BREAK---
|
| 19 |
+
|
| 20 |
+
This nullity condition also arose in a different context. In [FZ1] it was shown that a compact immersed submanifold $M^n \subset \mathbb{R}^{n+2}$ with nonnegative sectional curvature is either diffeomorphic to the sphere $\mathbb{S}^n$, isometric to a product of two convex hypersurfaces $\mathbb{S}^k \times \mathbb{S}^{n-k} \subset \mathbb{R}^{k+1} \times \mathbb{R}^{n-k+1}$, isometric to $(\mathbb{S}^{n-1} \times \mathbb{R})/\mathbb{Z}$, or diffeomorphic to a lens space $\mathbb{S}^3/\mathbb{Z}_p \subset \mathbb{R}^5$. In the latter case it was shown that each connected component of the set of nonflat points is a twisted cylinder. The present paper arose out of an attempt to understand the intrinsic geometry of such metrics. We thus want to classify all compact geometric graph manifolds with nonnegative sectional curvature, or equivalently, with nonnegative scalar curvature. Notice that under this curvature assumption compactness is equivalent to finite volume.
|
| 21 |
+
|
| 22 |
+
We first show that their study can be reduced to dimension three.
|
| 23 |
+
|
| 24 |
+
**THEOREM A.** Let $M^n$, $n \ge 4$, be a compact geometric graph manifold with nonnegative scalar curvature. Then, the universal cover $\tilde{M}^n$ of $M^n$ splits off an $(n-3)$-dimensional Euclidean factor isometrically, i.e., $\tilde{M}^n = N^3 \times \mathbb{R}^{n-3}$. Moreover, either $M^n$ is flat, or $N^3 = \mathbb{S}^2 \times \mathbb{R}$ splits isometrically, or $N^3 = \mathbb{S}^3$ with a geometric graph manifold metric.
|
| 25 |
+
|
| 26 |
+
By the splitting theorem, the curvature condition by itself already implies that $\tilde{M}^n$ is isometric to a product $Q^k \times \mathbb{R}^{n-k}$ with $Q^k$ compact and simply connected, but it is surprisingly delicate to show that $k \le 3$.
|
| 27 |
+
|
| 28 |
+
In dimension three, the simplest nontrivial example of a geometric graph manifold with nonnegative scalar curvature is the usual description of $\mathbb{S}^3$ as the union of two solid tori $D^2 \times S^1$ endowed with a product metric, see Figure 1. If this product metric is invariant under $SO(2) \times SO(2)$, we can also take a quotient by the cyclic group generated by $R_p \times R_p^q$ to obtain a geometric graph manifold metric on any lens space $L(p,q) = \mathbb{S}^3/\mathbb{Z}_p$. Here $R_p \in SO(2)$ denotes the rotation of angle $2\pi/p$.
|
| 29 |
+
|
| 30 |
+
**FIGURE 1.** $\mathbb{S}^3 \subset \mathbb{R}^5$ with nonnegative curvature
|
| 31 |
+
|
| 32 |
+
There is a further family whose members also admit geometric graph manifold metrics with nonnegative scalar curvature: the so-called *prism manifolds* $P(m,n) := \mathbb{S}^3/G_{m,n}$, which depend on two relatively prime positive integers $m, n$. Such a metric on $P(m,n)$ can be constructed as a quotient of the metric on $\mathbb{S}^3$ as above by the group $G_{m,n}$ generated by $R_{2n} \times R_{2n}^{-1}$ and $(R_m \times R_m) \circ J$, where $J$ is a fixed point free isometry switching the two
|
| 33 |
+
---PAGE_BREAK---
|
| 34 |
+
|
| 35 |
+
isometric solid tori. Topologically $P(m, n)$ is thus a single solid torus whose boundary torus has been identified to a Klein bottle. Its fundamental group $G_{m,n}$ is abelian if and only if $m=1$, and in fact $P(1, n)$ is diffeomorphic to $L(4n, 2n-1)$; see Section 1. Unlike in the case of lens spaces, the diffeomorphism type of a prism manifold is determined by its fundamental group.
|
| 36 |
+
|
| 37 |
+
Our main purpose is to show that these are the only three dimensional compact geometric graph manifolds with nonnegative scalar curvature, and to classify the moduli space of such metrics. We will see that the twisted cylinders in this case are of the form $C = (D \times \mathbb{R})/\mathbb{Z}$, where $D$ is the interior of a 2-disk of nonnegative Gaussian curvature, whose boundary $\partial D$ is a closed geodesic along which the curvature vanishes to infinite order. We fix once and for all such a metric $\langle\cdot,\cdot\rangle_0$ on a 2-disc $D_0$, whose boundary has length 1 and which is rotationally symmetric. We call a geometric graph manifold metric on a 3-manifold *standard* if the generating disk $D$ of a twisted cylinder $C$ as above is isometric to the interior of $D_0$ with metric $r^2\langle\cdot,\cdot\rangle_0$ for some constant $r > 0$. Observe that the projection of $\partial D \times \{s\}$ for $s \in \mathbb{R}$ is a parallel foliation by closed geodesics of the flat totally geodesic 2-torus $(\partial D \times \mathbb{R})/\mathbb{Z}$.
|
| 38 |
+
|
| 39 |
+
We provide the following classification:
|
| 40 |
+
|
| 41 |
+
**THEOREM B.** Let $M^3$ be a compact geometric graph manifold with nonnegative scalar curvature and irreducible universal cover. Then $M^3$ is diffeomorphic to a lens space or a prism manifold. Moreover, we have either:
|
| 42 |
+
|
| 43 |
+
a) $M^3$ is a lens space and $M^3 = C_1 \sqcup T^2 \sqcup C_2$, i.e., $M^3$ is isometrically the union of two twisted cylinders $C_i = (D_i \times \mathbb{R})/\mathbb{Z}$ over disks $D_i$ glued together along their common totally geodesic flat torus boundary $T^2$. Conversely, any flat torus endowed with two parallel foliations by closed geodesics uniquely defines a standard geometric graph manifold metric on a lens space;
|
| 44 |
+
|
| 45 |
+
b) $M^3$ is a prism manifold and $M^3 = C \sqcup K^2$, i.e., $M^3$ is isometrically the closure of a single twisted cylinder $C = (D \times \mathbb{R})/\mathbb{Z}$ over a disk $D$, whose totally geodesic flat interior boundary is isometric to a rectangular torus $T^2$, and $K^2 = T^2/\mathbb{Z}_2$ is a Klein bottle. Conversely, any rectangular flat torus endowed with a parallel foliation by closed geodesics uniquely defines a standard geometric graph manifold metric on a prism manifold.
|
| 46 |
+
|
| 47 |
+
In addition, any geometric graph manifold metric with nonnegative scalar curvature on $M^3$ is isotopic, through geometric graph manifold metrics with nonnegative scalar curvature, to a standard one.
|
| 48 |
+
|
| 49 |
+
We call $T^2$, respectively $K^2$, the core of the geometric graph manifold and will see that it is in fact an isometry invariant.
|
| 50 |
+
|
| 51 |
+
Observe that a twisted cylinder with generating surface a disc is diffeomorphic to a solid torus. In topology one constructs a lens space by gluing two such solid tori along their boundary by an element of $\text{GL}(2, \mathbb{Z})$. In order to make this gluing into an isometry, we twist the local product structure. An alternate way to view this construction is as follows. Start with an arbitrary twisted cylinder $C_1$ and regard the flat boundary torus as the
|
| 52 |
+
---PAGE_BREAK---
|
| 53 |
+
|
| 54 |
+
quotient of $\mathbb{R}^2$ with respect to a lattice. We can then choose a second twisted cylinder $C_2$ whose boundary is a different fundamental domain of the same lattice, and hence the two twisted cylinders can be glued with an isometry of the boundary tori. We note that in principle, a twisted cylinder can also be flat, but we will see that in that case it can be absorbed by one of the nonflat twisted cylinders.
|
| 55 |
+
|
| 56 |
+
The diffeomorphism type of $M^3$ in Theorem B is determined by the (algebraic) oriented slope between the parallel foliations of $T^2$ by closed geodesics. As we will see, this is also an isometry invariant $S(M^3, \mathbf{o}) \in \mathbb{Q}$ of $M^3$ which we call its *slope*, once orientations **o** of $M^3$ and its core are chosen; see Section 3 for the precise definition.
|
| 57 |
+
|
| 58 |
+
**THEOREM C.** Let $M^3$ be a compact geometric graph manifold of nonnegative scalar curvature with irreducible universal cover and slope $S(M^3, \mathbf{o}) = q/p \in \mathbb{Q}$. Then, in case (a) of Theorem B, $M^3$ is diffeomorphic to the lens space $L(p, q)$, while in case (b) it is diffeomorphic to the prism manifold $P(q, p)$.
|
| 59 |
+
|
| 60 |
+
This result can be used to classify the moduli space of geometric graph manifold metrics. We first deform any such metric in Theorem B to be standard, preserving the metric on the torus $T^2$, and then deform $T^2$ to be the unit square torus $S^1 \times S^1$, while preserving also the sign of the scalar curvature in the process. In case (a), we can also make one of the foliations the foliation by the circles $S^1 \times \{w\}$, $w \in S^1$. The metric is then determined by the remaining parallel foliation of the unit square by closed geodesics. Since the diffeomorphism type of a lens space $L(p,q)$ is determined by $\pm q^{\pm 1} \mod p$, we conclude:
|
| 61 |
+
|
| 62 |
+
**COROLLARY.** The moduli space of geometric graph manifold metrics with nonnegative scalar curvature on a lens space has infinitely many connected components, whereas on a prism manifold $P(q,p)$ with $q > 1$ it is connected.
|
| 63 |
+
|
| 64 |
+
We will see that the moduli space for the lens space $L(4p, 2p-1)$ has a special component arising from the fact that it is diffeomorphic to $P(1, p)$.
|
| 65 |
+
|
| 66 |
+
Finally, we apply our results, combined with those in [FZ2], to the class of compact 3-dimensional manifolds $M^3$ with Ricci eigenvalues $(\lambda, \lambda, 0)$ for $\lambda \ge 0$. Theorem A in [FZ2] implies that any connected component of the set $M'$ of non-flat points of $M^3$ is isometric to a twisted cylinder. The basic geometric feature of $M'$ is that it admits a parallel foliation by complete geodesics tangent to the kernel of the Ricci tensor. If there exists a larger open set $M'' \supset M'$ which admits a parallel foliation by complete geodesics extending that of $M'$, then any connected component of $M''$ is still isometric to a twisted cylinder. Such an extension $M''$ is called *full* if it is dense in $M^3$ and if its collection of twisted cylinders is locally finite. From the second theorem in [FZ2] we thus conclude the following.
|
| 67 |
+
|
| 68 |
+
**COROLLARY.** Let $M^3$ be a compact Riemannian manifold with Ricci eigenvalues $(\lambda, \lambda, 0)$ for some function $\lambda \ge 0$. Then $M^3$ is isometric to one of the manifolds in Theorem B if and only if its set of nonflat points admits a full extension.
|
| 69 |
+
---PAGE_BREAK---
|
| 70 |
+
|
| 71 |
+
This applies of course if $M'$ is already dense, as long as it satisfies the mild regularity assumption that its collection of twisted cylinders is locally finite. Although in [FZ2] we built an explicit example where $M'$ admits no full extension, we conjecture that it always admits a full extension when $\lambda \ge 0$.
|
| 72 |
+
|
| 73 |
+
The paper is organized as follows. In Section 1 we recall some facts about geometric graph manifolds. In Section 2 we prove Theorem A by showing that the manifold is a union of one or two twisted cylinders over disks, while in Section 3 we classify their metrics.
|
| 74 |
+
|
| 75 |
+
# 1. PRELIMINARIES
|
| 76 |
+
|
| 77 |
+
Let us begin with the definition of twisted cylinders and geometric graph manifolds.
|
| 78 |
+
|
| 79 |
+
Consider the cylinder $L^2 \times \mathbb{R}^{n-2}$ with its natural product metric, where $L^2$ is a connected surface. We call the quotient
|
| 80 |
+
|
| 81 |
+
$$C^n = (L^2 \times \mathbb{R}^{n-2})/G$$
|
| 82 |
+
|
| 83 |
+
a *twisted cylinder*, where $G \subset \text{Iso}(L^2 \times \mathbb{R}^{n-2})$ acts properly discontinuously and freely on $L^2 \times \mathbb{R}^{n-2}$, and $L^2$ the *generating surface* of $C^n$. We also say that $C^n$ is a twisted cylinder over $L^2$. The Euclidean factor induces a foliation $\Gamma$ on $C^n$ whose leaves will be called the *nullity leaves* of $C^n$. These leaves are complete, flat, totally geodesic, and locally parallel, of codimension 2. Such twisted cylinders are the building blocks of geometric graph manifolds:
|
| 84 |
+
|
| 85 |
+
*Definition.* A complete connected Riemannian manifold $M^n$, $n \ge 3$, is called a *geometric graph manifold* if $M^n$ is a locally finite disjoint union of twisted cylinders $C_i$ glued together along disjoint compact connected totally geodesic flat hypersurfaces $H_\lambda$ of $M^n$. That is,
|
| 86 |
+
|
| 87 |
+
$$M^n \setminus W = \bigsqcup_\lambda H_\lambda, \quad \text{where} \quad W := \bigsqcup_i C_i.$$
|
| 88 |
+
|
| 89 |
+
See Figure 2 for a typical (4-dimensional) example, where each twisted cylinder is just the isometric product $L^2 \times S^1 \times S^1$ of a surface $L^2$ and a flat torus.
|
| 90 |
+
|
| 91 |
+
**FIGURE 2.** An irreducible 4-dimensional geometric graph manifold with three cylinders and two (finite volume) ends
|
| 92 |
+
---PAGE_BREAK---
|
| 93 |
+
|
| 94 |
+
We first make some general remarks about this definition.
|
| 95 |
+
|
| 96 |
+
1. We allow the possibility that the hypersurfaces $H_\lambda$ are one-sided, even when $M^n$ is orientable.
|
| 97 |
+
|
| 98 |
+
2. The local finiteness condition is equivalent to the assumption that each $H_\lambda$ is a common boundary component of two twisted cylinders $C_i$ and $C_j$, which may even be globally the same. When $H_\lambda$ is one-sided it is a boundary component of only one twisted cylinder.
|
| 99 |
+
|
| 100 |
+
3. As shown in [FZ2], the foliations $\Gamma_i$ and $\Gamma_j$ of $C_i$ and $C_j$ induce two totally geodesic foliations on $H_\lambda$. When they agree, $C_i$, $C_j$ and $H_\lambda$ can be considered as a single twisted cylinder. Thus, without loss of generality, we assume from now on that they are different. This implies that the generating surface $L^2$ of each twisted cylinder $C$ is the interior of a surface with boundary consisting of complete geodesics along which the Gaussian curvature vanishes to infinite order. We refer to these geodesics as boundary geodesics of $L^2$ itself.
|
| 101 |
+
|
| 102 |
+
4. These boundary geodesics of $L^2$ do not have to be closed, even when $C$ is compact.
|
| 103 |
+
|
| 104 |
+
5. The complement of $W$ is contained in the set of flat points of $M^n$, but we do not require that the generating surfaces have nonvanishing Gaussian curvature.
|
| 105 |
+
|
| 106 |
+
6. In principle, we could ask for the hypersurfaces $H_\lambda$ to be complete instead of compact. However, compactness follows when $M^n$ has finite volume; see [FZ2].
|
| 107 |
+
|
| 108 |
+
7. If none of the generating surfaces in a geometric graph manifold are discs, it also admits a metric with nonpositive sectional curvature. On the other hand, if all of the generating surfaces are discs, we will see that it admits a metric with nonnegative sectional curvature.
|
| 109 |
+
|
| 110 |
+
In [FZ2] we gave a characterization of geometric graph manifolds with finite volume in terms of the nullity of the curvature tensor. But since a complete noncompact manifold with nonnegative Ricci curvature has at least linear volume growth by [Ya], and hence infinite volume, we will assume from now on that $M^n$ is compact.
|
| 111 |
+
|
| 112 |
+
We now recall some properties of three dimensional lens spaces and prism manifolds that will be needed later on; see e.g. [ST, HK, Ru, Or] for details.
|
| 113 |
+
|
| 114 |
+
One way of defining a lens space is as the quotient $L(p, q) = \mathbb{S}^3/\mathbb{Z}_p$, where $g \in \mathbb{Z}_p \subset S^1 \subset \mathbb{C}$ acts as $g \cdot (z, w) = (gz, g^q w)$ for $(z, w) \in \mathbb{S}^3 \subset \mathbb{R}^4 = \mathbb{C}^2$ and coprime integers $p, q$ with $p \neq 0$. It is a well known fact that two lens spaces $L(p, q)$ and $L(p, q')$ are diffeomorphic if and only if $q' = \pm q^{\pm 1} \mod p$. An alternative description we will use is as the union of two solid tori $D_i \times S^1$, with boundaries identified in such a way that the class of $\partial D_1 \times \{p_0\}$ in $\pi_1(\partial D_1 \times S^1)$ is taken to $(q, p) \in \mathbb{Z} \oplus \mathbb{Z} = \pi_1(\partial D_2 \times S^1)$ with respect to its natural basis.
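As an aside, the diffeomorphism criterion $q' = \pm q^{\pm 1} \mod p$ quoted above can be checked mechanically; the following is a small illustrative sketch (the function name is ours, not from the paper):

```python
from math import gcd

def lens_diffeomorphic(p: int, q1: int, q2: int) -> bool:
    """True iff L(p, q1) and L(p, q2) are diffeomorphic,
    i.e. q2 is congruent to +-q1 or +-q1^{-1} mod p."""
    assert p > 1 and gcd(p, q1) == 1 and gcd(p, q2) == 1
    q1_inv = pow(q1, -1, p)  # modular inverse exists since gcd(q1, p) = 1
    return q2 % p in {q1 % p, -q1 % p, q1_inv, -q1_inv % p}
```

For instance, $L(7,2)$ and $L(7,4)$ are diffeomorphic since $2 \cdot 4 \equiv 1 \pmod 7$, while $L(7,1)$ and $L(7,2)$ are not.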
|
| 115 |
+
|
| 116 |
+
A prism manifold can also be described in two different ways. The first one is to define it as the quotient $\mathbb{S}^3/(H_1 \times H_2) = H_1\backslash\mathbb{S}^3/H_2$, where $H_1 \subset \text{Sp}(1)$ is a cyclic group acting as left translations on $\mathbb{S}^3 \simeq \text{Sp}(1)$ and $H_2 \subset \text{Sp}(1)$ a binary dihedral group acting as right translations. A more useful description for our purposes is as the union of a solid torus $C = D \times S^1$ with the 3-manifold
|
| 117 |
+
|
| 118 |
+
$$ (1.1) \qquad N^3 = (S^1 \times S^1 \times I)/\langle (j, -Id) \rangle, \quad \text{where } j(z, w) = (-z, \bar{w}). $$
|
| 119 |
+
---PAGE_BREAK---
|
| 120 |
+
|
| 121 |
+
Notice that $N^3$ is a bundle over the Klein bottle $K = T^2/\langle j \rangle$ with fiber an interval $I = [-\epsilon, \epsilon]$ and orientable total space. Thus $\partial N^3$ is the torus $S^1 \times S^1$, and we glue the two boundaries via a diffeomorphism. Here $\pi_1(N^3) = \pi_1(K) = \langle a, b \mid bab^{-1} = a^{-1}\rangle$ and $\pi_1(\partial N^3) = \mathbb{Z} \oplus \mathbb{Z}$, with generators $a, b^2$, where $a$ represents the first circle and $b^2$ the second one. Then $P(m, n)$ is defined as gluing $\partial C$ to $\partial N^3$ by sending $\partial D \times \{p_0\}$ to $a^m b^{2n} \in \pi_1(\partial N^3)$. We can again assume that $m, n > 0$ with $\gcd(m, n) = 1$. Furthermore,
|
| 122 |
+
|
| 123 |
+
$$\pi_1(P(m,n)) = G_{m,n} = \langle a, b \mid bab^{-1} = a^{-1},\ a^m b^{2n} = 1\rangle.$$
|
| 124 |
+
|
| 125 |
+
This group has order $4mn$ and its abelianization has order $4n$. Thus the fundamental group determines and is determined by the ordered pair $(m, n)$. In addition, $G_{m,n}$ is abelian if and only if $m=1$ in which case $P(m,n)$ is diffeomorphic to the lens space $L(4n, 2n-1)$. Unlike in the case of lens spaces, the diffeomorphism type of $P(m,n)$ is uniquely determined by $(m,n)$. Prism manifolds can also be characterized as the 3-dimensional spherical space forms which contain a Klein bottle, which for $m>1$ is also incompressible. Observe in addition that in $N^3$ we can shrink the length of the interval $I$ in (1.1) down to 0, and hence $P(m,n)$ can also be viewed as a single solid torus whose rectangular flat torus boundary has been identified to a Klein bottle, as in part (b) of Theorem B.
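The arithmetic facts quoted here (abelianization of order $4n$, and the coprimality implicit in $P(1,n)$ being diffeomorphic to $L(4n, 2n-1)$) are elementary to verify; a small sanity-check sketch (ours, based on the presentation above):

```python
from math import gcd

def abelianization_order(m: int, n: int) -> int:
    """Order of the abelianization of G_{m,n} = <a, b | bab^-1 = a^-1, a^m b^{2n} = 1>.
    Abelianized, the relations read 2a = 0 and m*a + 2n*b = 0, so the order is
    |det [[2, 0], [m, 2n]]| = 4n (determinant of the relation matrix)."""
    return abs(2 * (2 * n) - 0 * m)

for m, n in [(1, 3), (3, 5), (7, 4)]:
    assert gcd(m, n) == 1
    assert abelianization_order(m, n) == 4 * n            # abelianization has order 4n
    assert (4 * m * n) % abelianization_order(m, n) == 0  # |G_{m,n}| = 4mn is a multiple

# For m = 1 the group is abelian of order 4n, and 2n - 1 is coprime to 4n,
# consistent with P(1, n) being the lens space L(4n, 2n - 1):
assert all(gcd(2 * n - 1, 4 * n) == 1 for n in range(1, 50))
```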
|
| 126 |
+
|
| 127 |
+
# 2. A DICHOTOMY AND THE PROOF OF THEOREM A
|
| 128 |
+
|
| 129 |
+
In this section we provide the general structure of geometric graph manifolds with nonnegative scalar curvature by showing a dichotomy: they are built from either one or two twisted cylinders over 2-disks. This will then be used to prove Theorem A.
|
| 130 |
+
|
| 131 |
+
Let $M^n$ be a compact nonflat geometric graph manifold with nonnegative scalar curvature. We will furthermore assume that $M^n$ is not itself a twisted cylinder, since in that case the universal cover of $M^n$ is isometric to $\mathbb{S}^2 \times \mathbb{R}^{n-2}$, where $\mathbb{S}^2$ is endowed with a metric of nonnegative Gaussian curvature. Recall that we also assume that the nullity foliations of two twisted cylinders glued along a hypersurface $H$ induce two different foliations on $H$, which in turn implies that the Gaussian curvatures of the two generating surfaces vanish to infinite order along their boundary geodesics.
|
| 132 |
+
|
| 133 |
+
By assumption, there exists a collection of compact flat totally geodesic hypersurfaces in $M^n$ whose complement is a disjoint union of (open) twisted cylinders $C_i$. Let $C = (L^2 \times \mathbb{R}^{n-2})/G$ be one of these cylinders whose boundary in $M^n$ is a disjoint union of compact flat totally geodesic hypersurfaces. There is also an *interior boundary* $\partial_i C$ of $C$, which we also denote for convenience as $\partial C$ by abuse of notation. This boundary can be defined as the set of equivalence classes of Cauchy sequences $\{p_n\} \subset C$ in the interior distance function $d_C$ of $C$, where $\{p_n\} \sim \{p'_n\}$ if $\lim_{n\to\infty} d_C(p_n, p'_n) = 0$. Since $M^n$ is compact, such a Cauchy sequence $\{p_n\}$ converges in $M^n$, and we have a natural map $\sigma: \partial C \to M$ that sends $\{p_n\}$ to $\lim_{n\to\infty} p_n \in M^n$. This map is, on each component of $\partial C$, either an isometry or a locally isometric two-fold covering map since $H = \sigma(\partial C)$ consists of disjoint smooth hypersurfaces which are two-sided in the former case, and one-sided in the latter. Therefore, $\partial C$ is smooth as well and $C \sqcup \partial C$ is a closed twisted cylinder with totally geodesic flat compact interior boundary, that by abuse of notation we still denote
|
| 134 |
+
---PAGE_BREAK---
|
| 135 |
+
|
| 136 |
+
by $C$. Similarly, $L^2$ is a smooth surface with geodesic interior boundary components along which the Gaussian curvature vanishes to infinite order.
|
| 137 |
+
|
| 138 |
+
We first determine the generating surfaces of the twisted cylinders:
|
| 139 |
+
|
| 140 |
+
**PROPOSITION 2.1.** Let $C = (L^2 \times \mathbb{R}^{n-2})/G$ be a compact twisted cylinder with nonnegative curvature as above. Then one of the following holds:
|
| 141 |
+
|
| 142 |
+
i) The surface $L^2$ is isometric to a 2-disk $D$ with nonnegative Gaussian curvature, whose boundary is a closed geodesic along which the curvature of $D$ vanishes to infinite order.
|
| 143 |
+
|
| 144 |
+
ii) $C$ is flat and there exists a compact flat hypersurface $S$ such that $C$ is isometric to either $[-s_0, s_0] \times S$, or to $([-s_0, s_0] \times S)/\{(s,x) \sim (-s, \tau(x))\}$ for some involution $\tau$ of $S$.
|
| 145 |
+
|
| 146 |
+
*Proof*. Since $C$ is compact and the boundary is totally geodesic, we can apply the soul theorem to $C$, see [CG] Theorem 1.9 and [Pet] Theorem 4.1. Thus there exists a compact totally geodesic submanifold $S \subset C$ and $C$ is diffeomorphic to the disc bundle $D_\epsilon(S) = \{v \in T_pC \mid v \perp T_pS, |v| \le \epsilon\}$ for some $\epsilon > 0$. Recall that $S$ is constructed as follows. Let $C^s = \{p \in C \mid d(p, \partial C) \ge s\}$. Then $C^s$ is convex, and the set of points $C^{s_0}$ at maximal distance $s_0$ from $\partial C$ is a totally geodesic submanifold, possibly with boundary. Repeating the process if necessary, one obtains the soul $S$. In our situation, let $q = [(p,v)] \in C^{s_0}$, and $\gamma$ a minimal geodesic from $q$ to $\partial C$. Since it meets $\partial C = ((\partial L^2) \times \mathbb{R}^{n-2})/G$ perpendicularly, we have $\gamma = [(\alpha, v)]$ where $\alpha$ is a geodesic in the leaf $L_v^2 = [L^2 \times \{v\}]$ meeting $\partial L_v^2$ perpendicularly. So, for every $w \in \mathbb{R}^{n-2}$, the geodesic $[(\alpha, w)]$ is also minimizing, $[(p,w)] \in C^{s_0}$ lies at maximal distance $s_0$ to $\partial C$, and hence $C^{s_0} = (T \times \mathbb{R}^{n-2})/G$ where $T \subset L^2$ is a segment, a complete geodesic or a point. Therefore $S = (T' \times \mathbb{R}^{n-2})/G$, where $T'$ is a point or a complete geodesic (possibly closed).
|
| 147 |
+
|
| 148 |
+
We first consider the case where $T'$ is a point and hence the soul is a single nullity leaf. Recall that, in order to show that $C$ is diffeomorphic to the disc bundle $D_\epsilon(S)$, one constructs a gradient-like vector field $X$ by observing that the distance function to the soul has no critical points. In our case, the initial vectors of all minimal geodesics from $[(p,v)] \in C$ to $S$ lie in the leaf $L_v^2$, and hence we can construct $X$ such that $X$ is tangent to $L_v^2$ for all $v$. The diffeomorphism between $C$ and $D_\epsilon(S)$ is obtained via the flow of $X$, which now preserves the leaves $L_v^2$, and therefore $L^2$ is diffeomorphic to a disc.
|
| 149 |
+
|
| 150 |
+
If $T'$ is a complete geodesic, the soul $S$ is flat and has codimension 1. If $X$ is a unit vector field in $L^2$ along $T'$ and orthogonal to $T'$, it is necessarily parallel, and its image under the normal exponential map of $S$ determines a flat surface by Perelman's solution to the soul conjecture, see [Pe]. This surface lies in $L^2$, and every point $q \in L^2$ is contained in such a surface since we can connect $q$ to $S$ by a minimal geodesic, which is contained in some $L_v$, and is orthogonal to $T'$. Thus $L^2$ is flat, and either $L^2 = T' \times [-s_0, s_0]$, in which case $C = [-s_0, s_0] \times S$, or $L^2$ is a Moebius strip, in which case $C = ([-s_0, s_0] \times S)/\{(s,x) \sim (-s, \tau(x))\}$ for some involution $\tau$ of $S$. □
*Remark 2.2.* A flat twisted cylinder as in (ii) can be absorbed by any cylinder $C'$ attached to one of its boundary components by either attaching $[-s_0, s_0]$ to the generating surface of $C'$ in the first case, or attaching $(0, s_0]$ in the second, in which case $\{0\} \times (S/\tau) = S/\tau$ becomes a one sided boundary component of $C'$. We will therefore assume from now on that the generating surfaces of all twisted cylinders are 2-discs.
*Remark 2.3.* The properties at the boundary $\gamma$ of a disk $D$ as in Proposition 2.1 are easily seen to be equivalent to the fact that the natural gluing $D \sqcup (\gamma \times (-\epsilon, 0])$, $\gamma \cong \gamma \times \{0\}$, is smooth when we consider on $\gamma \times (-\epsilon, 0]$ the flat product metric. In fact, in Fermi coordinates $(s,t)$, $s \ge 0$, along $\gamma$, the metric is given by $ds^2 + f(s,t)\,dt^2$. The fact that $\gamma$ is an (unparametrized) geodesic is equivalent to $\partial_s f(0,t) = 0$ for all $t$, while the curvature condition is equivalent to $\partial_s^k f(0,t) = 0$ for all $t$ and $k \ge 2$. Therefore, $f(s,t)$ can be extended smoothly as $f(0,t)$ for $-\epsilon < s < 0$, which gives the smooth isometric attachment of the flat cylinder $\gamma \times (-\epsilon, 0]$ to $D$.
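To spell out the curvature step, write $f = \rho^2$ (a shorthand not used elsewhere in the text). The Gaussian curvature of $ds^2 + \rho(s,t)^2\,dt^2$ is $K = -\rho_{ss}/\rho$, so along $\gamma$, where $K$ vanishes to infinite order, differentiating $\rho_{ss} = -K\rho$ in $s$ gives

$$ \partial_s^{k+2}\rho\,(0,t) = -\sum_{j=0}^{k} \binom{k}{j}\, \partial_s^{j}K\,(0,t)\; \partial_s^{k-j}\rho\,(0,t) = 0 \quad \text{for all } k \ge 0, $$

since every term contains an $s$-derivative of $K$ at $s = 0$. Together with the geodesic condition $\rho_s(0,t) = 0$, all $s$-derivatives of $\rho$, and hence of $f$, of order at least one vanish along $\gamma$, so the constant extension of $f$ is smooth.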
As a consequence of Proposition 2.1, and the assumption that there are no flat cylinders, $\partial C = (\gamma \times \mathbb{R}^{n-2})/G$ is connected, and so is $H = \sigma(\partial C)$. In particular, $M^n$ contains at most two twisted cylinders with nonnegative curvature glued along $H$. We call such a connected compact flat totally geodesic hypersurface $H$ a core of $M^n$. We conclude:
**COROLLARY 2.4.** If $M^n$ is not flat and not itself a twisted cylinder, then $M^n = W \sqcup H$ with core $H$, and either:
a) $H$ is two-sided, $\sigma$ is an isometry, and $W = C \sqcup C'$ is the disjoint union of two open nonflat twisted cylinders as above attached via an isometry $\partial C \simeq H \simeq \partial C'$; or
b) $H$ is one-sided, $\sigma$ is a locally isometric two-fold covering map, $W = C$ is a single open nonflat twisted cylinder as above, and $M^n = C \sqcup H = C \sqcup (\partial C / \mathbb{Z}_2)$.
Furthermore, in case (a), if $H' \subset M^n$ is an embedded compact flat totally geodesic hypersurface then there exists an isometric product $H \times [0,a] \subset M^n$, with $H = H \times \{0\}$ and $H' = H \times \{a\}$. In particular, any such $H'$ is a core of $M^n$, and hence the core is unique up to isometry. On the other hand, in case (b) the core $H$ is already unique.
*Proof.* We only need to prove the uniqueness of the cores. In order to do this, any limit of nullity leaves of $C$ at its boundary in $M^n$ will be called a boundary nullity leaf, or BNL for short.
For case (a), first assume that $H \cap H' \neq \emptyset$ and take $p \in H \cap H'$. Then a BNL of $C$ in $H$ at $p$ is contained in $H'$. Indeed, if not, the product structure of the universal cover $\pi: \tilde{C} = L^2 \times \mathbb{R}^{n-2} \to C$, together with the fact that $H'$ is flat, totally geodesic, complete, and intersects $H$ transversely, would imply that $L^2$, and hence $C$, is flat, since by dimension reasons the projection of $\pi^{-1}(H' \cap C)$ onto $L^2$ would be a surjective submersion. Analogously, the (distinct) BNL of $C'$ at $p$ lies in $H'$, and since $H$ is the unique hypersurface containing both BNL's, we have that $H = H'$. If, on the other hand, $H \cap H' = \emptyset$, we can assume $H' \subset C = (L^2 \times \mathbb{R}^{n-2})/G$. Again by the product structure of $\tilde{C}$ and the fact that $H'$ is embedded, we see that $H' = (\gamma' \times \mathbb{R}^{n-2})/G'$, where $\gamma' \subset L^2$ is a simple closed geodesic and $G' \subset G$ is the subgroup preserving $\gamma'$. Since the boundary
$\gamma$ of $L^2$ is also a closed geodesic and $L^2$ is a 2-disk with nonnegative Gaussian curvature, by Gauss-Bonnet there is a closed interval $I = [0, a] \subset \mathbb{R}$ such that the flat strip $\gamma \times I$ is contained in $L^2$, with $\gamma = \gamma \times \{0\}$ and $\gamma' = \gamma \times \{a\}$. Thus $G'$ acts trivially on $I$, which implies our claim.
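The Gauss-Bonnet step can be made explicit. If $A \subset L^2$ denotes the compact annulus bounded by the geodesics $\gamma$ and $\gamma'$ (our notation for it), then

$$ 0 = 2\pi\,\chi(A) = \int_A K\,dA + \int_{\partial A} \kappa_g\,ds = \int_A K\,dA, $$

since $\chi(A) = 0$ and $\kappa_g \equiv 0$ along the geodesic boundary. As $K \ge 0$, this forces $K \equiv 0$ on $A$, and a flat annulus with geodesic boundary is isometric to the product $\gamma \times [0, a]$.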
In case (b) we have that $H \cap H' = \emptyset$ as in case (a) since at any point $p \in H$ we have two different BNL's at $\sigma^{-1}(p)$. Hence as before $H' = (\gamma' \times \mathbb{R}^{n-2})/G' \subset C$ and $H \times [0, a] \subset M^n$, with $H = H \times \{0\}$ and $H' = H \times \{a\}$. But then the normal bundle of $H'$ is trivial, contradicting the fact that $H$ is one-sided. □
*Remark 2.5.* Any manifold in case (b) admits a two-fold cover whose covering metric is as in case (a). Indeed, we can attach to $C$ another copy of $C$ along its interior boundary $\partial_i C$ using the involution that generates $\mathbb{Z}_2$. Switching the two cylinders induces the two-fold cover of $M^n$.
We proceed by showing that our geometric graph manifolds are essentially 3-dimensional. Observe that we only use here that $M^n \setminus W$ is connected, with no curvature assumptions. In fact, the same proof shows that if $M^n \setminus W$ has $k$ connected components, then $M^n$ splits off an $(n-k-2)$-dimensional Euclidean factor.
*Claim.* If $n > 3$, the universal cover of $M^n$ splits off an $(n-3)$-dimensional Euclidean factor.
*Proof.* Assume first that $M^n$ is the union of two cylinders $C$ and $C'$ with common boundary $H$. Consider the nullity distributions $\Gamma$ and $\Gamma'$ on the interior of $C$ and $C'$, which extend uniquely to parallel codimension one distributions $F$ and $F'$ on $H$, respectively. Recall that $F \neq F'$ since otherwise the universal cover is an isometric product $N^2 \times \mathbb{R}^{n-2}$. So $J := F \cap F'$ is a codimension two parallel distribution on $H$. We claim that $J$ extends to a parallel distribution on the interior of both $C$ and $C'$.
To see this, we only need to argue for $C$, so lift the distributions $J$ and $F$ to the cover $S^1 \times \mathbb{R}^{n-2}$ of $H$ under the projection $\pi: L^2 \times \mathbb{R}^{n-2} \to C = (L^2 \times \mathbb{R}^{n-2})/G$, and denote these lifts by $\hat{J}$ and $\hat{F}$. They are again parallel distributions whose leaves project to those of $J$ and $F$ under $\pi$. At a point $(x_0, v_0) \in S^1 \times \mathbb{R}^{n-2}$ a leaf of $\hat{F}$ is given by $\{x_0\} \times \mathbb{R}^{n-2}$ and hence a leaf of $\hat{J}$ by $\{x_0\} \times W$ for some affine hyperplane $W \subset \mathbb{R}^{n-2}$. Since $\hat{J}$ is parallel, any other leaf is given by $\{x\} \times W$ for $x \in S^1$. Since $G$ permutes the leaves of $\hat{F}$, $W$ is invariant under the projection of $G$ into $\text{Iso}(\mathbb{R}^{n-2})$. Therefore $\pi(\{x\} \times W)$ for $x \in L^2$ are the leaves of a parallel distribution on the interior of $C$, restricting to $J$ on its boundary.
Therefore, we have a global flat parallel distribution $J$ of codimension three on $M^n$, which implies that the universal cover splits isometrically as $N^3 \times \mathbb{R}^{n-3}$.
Now, if $M^n$ consists of only one open cylinder $C$ and its one-sided boundary, by Remark 2.5 there is a two-fold cover $\tilde{M}^n$ of $M^n$ which is the union of two cylinders as above and whose universal cover splits off an $(n-3)$-dimensional Euclidean factor. □
We can now finish the proof of Theorem A. Since $M^n$ is compact with nonnegative curvature, the splitting theorem implies that the universal cover splits isometrically as
$\tilde{M}^n = Q^k \times \mathbb{R}^{n-k}$ with $Q^k$ compact and simply connected. According to the above claim, $k=2$ and hence $Q^2 \simeq \mathbb{S}^2$, or $k=3$ and by Theorem 1.2 in [Ha] we have $Q^3 \simeq \mathbb{S}^3$. In the latter case, we claim that the metric on $\mathbb{S}^3$ is again a geometric graph manifold metric. Indeed, if $\sigma: \mathbb{S}^3 \times \mathbb{R}^{n-3} \to M^n$ is the covering map, and $C \subset M^n$ a twisted cylinder, then in $C' = \sigma^{-1}(C)$ the codimension 2 nullity leaves contain the $\mathbb{R}^{n-3}$ factor. Since the universal cover of $C'$ has the form $L^2 \times \mathbb{R}^{n-2}$, the metric on $\mathbb{S}^3$ must be a geometric graph manifold metric.
### 3. GEOMETRIC GRAPH 3-MANIFOLDS WITH NONNEGATIVE CURVATURE
In this section we classify 3-dimensional geometric graph manifolds with nonnegative scalar curvature, giving an explicit construction of all of them. As a consequence, we show that, for each lens space, the number of connected components of the moduli space of such metrics is infinite, while for each prism manifold, the moduli space is connected. Recall that we assume that $M^3$ itself is not a single twisted cylinder. Furthermore, none of the twisted cylinders are flat, hence their generating surfaces are discs and $M^3$ is the union of one or two twisted cylinders according to the dichotomy in Corollary 2.4.
Let $M^3$ be such a compact geometric graph manifold with nonnegative scalar curvature. We first observe that $M^3$ is orientable. Indeed, by Theorem A, $M^3 = \mathbb{S}^3/\Pi$ for some finite group $\Pi$ acting freely. Moreover, if an element $g \in \Pi$ reverses orientation, the Lefschetz fixed point theorem implies that $g$ has a fixed point. Thus every cylinder $C = (D \times \mathbb{R})/G$ is orientable as well, i.e. the action of $G$ preserves orientation.
For $g \in G$, we write $g = (g_1, g_2) \in \text{Iso}(D \times \mathbb{R})$. Thus $g_1$ preserves the closed geodesic $\partial D$ and fixes the soul point $x_0 \in D$. If $g \neq e$ and $g_1$ reverses orientation, then so does $g_2$ and hence $g$ would have a fixed point. Thus $g_2$ preserves orientation and is a translation which is nontrivial since $g_1$ has a fixed point. This easily implies that $G = \mathbb{Z}$. Altogether, the twisted cylinders are of the form $C = (D \times \mathbb{R})/\mathbb{Z}$ with $\mathbb{Z}$ generated by some $g = (g_1, g_2)$. If $g_1$ is nontrivial, then $g_1$ is determined by its derivative at $x_0$. After orienting $D$, $d(g_1)_{x_0}$ is a rotation $R_\theta$ of angle $2\pi\theta$, $0 \le \theta < 1$. We simply say that $g_1$ acts as a rotation $R_\theta$ on $D$. Thus $g$ acts via
$$ (3.1) \qquad g(x,s) = (R_\theta(x), s+h) \in \text{Iso}(D \times \mathbb{R}), $$
for a certain constant $h > 0$, after orienting the nullity distribution $\Gamma \cong T^\perp D$. We can regard $\theta$ as the twist of the cylinder and $h$ as its height; see Figure 3. These, together with the length of $\partial D$, are the geometric invariants that characterize the twisted cylinder up to isometry. Moreover, $C$ has a parallel foliation by the nullity lines, i.e. the images of $\{p_0\} \times \mathbb{R}, p_0 \in D$, which are closed if and only if $\theta$ is rational. The interior boundary of $C$ is a flat 2-torus and the limits of the nullity lines induce a parallel foliation on $\partial_i C$. Observe that $\partial_i C$ also has a parallel foliation by closed geodesics given by the projection of $\partial D \times \{s_0\}, s_0 \in \mathbb{R}$, which will be denoted $\mathcal{F}(C)$.
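Concretely, the closedness of the nullity lines can be computed from $\theta$ and $h$; the following sketch (function name ours, $\theta$ given as an exact rational) returns the number of fundamental domains after which the line through a point off the soul closes, and its length.

```python
from fractions import Fraction

def nullity_line_period(theta, h):
    """Smallest k >= 1 with k*theta an integer: for g(x, s) = (R_theta(x), s + h),
    the nullity line through a point off the soul closes after k fundamental
    domains and has length k*h.  Pass theta as a Fraction (or 0); for
    irrational theta the line never closes."""
    k = Fraction(theta).denominator  # smallest k with k*theta an integer
    return k, k * h
```

For example, $\theta = 2/5$ and $h = 3$ give a closed nullity line of length $15$.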
Notice that the action of $\mathbb{Z}$ can be deformed smoothly until $\theta = 0$, and hence $C$ is diffeomorphic to a solid torus $D \times S^1$. According to Corollary 2.4, $M^3$ is thus either the union of two solid tori glued along their boundary, and hence diffeomorphic to a lens space,
FIGURE 3. A twisted cylinder
or it is a solid torus whose boundary is identified via an involution to form a Klein bottle, and therefore diffeomorphic to a prism manifold.
*Remark 3.2.* Let us clarify the role of the orientations in our description of $C$ in (3.1). Take a twisted cylinder $C$ with nonnegative scalar curvature, and $D$ a maximal leaf of $\Gamma^{\perp}$. Orienting $\Gamma$ is then equivalent to orienting $T^{\perp}D$, which in turn is equivalent to choosing one of the two generators of $\mathbb{Z}$. On the other hand, orienting $D$ is equivalent to choosing between the oriented angle $\theta$ above and $1-\theta$. In particular, these orientations are unrelated to the metric on $C$, i.e., changing orientations gives isometric cylinders.
Next, we show that the geometric graph manifold metric on $M^3$ is isotopic to a standard one. In order to do this, fix once and for all a metric $\langle \cdot, \cdot \rangle_0$ on the disc $D_0 = \{x \in \mathbb{R}^2 : |x| \le 1\}$ which is rotationally symmetric, has positive Gaussian curvature on the interior of $D_0$, and whose boundary is a closed geodesic of length 1 along which the Gaussian curvature vanishes to infinite order. We call the metric on $M^3$ *standard* if, for each twisted cylinder $C = (D \times \mathbb{R})/\mathbb{Z}$ in the complement of a core of $M^3$, the metric on $D$ is isometric to $r^2\langle \cdot, \cdot \rangle_0$ for some constant $r > 0$. Notice that such a metric on $M^3$ is unique up to isometry. For this we first show:
**LEMMA 3.3.** Let $\langle \cdot, \cdot \rangle$ be a metric on a disc $D$ with nonnegative Gaussian curvature. Assume that its boundary is a closed geodesic along which the curvature vanishes to infinite order, and that the metric is invariant under a group of isometries $K$. Then, given a constant $r > 0$, there exists a smooth path of metrics on $D$, $\langle \cdot, \cdot \rangle_s$, $1 \le s \le 2$, satisfying the same assumptions for all $s$, such that $\langle \cdot, \cdot \rangle_1 = \langle \cdot, \cdot \rangle$ and $\langle \cdot, \cdot \rangle_2 = r^2\langle \cdot, \cdot \rangle_0$, where $\langle \cdot, \cdot \rangle_0$ is the fixed rotationally symmetric metric on $D_0$.
*Proof.* Let $\langle \cdot, \cdot \rangle'$ be the standard flat metric on $D_0$. By the uniformization theorem we can write $\langle \cdot, \cdot \rangle = f_1^*(e^{2v}\langle \cdot, \cdot \rangle')$ for some diffeomorphism $f_1: D \to D_0$ and a smooth function $v$ on $D_0$. The metric $e^{2v}\langle \cdot, \cdot \rangle'$ is thus invariant under $C_{f_1}(K) = \{f_1 \circ g \circ f_1^{-1} : g \in K\}$, which fixes $f_1(x_0)$, where $x_0 \in D$ is the fixed point of the action of $K$. Equivalently, $h \in C_{f_1}(K)$ is a conformal transformation of $(D_0, \langle \cdot, \cdot \rangle')$ with conformal factor $e^{2v - 2v \circ h}$. Recall that the conformal transformations of $\langle \cdot, \cdot \rangle'$ on the interior of $D_0$ can be viewed as the isometry group of the hyperbolic disc model. Hence there exists a conformal transformation $j$ of $D_0$ with $j(f_1(x_0)) = 0$ and conformal factor $e^{2\tau}$. We can thus also write $\langle \cdot, \cdot \rangle = f^*(e^{2u}\langle \cdot, \cdot \rangle')$,
where $f = j \circ f_1 : D \to D_0$ and $u := (v - \tau) \circ j$. Now the metric $e^{2u}\langle \cdot, \cdot \rangle'$ is invariant under $C_f(K)$, which this time fixes the origin of $D_0$. So $k \in C_f(K)$ is a conformal transformation of $\langle \cdot, \cdot \rangle'$ fixing the origin, with conformal factor $e^{2u - 2u \circ k}$. But an isometry of the hyperbolic disc model fixing the origin is also an isometry of $\langle \cdot, \cdot \rangle'$. Hence $e^{2u} = e^{2u \circ k}$, i.e. $u$ is invariant under $k$. Altogether, $C_f(K) \subset \text{SO}(2) \subset \text{Iso}(D_0, \langle \cdot, \cdot \rangle')$ and $u$ is $C_f(K)$-invariant. Analogously, $r^2\langle \cdot, \cdot \rangle_0 = f_0^*(e^{2u_0}\langle \cdot, \cdot \rangle')$ with $f_0 \in \text{Diff}(D_0)$ satisfying $f_0(0) = 0$ and $u_0$ being $\text{SO}(2)$-invariant. In particular, $u_0$ is also $C_f(K)$-invariant.
We now consider the two metrics $e^{2u}\langle \cdot, \cdot \rangle'$ and $e^{2u_0}\langle \cdot, \cdot \rangle'$ on $D_0$. They both have the property that the boundary is a closed geodesic along which the curvature vanishes to infinite order. An easy computation shows that the assumption that the boundary is a closed geodesic, up to parametrization, is equivalent to the condition that the normal derivatives of $u$ and $u_0$, with respect to a unit normal vector in $\langle \cdot, \cdot \rangle'$, are equal to 1. Furthermore, since the curvature $G$ of a metric $e^{2w}\langle \cdot, \cdot \rangle'$ is given by $Ge^{2w} = -\Delta w$, $G$ vanishes to infinite order if and only if $\Delta w$ does. For each $0 \le s \le 1$, consider the $C_f(K)$-invariant metric on $D_0$ given by $\langle \cdot, \cdot \rangle^s = e^{2(1-s)u_0 + 2su + a(s)}\langle \cdot, \cdot \rangle'$, where $a(s)$ is the constant that makes the boundary have length $r$ for all $s$. Clearly, for each $s$, the boundary is again a closed geodesic up to parametrization and $G^s$ vanishes at the boundary to infinite order. Furthermore, since $G^s e^{2(1-s)u_0 + 2su + a(s)} = -(1-s)\Delta u_0 - s\Delta u$ and $\Delta u_0 < 0$, $\Delta u \le 0$ on the interior of $D_0$, the curvature of $\langle \cdot, \cdot \rangle^s$ is nonnegative, and positive on the interior of $D_0$ for $s < 1$. Thus $\langle \cdot, \cdot \rangle_s = f^*\langle \cdot, \cdot \rangle^{2-s}$, $1 \le s \le 2$, is the desired family of metrics on $D$. $\square$
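For the record, with the shorthand $w_s := (1-s)u_0 + su + a(s)/2$ (our notation), the boundary and curvature checks above read

$$ \partial_\nu w_s = (1-s)\,\partial_\nu u_0 + s\,\partial_\nu u = (1-s) + s = 1, \qquad G^s e^{2w_s} = -\Delta w_s = -(1-s)\,\Delta u_0 - s\,\Delta u \ge 0, $$

so each $\langle \cdot, \cdot \rangle^s$ has geodesic boundary and nonnegative curvature, as claimed.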
We can now apply this to deform the metric on $M^3$:
**PROPOSITION 3.4.** *A geometric graph manifold metric with nonnegative scalar curvature is isotopic, through geometric graph manifold metrics with nonnegative scalar curvature, to a standard one.*
*Proof.* We define the isotopy separately on each cylinder $C = (D \times \mathbb{R})/\mathbb{Z}$, such that the isometry type of the core $H = \partial C$, and the foliation of $H$ induced by the nullity leaves of $C$, stays fixed. The metric on $D$ is invariant under the group of isometries $K = \{g_1 | (g_1, g_2) \in \mathbb{Z}\}$ and we apply Lemma 3.3 to obtain a family of metrics $\langle , \rangle_s + dt^2$ on $D \times \mathbb{R}$, which is invariant under the action of $\mathbb{Z}$. We now glue the induced metrics on $(D \times \mathbb{R})/\mathbb{Z}$ to the core $H$ and choose $r$ such that the arc length parametrization of $\partial C$ and nullity leaves in $H$ match. Performing this process on each cylinder, we obtain the desired deformation of the metric on $M^3$. $\square$
We now discuss how $C$ induces a natural marking on its interior boundary $\partial_i C$. For this, let us first recall some elementary facts about lattices $\Lambda \subset \mathbb{R}^2$, where we assume that the orientation on $\mathbb{R}^2$ is fixed.
**Definition 3.5.** A marking of the lattice $\Lambda$ is a choice of an oriented basis $\{v, \hat{v}\}$ of $\Lambda$, and we say that the marking is *normalized* if
$$\langle v, \hat{v} \rangle / \|v\|^2 \in [0, 1).$$
Notice that for any primitive $v \in \Lambda$, i.e. $tv \notin \Lambda$ for $0 < t < 1$, there exists a unique oriented normalized marking $\{v, \hat{v}\}$ of $\Lambda$. Indeed, if $\{v, w\}$ is some oriented basis of $\Lambda$, then $\langle v, w + nv \rangle / \|v\|^2 = \langle v, w \rangle / \|v\|^2 + n$ and hence there exists a unique $n \in \mathbb{Z}$ such that $\{v, \hat{v}\}$ with $\hat{v} = w + nv$ is normalized.
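The normalization argument above amounts to a one-line computation; the following sketch (function name and tuple conventions are ours) produces $\hat{v} = w + nv$ from any oriented basis $\{v, w\}$ with $v$ primitive.

```python
import math

def normalize_marking(v, w):
    """Return the unique v_hat = w + n*v, n an integer, such that {v, v_hat}
    is a normalized marking, i.e. <v, v_hat>/||v||^2 lies in [0, 1)."""
    t = (v[0] * w[0] + v[1] * w[1]) / (v[0] ** 2 + v[1] ** 2)
    n = -math.floor(t)  # the unique integer shift taking t into [0, 1)
    return (w[0] + n * v[0], w[1] + n * v[1])
```

For instance, $v = (1,0)$ and $w = (3,2)$ give $\hat{v} = (0,2)$.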
If $T^2$ is an oriented flat torus and $z_0 \in T^2$ a base point, then $T^2 = T_{z_0} T^2 / \Lambda$ where $\Lambda$ is the lattice given by $\Lambda = \{w \in T_{z_0} T^2 : \exp_{z_0}(w) = z_0\}$. A (normalized) marking of $T^2$ is a (normalized) marking of its lattice $\Lambda$.
Now consider an oriented twisted cylinder $C = (D \times \mathbb{R})/\mathbb{Z}$ with its standard metric, where the action of $\mathbb{Z}$ is given by (3.1) for some $\theta$ and $h$. The totally geodesic flat torus $T^2 = \partial_i C$, which inherits an orientation from $C$, has a natural marking based at $z_0 = [(p_0, s_0)]$. For this, denote by $\gamma: [0, 1] \to \partial D$ the simple closed geodesic with $\gamma(0) = p_0$ which follows the orientation of $D = [D \times \{s_0\}] \subset C$. Then, since $\theta \in [0, 1)$, we have that
$$ \mathcal{B}(\gamma) := \{v, \hat{v}\}, \text{ where } v = \gamma'(0) \text{ and } \hat{v} = \theta v + h\partial/\partial s, $$
is a normalized marking of $T^2$ based at $z_0$; see Figure 3. Notice that the geodesic $\sigma(s) = \exp(s\hat{v})$, $0 \le s \le 1$, is simple and closed with length $\|\hat{v}\|$. Recall that $\mathcal{F}(C)$ denotes the foliation of $T^2$ by parallel closed geodesics $\{\gamma \times \{s\}\}$, $s \in [0, h)$.
It is important for us that the above process can be reversed for standard metrics:
**PROPOSITION 3.6.** Let $T^2$ be a flat oriented torus and $\mathcal{F}$ an oriented foliation of $T^2$ by parallel closed simple geodesics. Then there exists an oriented twisted cylinder $C_{\mathcal{F}} = (D \times \mathbb{R})/\mathbb{Z}$ over a standard oriented disk $D$, unique up to isometry, such that $\partial_i C_{\mathcal{F}} = T^2$ and $\mathcal{F}(C_{\mathcal{F}}) = \mathcal{F}$. Moreover, different orientations induce isometric metrics.
*Proof.* Choose $\gamma \in \mathcal{F}$, and set $z_0 = \gamma(0)$ and $v = \gamma'(0)$. By the above, there exists a unique vector $\hat{v}$ such that $\mathcal{B}(\gamma) = \{v, \hat{v}\}$ is a normalized marking of $T^2$ based at $z_0$. Set $r = \|v\|$, $\theta = \langle v, \hat{v} \rangle / \|v\|^2$ and $h = \|\hat{v} - \theta v\|$. With respect to the oriented orthonormal basis $e_1 = v/r$, $e_2 = (\hat{v} - \theta v)/h$ of $T_{z_0} T^2$ we have
$$ T^2 = \mathbb{R}^2 / \Lambda = (\mathbb{R} \oplus \mathbb{R}) / (\mathbb{Z}v \oplus \mathbb{Z}\hat{v}) = (S_r^1 \times \mathbb{R}) / \mathbb{Z}\hat{v}, $$
where $S_r^1$ is the oriented circle of length $r$. Since $v = re_1$ and $\hat{v} = \theta v + he_2$, we can also write $T^2 = (S_r^1 \times \mathbb{R}) / \langle g \rangle$ where $g(p, s) = (R_\theta(p), s+h)$. Now we simply attach $(D_0, r^2\langle , \rangle_0)$ to $S_r^1$ preserving orientations to build $C = (D_0 \times \mathbb{R}) / \langle g \rangle$. Notice that any two base points of $T^2$ are taken to each other by an orientation preserving isometry of $C$, restricted to $\partial C = T^2$. Thus the construction is independent of the choice of $z_0$ and the choice of $\gamma \in \mathcal{F}$. By Remark 3.2, different choices of orientation induce the same metric on $C$, and hence $C_{\mathcal{F}}$ is unique up to isometry. □
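The formulas $r = \|v\|$, $\theta = \langle v, \hat{v} \rangle/\|v\|^2$ and $h = \|\hat{v} - \theta v\|$ used in the proof can be packaged as follows (a sketch; the function name and tuple conventions are ours).

```python
import math

def cylinder_invariants(v, v_hat):
    """Recover the invariants (r, theta, h) of the standard twisted cylinder
    C_F from a normalized marking {v, v_hat} of its boundary torus."""
    r2 = v[0] ** 2 + v[1] ** 2
    theta = (v[0] * v_hat[0] + v[1] * v_hat[1]) / r2
    h = math.hypot(v_hat[0] - theta * v[0], v_hat[1] - theta * v[1])
    return math.sqrt(r2), theta, h
```

For the marking $v = (2,0)$, $\hat{v} = (1,3)$ this returns $r = 2$, $\theta = 1/2$, $h = 3$.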
**Remark 3.7.** If we do not assume that the metric on $C$ is standard, then the construction of $C_{\mathcal{F}}$ depends on the choice of base point, and one has to assume that the metric on $D$ is invariant under $R_\theta$, where $\theta$ is the angle determined by the marking of $T^2$ induced by $\mathcal{F}$.
We can now easily classify standard geometric graph manifold metrics with two-sided core, proving case (a) of Theorem B.
**THEOREM 3.8.** Let $M^3$ be a compact geometric graph manifold of nonnegative scalar curvature with irreducible universal cover, and assume that its core $T^2$ is two-sided. Then, $M^3 = C_1 \cup T^2 \cup C_2$, where $C_i = (D_i \times \mathbb{R})/\mathbb{Z}$ are twisted cylinders over 2-disks that induce two different foliations $\mathcal{F}_i = \mathcal{F}(C_i)$ of $T^2$ by parallel closed geodesics, $i = 1, 2$.
Conversely, given a flat 2-torus $T^2$ with two different foliations $\mathcal{F}_i$ by parallel closed geodesics, there exists a standard geometric graph manifold $M^3 = C_1 \cup T^2 \cup C_2$ with irreducible universal cover whose core is $T^2$ and $C_i = C_{\mathcal{F}_i}$. Moreover, this data determines the standard metric up to isometries, i.e., if $h: T^2 \to \hat{T}^2$ is an isometry between flat tori, then $\hat{M}^3 = \hat{C}_1 \cup \hat{T}^2 \cup \hat{C}_2$ is isometric to $M^3$, where $\hat{C}_i = C_{h(\mathcal{F}_i)}$.
*Proof.* We only need to prove uniqueness. The core of a standard metric is unique since, by the choice of the metric on $D_0$, the set of nonflat points is dense. It is clear then that an isometry between standard geometric graph manifolds will send the core to the core, and the parallel foliations to the parallel foliations. Hence the core and the parallel foliations are determined by the isometry class of $M^3$.
Conversely, by uniqueness in Proposition 3.6 the standard twisted cylinders $C_{\mathcal{F}_i}$ and $C_{h(\mathcal{F}_i)}$ are isometric, which in turn induces an isometry between $M^3$ and $\hat{M}^3$. The only ambiguity is on which side of the torus to attach each of the twisted cylinders, but this simply gives an orientation reversing isometry fixing the core. $\square$
Now, let us consider the one-sided core case. Here we know that $M^3 = C \cup K$ and that $K$ is a nonorientable quotient of the flat torus $\partial_i C$ and hence a flat Klein bottle. It is easy to see that, if a flat torus $T^2$ admits an orientation reversing fixed point free isometric involution $j$, then $T^2$ has to be isometric to a rectangular torus $S_r^1 \times S_s^1$ on which $j$ acts as in (1.1), i.e., $j(z, w) = (-z, \bar{w})$. Thus, since the universal cover of $M^3$ is irreducible, $\mathcal{F}(C)$ does not coincide with either of the two invariant parallel foliations $\{S_r^1 \times \{w\} : w \in S_s^1\}$ and $\{\{z\} \times S_s^1 : z \in S_r^1\}$. We denote the first one by $\mathcal{F}(j)$.
As in the proof of Theorem 3.8, we conclude:
**THEOREM 3.9.** Let $M^3$ be a compact geometric graph manifold of nonnegative scalar curvature with irreducible universal cover, and assume that its core $K$ is one-sided. Then $M^3 = C \cup K$, where $C = (D \times \mathbb{R})/\mathbb{Z}$ is a twisted cylinder over a 2-disk with $\partial_i C = T^2$ isometric to a rectangular torus, and $\partial C = K = T^2/\mathbb{Z}_2$ a flat totally geodesic Klein bottle.
Conversely, a rectangular flat torus $T^2 = S_r^1 \times S_s^1$ and a foliation $\mathcal{F}$ of $T^2$ by parallel closed geodesics different from $S_r^1 \times \{p\}$ or $\{p\} \times S_s^1$ define a standard geometric graph manifold with irreducible universal cover $M^3 = C_\mathcal{F} \cup K$ whose core $K$ is one-sided. Moreover, $T^2$ and $\mathcal{F}$ determine $M^3$ up to isometry.
We now introduce an isometric invariant of a geometric graph manifold. As we will see, this invariant determines the diffeomorphism type of the manifold.
For this purpose, we start by defining the slope $S(\mathcal{F}_1, \mathcal{F}_2)$ of a foliation $\mathcal{F}_2$ by closed simple geodesics of an oriented flat torus $T^2$ with respect to another such foliation $\mathcal{F}_1$. In order to do this, we first assume that the foliations are oriented. Fix $z_0 \in T^2$, and take
$\gamma_i \in \mathcal{F}_i$ parametrized over $[0, 1]$ such that $\gamma_1(0) = \gamma_2(0) = z_0$. Then $v_i := \gamma_i'(0)$ is primitive, and as observed above, there exists a unique $\hat{v}_i$ such that $\mathcal{B}(\gamma_i) = \{v_i, \hat{v}_i\}$ are two normalized markings of $T^2$ based at $z_0$. Since $\text{SL}(2, \mathbb{Z})$ acts transitively on the set of oriented bases of a given lattice, there exist coprime integers $p, q$ and $a, b$ with $bq - ap = 1$ such that
$$ (3.10) \qquad v_2 = qv_1 + p\hat{v}_1, \quad \hat{v}_2 = av_1 + b\hat{v}_1. $$
We also have $p \neq 0$ since $v_1 \neq \pm v_2$. Notice that, since $v_2$ determines $\hat{v}_2$, the integers $p$ and $q$ determine $a$ and $b$. Observe that $q/p \in \mathbb{Q}$ is independent of the choice of $z_0$ since the foliations are parallel. It does not depend on the orientations of the foliations either, since $\{-v, -\hat{v}\}$ is the oriented marking associated to $-\gamma$. We call
$$ S(\mathcal{F}_1, \mathcal{F}_2) := q/p $$
the slope of $\mathcal{F}_2$ with respect to $\mathcal{F}_1$. Note though that reversing the orientation of the torus changes the sign of the slope, since this corresponds to replacing $\hat{v}_i$ with $-\hat{v}_i$. Moreover, since $v_1 = bv_2 - p\hat{v}_2$, we have that $S(\mathcal{F}_2, \mathcal{F}_1) = -b/p$.
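The linear algebra behind (3.10) and the inverse relation can be checked directly; in the sketch below (function name ours), $a$ and $b$ must be supplied, since beyond the determinant condition they are pinned down by the geometry of the lattice.

```python
from fractions import Fraction

def slope_data(p, q, a, b):
    """Given the change of basis (3.10), v2 = q*v1 + p*vhat1 and
    vhat2 = a*v1 + b*vhat1 with b*q - a*p = 1, return the slope q/p of
    F2 with respect to F1 and the slope -b/p of F1 with respect to F2."""
    assert b * q - a * p == 1, "the basis change must lie in SL(2, Z)"
    assert (b * q) % p == 1 % p  # b*q = 1 + a*p, i.e. b = q^{-1} mod p
    # Inverting the matrix gives v1 = b*v2 - p*vhat2, whence slope b/(-p).
    return Fraction(q, p), Fraction(-b, p)
```

For example, $p = 5$, $q = 2$ with $a = 1$, $b = 3$ yield the slopes $2/5$ and $-3/5$.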
If $M^3 = C_1 \sqcup T^2 \sqcup C_2$ has a two-sided core, a choice of orientations $\mathfrak{o} = (\mathfrak{o}_M, \mathfrak{o}_T)$ of both $M^3$ and its core $T^2$ orients the normal bundle of $T^2$. We can thus choose the order of the two twisted cylinders $(C_1, C_2)$ by letting $C_1$ be the cylinder containing the positive direction of the normal bundle. We thus define the *slope* of the lens space as
$$ S(M^3, \mathfrak{o}) = S(M^3, (\mathfrak{o}_M, \mathfrak{o}_T)) := S(\mathcal{F}(C_1), \mathcal{F}(C_2)) \in \mathbb{Q}. $$
Notice that $S(M^3, (\mathfrak{o}_M, -\mathfrak{o}_T)) = -q/p$ and $S(M^3, (-\mathfrak{o}_M, \mathfrak{o}_T)) = -b/p$ where $b$ is defined in (3.10). Since $b = q^{-1} \mod p$, this is consistent with the fact that $L(p, q)$ and $L(p, q')$ are diffeomorphic if and only if $q' = \pm q^{\pm 1} \mod p$.
Analogously, if $M^3 = C \sqcup K$ has a one-sided core $K = \partial_i C / \langle j \rangle$, a choice of an orientation $\mathfrak{o} = \mathfrak{o}_M$ induces an orientation of the torus $\partial_i C$. We call $S(M^3, \mathfrak{o}) := S(\mathcal{F}(j), \mathcal{F}(C))$ the *slope* of the prism manifold, recalling that $\mathcal{F}(j) = \{S^1 \times \{w\} : w \in S^1\}$. Here we have $S(M^3, -\mathfrak{o}) = -S(M^3, \mathfrak{o})$.
Notice that, in either case, the slope of $M^3$ is well defined even when the geometric graph manifold metric is not standard.
We now observe:
**PROPOSITION 3.11.** The slope $S(M^3, \mathfrak{o}) = q/p$ is an oriented isometry invariant of a geometric graph manifold. Furthermore, the slopes $-q/p$ and $\pm b/p$ are achieved by changing the orientation on $M^3$ or the core $T^2$. Conversely, any rational number is the slope of a geometric graph manifold, both on a lens space and on a prism manifold.
*Proof.* First, assume that $M^3 = C_1 \sqcup T^2 \sqcup C_2$ is a lens space and let $f: M \to M'$ be an orientation preserving isometry. By Corollary 2.4 the core $H$ is unique up to isometry, i.e. there exists a maximal isometric product $\tilde{H} \times [0, a] \subset M^3$, such that any $\tilde{H} \times \{s\}$ for $0 \le s \le a$ can be regarded as a core, and any core is of this form. If we choose $H = \tilde{H} \times \{a/2\}$, and similarly $H'$ for $M'$, then $f$ takes $H$ to $H'$ and by Theorem 3.8 the isometry $f|_H$ takes the boundary nullity foliations of $H$ into those of $H'$. Since we also
assume that $f|_H$ is orientation preserving, the slopes of $M$ and $M'$ are the same. We can
|
| 314 |
+
argue similarly for a prism manifold, in which case the core is even unique.
|
| 315 |
+
|
| 316 |
+
To achieve any slope $q/p$, we can choose the standard basis $e_1, e_2$ of a product torus $T^2 = S^1 \times S^1$ and let $v = qe_1 + pe_2$. Then there exists a unique $\hat{v}$ such that $\{v, \hat{v}\}$ is a normalized marking of the torus. This gives rise to two parallel foliations of $T^2$ with slope $q/p$ and by Theorem 3.8 they can be realized by a geometric graph manifold metric on a lens space. The same data also gives rise to a prism manifold by Theorem 3.9. $\square$

We are now in a position to prove Theorem C in the introduction, which states that $S(M^3, \mathbf{o})$ determines the diffeomorphism type of the manifold.
*Proof of Theorem C.* Recall that the twisted cylinders $C_i$ with invariants $\theta_i, h_i$ as in (3.1) are diffeomorphic to $D_i \times S^1$ by deforming $\theta_i$ continuously to 0. For a two-sided core $T^2$, choose $\gamma_i \in \mathcal{F}_i$, and let $\mathcal{B}(\gamma_i) = \{v_i, \hat{v}_i\}$ be the normalized markings of $T^2$ defined by $C_i$. Then the natural generators of $\pi_1(\partial(D_i \times S^1)) = \mathbb{Z} \oplus \mathbb{Z}$ are represented by the simple closed geodesics $\gamma_i$ and $\sigma_i(t) = \exp(t\hat{v}_i)$, $0 \le t \le 1$, since the marking $\{v_i, \hat{v}_i\}$ is normalized. According to the definition of slope, $v_2 = qv_1 + p\hat{v}_1$ which implies that under the diffeomorphism from $\partial D_2 \times S^1 \simeq \partial C_2$ to $\partial C_1 \simeq \partial D_1 \times S^1$, the element $(1, 0) \in \pi_1(\partial(D_2 \times S^1))$ is taken to $(q, p) \in \pi_1(\partial(D_1 \times S^1))$. By definition this is the lens space $L(p, q)$; see Section 1.

To determine the topological type in the one-sided case, we view $M^3$ as the union of $C$ with the flat twisted cylinder $N^3$ defined in (1.1). Then $\partial N^3 = T^2$ is a rectangular torus which we glue to $\partial_i C$. Taking $\epsilon \to 0$ (or considering $T^2 \times (0, \epsilon]$ as part of $C$ instead), we obtain $M^3$. We can now use our second description of prism manifolds in Section 1 and the proof finishes as in the previous case. $\square$

We finally classify the moduli space of metrics.

**PROPOSITION 3.12.** On a lens space $(L(p,q), \mathbf{o})$ the connected components of the moduli space of geometric graph manifold metrics with nonnegative scalar curvature are parameterized by its slope $q/p \in \mathbb{Q}$, and therefore it has infinitely many components. On the other hand, on a prism manifold $P(q,p)$ with $q > 1$ the moduli space is connected.

*Proof.* In Proposition 3.4 we saw that we can deform any geometric graph manifold metric into one which is standard. According to Theorem 3.8, the standard geometric graph manifold metric on a lens space can equivalently be uniquely defined by the triple $(T^2, \mathcal{F}_1, \mathcal{F}_2)$. Thus, we can deform the flat metric on the torus, carrying along the foliations $\mathcal{F}_i$, which induces a deformation of the original metric by standard metrics. In the proof of Proposition 3.6 we saw that, after choosing orientations, for $\gamma_i \in \mathcal{F}_i$ with $v_i = \gamma'_i(0)$ we have the normalized markings $\mathcal{B}(\gamma_i) = \{v_i, \hat{v}_i\}$ which represent a fundamental domain of the lattice defined by $T^2$. We can thus deform the flat torus to a unit square torus such that the first marking is given by $v_1 = (1,0)$, $\hat{v}_1 = (0,1)$. Then $v_2 = (q,p) = qv_1 + p\hat{v}_1$, which in turn determines $\hat{v}_2$, and $q/p$ is the slope of $\mathcal{F}_2$ with respect to $\mathcal{F}_1$. Metrics with different slope can clearly not be deformed into each other since the invariant is a rational number.

---PAGE_BREAK---

Since the diffeomorphism type of the lens space only depends on $\pm q^{\pm 1}$ mod $p$, we obtain infinitely many components.

For a prism manifold, we similarly deform the metric to be standard and the rectangular torus into a unit square. But then the absolute value of its slope already uniquely determines its diffeomorphism type. $\square$

*Remarks.* a) For a lens space $L(p, q) = \mathbb{S}^3/\mathbb{Z}_p$ one can assume that $p, q > 0$, $\gcd(p, q) = 1$ and $q \le p$ since the action of $\mathbb{Z}_p$ is determined by $q \bmod p$. Then the slopes $q'/p + n$ for $n \in \mathbb{N} \cup \{0\}$, and $q' = \pm q^{\pm 1} \bmod p$ with $0 < q' \le p$, parametrize the infinitely many distinct connected components of geometric graph manifold metrics of nonnegative curvature in $L(p, q)$. Yet, the lens space $L(4p, 2p-1)$ has one further component since it is diffeomorphic to $P(1, p)$. This component is distinct from the others since the core is one-sided.
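The appearance of the single extra component on $L(4p, 2p-1)$ is compatible with the slope count above, since $q = 2p-1$ is its own inverse mod $4p$; a quick arithmetic check (our own illustration, not from the paper):

```python
# (2p-1)^2 = 4p^2 - 4p + 1 = 1 (mod 4p), so q = 2p-1 satisfies
# q = q^{-1} (mod 4p), and its class of slopes q' = +/- q^{+/-1} (mod 4p)
# is just {2p-1, 2p+1}.
for p in range(1, 50):
    q = 2 * p - 1
    assert (q * q) % (4 * p) == 1          # q is its own inverse
    assert (-q) % (4 * p) == 2 * p + 1     # the other representative
```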

b) One easily sees that the angle $\alpha$ between the nullity foliations of a lens space, i.e., the angle between $v_1$ and $v_2$, is given by $\cos(\alpha) = (q+p\theta_1)r_1/r_2 = (b-p\theta_2)r_2/r_1$, where $r_i = |v_i|$ and $\theta_i$ are the twists of the two cylinders. One can thus make the nullity leaves orthogonal if and only if $0 \le -q/p < 1$ and in that case $r_2 = ph_1$, $h_2 = r_1/p$ and $\theta_1 = -q/p$, $\theta_2 = b/p$. This determines the metric on the lens space described in the introduction as a quotient of Figure 1, and is thus the only component containing a metric with orthogonal nullity leaves.

c) We can explicitly describe the geometric graph manifold metrics on $\mathbb{S}^3 = L(1, 1)$ up to deformation. We assume that the core is a unit square and that the first foliation is parallel to $(1, 0)$, i.e. the first cylinder is a product cylinder. Then the second marking is given by $v_2 = (q, 1)$, $\hat{v}_2 = (q - 1, 1)$. By choosing the orientations appropriately, we can assume $q \ge 0$. According to the proof of Proposition 3.4, the marking $\{v, \hat{v}\}$ corresponds to a twisted cylinder as in (3.1) with $r = \|v\|$, $\theta = \langle v, \hat{v} \rangle / \|v\|^2$ and $h = \|\hat{v} - \theta v\|$. Thus in our case the second cylinder is given by $r = 1/h = \sqrt{1+q^2}$, and $\theta = (1+q^2-q)/(1+q^2)$. The slope is $q$, and the standard metric in Figure 1 corresponds to $q=0$.
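The invariants in Remark c) follow directly from the quoted formulas of Proposition 3.4 and can be checked numerically; a minimal sketch (the helper name is ours):

```python
from math import sqrt, isclose

def cylinder_invariants(v, vh):
    """Invariants r, theta, h of the twisted cylinder determined by a
    marking {v, vh}, following the formulas quoted from Proposition 3.4:
    r = |v|, theta = <v, vh>/|v|^2, h = |vh - theta*v|."""
    r = sqrt(v[0] ** 2 + v[1] ** 2)
    theta = (v[0] * vh[0] + v[1] * vh[1]) / r ** 2
    h = sqrt((vh[0] - theta * v[0]) ** 2 + (vh[1] - theta * v[1]) ** 2)
    return r, theta, h

# second marking on S^3 = L(1,1): v2 = (q, 1), vh2 = (q-1, 1)
for q in range(0, 6):
    r, theta, h = cylinder_invariants((q, 1), (q - 1, 1))
    assert isclose(r, sqrt(1 + q * q))                 # r = sqrt(1+q^2)
    assert isclose(r, 1 / h)                           # r = 1/h
    assert isclose(theta, (1 + q * q - q) / (1 + q * q))
```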
IMPA: EST. DONA CASTORINA 110, 22460-320, RIO DE JANEIRO, BRAZIL
*E-mail address:* luis@impa.br

UNIVERSITY OF PENNSYLVANIA: PHILADELPHIA, PA 19104, USA
*E-mail address:* wziller@math.upenn.edu
samples/texts_merged/2531237.md
---PAGE_BREAK---

# APPROXIMATION AND MODULI OF FRACTIONAL ORDERS IN SMIRNOV-ORLICZ CLASSES

RAMAZAN AKGÜN AND DANIYAL M. ISRAFILOV

Balikesir University, Turkey and Institute of Math. and Mech. NAS, Azerbaijan

**ABSTRACT.** In this work we investigate the approximation problems in the Smirnov-Orlicz spaces in terms of the fractional modulus of smoothness. We prove direct and inverse theorems in these spaces and obtain, in particular, a constructive description of the Lipschitz classes of functions defined by the fractional order modulus of smoothness.

## 1. PRELIMINARIES AND INTRODUCTION

A function $M(u) : \mathbb{R} \to \mathbb{R}^+$ is called an $N$-function if it admits the representation

$$M(u) = \int_{0}^{|u|} p(t) dt,$$

where the function $p(t)$ is right continuous and nondecreasing for $t \ge 0$, positive for $t > 0$, and satisfies the conditions

$$p(0) = 0, \quad p(\infty) := \lim_{t \to \infty} p(t) = \infty.$$

The function

$$N(v) := \int_{0}^{|v|} q(s) ds,$$

where

$$q(s) := \sup_{p(t) \le s} t, \quad (s \ge 0)$$

2000 Mathematics Subject Classification. 30E10, 46E30, 41A10, 41A25.

Key words and phrases. Orlicz space, Smirnov-Orlicz class, Dini-smooth curve, direct theorems, inverse theorems, fractional modulus of smoothness.

---PAGE_BREAK---

is called the complementary function of $M$.
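For the model case $M(u) = |u|^p/p$ this construction recovers the classical Young pair $N(v) = |v|^{p'}/p'$ with $1/p + 1/p' = 1$; a numeric sketch under that assumption (the function name is ours):

```python
from math import isclose

# For M(u) = |u|^p / p we have p(t) = t^(p-1), hence
# q(s) = sup{t : p(t) <= s} = s^(1/(p-1)), and the complementary
# N-function is N(v) = |v|^p' / p' with 1/p + 1/p' = 1.
def N_complementary(v, p, steps=100_000):
    """Compute N(v) = integral_0^|v| q(s) ds by the midpoint rule."""
    exponent = 1.0 / (p - 1.0)
    ds = abs(v) / steps
    return sum(((i + 0.5) * ds) ** exponent for i in range(steps)) * ds

p = 3.0
p_conj = p / (p - 1.0)          # p' = 3/2
v = 2.0
assert isclose(N_complementary(v, p), v ** p_conj / p_conj, rel_tol=1e-4)
```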

Let $\Gamma$ be a rectifiable Jordan curve and let $G := \text{int}\Gamma$, $G^- := \text{ext}\Gamma$, $\mathbb{D} := \{w \in \mathbb{C} : |w| < 1\}$, $\mathbb{T} := \partial\mathbb{D}$, $\mathbb{D}^- := \text{ext}\mathbb{T}$. Without loss of generality we may assume $0 \in G$. We denote by $L^p(\Gamma)$, $1 \le p < \infty$, the set of all measurable complex valued functions $f$ on $\Gamma$ such that $|f|^p$ is Lebesgue integrable with respect to arclength. By $E^p(G)$ and $E^p(G^{-})$, $0 < p < \infty$, we denote the Smirnov classes of analytic functions in $G$ and $G^-$, respectively. It is well-known that every function $f \in E^1(G)$ or $f \in E^1(G^{-})$ has non-tangential boundary values a.e. on $\Gamma$ and, if we use the same notation for the nontangential boundary value of $f$, then $f \in L^1(\Gamma)$.

Let $M$ be an $N$-function and $N$ be its complementary function. By $L_M(\Gamma)$ we denote the linear space of Lebesgue measurable functions $f: \Gamma \to \mathbb{C}$ satisfying the condition

$$ \int_{\Gamma} M [\alpha |f(z)|] |dz| < \infty $$

for some $\alpha > 0$.

The space $L_M(\Gamma)$ becomes a Banach space with the norm

$$ \|f\|_{L_M(\Gamma)} := \sup \left\{ \int_{\Gamma} |f(z)g(z)| |dz| : g \in L_N(\Gamma), \rho(g; N) \le 1 \right\}, $$

where

$$ \rho(g; N) := \int_{\Gamma} N [|g(z)|] |dz|. $$

The norm $\|\cdot\|_{L_M(\Gamma)}$ is called the Orlicz norm and the Banach space $L_M(\Gamma)$ is called an Orlicz space. Every function in $L_M(\Gamma)$ is integrable on $\Gamma$ [18, p. 50], i.e.

$$ L_M(\Gamma) \subset L^1(\Gamma). $$

An $N$-function $M$ satisfies the $\Delta_2$-condition if

$$ \limsup_{x \to \infty} \frac{M(2x)}{M(x)} < \infty. $$

The Orlicz space $L_M(\Gamma)$ is reflexive if and only if the $N$-function $M$ and its complementary function $N$ both satisfy the $\Delta_2$-condition [18, p. 113].
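The $\Delta_2$-condition is easy to probe numerically; an illustration of our own (not from the paper): $M(x) = x^3$ satisfies it, while $M(x) = e^x - 1$ does not.

```python
from math import exp

# M(x) = x^3: the ratio M(2x)/M(x) = 2^3 = 8 is constant, so Delta_2 holds.
# M(x) = e^x - 1: the ratio (e^{2x}-1)/(e^x-1) ~ e^x grows without bound.
xs = [float(k) for k in range(1, 20)]
power_ratios = [(2 * x) ** 3 / x ** 3 for x in xs]
exp_ratios = [(exp(2 * x) - 1) / (exp(x) - 1) for x in xs]

assert all(abs(r - 8.0) < 1e-9 for r in power_ratios)
assert exp_ratios[-1] > 1e6        # already huge at x = 19
```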

Let $\Gamma_r$ be the image of the circle $\gamma_r := \{w \in \mathbb{C} : |w| = r, 0 < r < 1\}$ under some conformal mapping of $\mathbb{D}$ onto $G$ and let $M$ be an $N$-function.

The class of functions $f$ analytic in $G$ and satisfying

$$ \sup_{0<r<1} \int_{\Gamma_r} M [|f(z)|] |dz| \le c < \infty $$

---PAGE_BREAK---

with $c$ independent of $r$, will be called the Smirnov-Orlicz class and denoted by $E_M(G)$. In a similar way $E_M(G^{-})$ can be defined. Let

$$\tilde{E}_M (G^{-}) := \{ f \in E_M (G^{-}) : f (\infty) = 0 \}.$$

If $M(x) = M(x,p) := x^p$, $1 < p < \infty$, then the Smirnov-Orlicz class $E_M(G)$ coincides with the usual Smirnov class $E^p(G)$.

Every function in the class $E_M(G)$ has [13] non-tangential boundary values a.e. on $\Gamma$ and the boundary function belongs to $L_M(\Gamma)$.

Let

$$S[f] := \sum_{k=-\infty}^{\infty} c_k e^{ikx}$$

be the Fourier series of a function $f \in L^1(\mathbb{T})$ where $\mathbb{T} := [-\pi, \pi]$, $\int_{\mathbb{T}} f(x) dx = 0$, so that $c_0 = 0$.

For $\alpha > 0$, the $\alpha$-th integral of $f$ is defined by

$$I_{\alpha}(x, f) := \sum_{k \in \mathbb{Z}^{*}} c_k (ik)^{-\alpha} e^{ikx},$$

where

$$(ik)^{-\alpha} := |k|^{-\alpha} e^{(-1/2)\pi i \alpha \operatorname{sign} k} \quad \text{and} \quad \mathbb{Z}^* := \{\pm 1, \pm 2, \pm 3, \dots\}.$$

It is known [24, V. 2, p. 134] that

$$f_{\alpha}(x) := I_{\alpha}(x, f)$$

exists a.e. on $\mathbb{T}$, $f_\alpha \in L^1(\mathbb{T})$ and $S[f_\alpha] = f_\alpha(x)$.
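The multiplier $(ik)^{-\alpha}$ can be sanity-checked on a trigonometric polynomial; a small sketch (helper names are ours): for $f(x) = 2\cos x$, i.e. $c_1 = c_{-1} = 1$, one gets $I_1(x, f) = 2\sin x$, whose derivative indeed returns $f$.

```python
import cmath, math

def ik_pow(k, alpha):
    """The multiplier (ik)^{-alpha} = |k|^{-alpha} e^{-(1/2) pi i alpha sign k}."""
    sign = 1 if k > 0 else -1
    return abs(k) ** (-alpha) * cmath.exp(-0.5j * math.pi * alpha * sign)

def I_alpha(x, coeffs, alpha):
    """Evaluate I_alpha(x, f) for f given by Fourier coefficients c_k, k in Z*."""
    return sum(c * ik_pow(k, alpha) * cmath.exp(1j * k * x)
               for k, c in coeffs.items())

coeffs = {1: 1.0, -1: 1.0}          # f(x) = 2 cos x, c_0 = 0
x = 0.7
val = I_alpha(x, coeffs, 1.0)       # should equal 2 sin x
assert abs(val - 2 * math.sin(x)) < 1e-12
```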

For $\alpha \in (0,1)$ let

$$f^{(\alpha)}(x) := \frac{d}{dx} I_{1-\alpha}(x,f)$$

if the right hand side exists.

We set

$$f^{(\alpha+r)}(x) := \left(f^{(\alpha)}(x)\right)^{(r)} = \frac{d^{r+1}}{dx^{r+1}} I_{1-\alpha}(x,f),$$

where $r \in \mathbb{Z}^+ := \{1, 2, 3, ...\}$.

Throughout this work, $c, c_1, c_2, \dots$ denote constants which may differ at different occurrences.

1.1. Moduli of smoothness of fractional order. Suppose that $x, h \in \mathbb{R} := (-\infty, \infty)$ and $\alpha > 0$. Then, by [16, Theorem 11, p. 135] the series

$$\Delta_h^\alpha f(x) := \sum_{k=0}^\infty (-1)^k C_k^\alpha f(x + (\alpha - k)h), \quad f \in L_M(\mathbb{T}),$$

converges absolutely a.e. on $\mathbb{T}$ [16, p. 135]. Hence $\Delta_h^\alpha f(x)$ is measurable and by [16, Theorem 10, p. 134]

$$\|\Delta_h^\alpha f\|_{L_M(\mathbb{T})} \le C(\alpha) \|f\|_{L_M(\mathbb{T})},$$

---PAGE_BREAK---

with

$$C(\alpha) := \sum_{k=0}^{\infty} |C_k^{\alpha}| < \infty.$$

The quantity $\Delta_h^\alpha f(x)$ will be called the $\alpha$-th difference of $f$ at $x$, with increment $h$. If $\alpha \in \mathbb{Z}^+$, the $\alpha$-th difference above coincides with the usual forward difference. Namely,

$$\Delta_h^\alpha f(x) := \sum_{k=0}^\alpha (-1)^k C_k^\alpha f(x + (\alpha - k)h) = \sum_{k=0}^\alpha (-1)^{\alpha-k} C_k^\alpha f(x + kh),$$

for $\alpha \in \mathbb{Z}^{+}$. For $\alpha > 0$ we define the $\alpha$-th modulus of smoothness of a function $f \in L_M(\mathbb{T})$ as

$$\omega_\alpha (f, \delta)_M := \sup_{|h| \le \delta} \| \Delta_h^\alpha f \|_{L_M(\mathbb{T})}, \quad \omega_0 (f, \delta)_M := \| f \|_{L_M(\mathbb{T})}.$$
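A direct numeric sketch of the fractional difference (helper names are ours); for integer $\alpha$ the binomial coefficients $C_k^\alpha$ vanish for $k > \alpha$, so the series terminates and reproduces the forward difference:

```python
import math

def frac_binom(alpha, k):
    """C_k^alpha = alpha (alpha - 1) ... (alpha - k + 1) / k!"""
    out = 1.0
    for j in range(k):
        out *= (alpha - j) / (j + 1)
    return out

def frac_diff(f, x, h, alpha, terms=200):
    """Truncated Delta_h^alpha f(x) = sum_k (-1)^k C_k^alpha f(x + (alpha - k) h)."""
    return sum((-1) ** k * frac_binom(alpha, k) * f(x + (alpha - k) * h)
               for k in range(terms))

# For alpha = 2 the series terminates: C_k^2 = 0 for k > 2, and
# Delta_h^2 f(x) = f(x + 2h) - 2 f(x + h) + f(x).
f = math.sin
x, h = 0.3, 0.1
d2 = frac_diff(f, x, h, 2.0)
assert abs(d2 - (f(x + 2 * h) - 2 * f(x + h) + f(x))) < 1e-12
```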

**REMARK 1.1.** The modulus of smoothness $\omega_\alpha(f, \delta)_M$ has the following properties.

(i) $\omega_\alpha(f, \delta)_M$ is a non-negative and non-decreasing function of $\delta \ge 0$,

(ii) $\lim_{\delta \to 0^+} \omega_\alpha(f, \delta)_M = 0$,

(iii) $\omega_\alpha(f_1 + f_2, \cdot)_M \le \omega_\alpha(f_1, \cdot)_M + \omega_\alpha(f_2, \cdot)_M$.

Let

$$E_n(f)_M := \inf_{T \in \mathcal{T}_n} \|f - T\|_{L_M(\mathbb{T})}, \quad f \in L_M(\mathbb{T}),$$

where $\mathcal{T}_n$ is the class of trigonometric polynomials of degree not greater than $n \ge 1$.

The proofs of the following direct and inverse theorems are similar to those of the corresponding theorems from [21], where the approximation problems were investigated in the Lebesgue spaces $L^p(\mathbb{T})$, $1 \le p < \infty$.

**THEOREM 1.2.** Let $L_M(\mathbb{T})$ be a reflexive Orlicz space and let $M$ be an $N$-function. Then

$$E_n(f)_M \le C_1 (\alpha) \omega_\alpha (f, 1/n)_M, \quad n = 1, 2, \dots$$

**THEOREM 1.3.** Let $L_M(\mathbb{T})$ be a reflexive Orlicz space and let $M$ be an $N$-function. Then

$$\omega_\alpha (f, 1/n)_M \le \frac{C_2 (\alpha)}{n^\alpha} \sum_{\nu=0}^{n} (\nu + 1)^{\alpha-1} E_\nu (f)_M, \quad n = 1, 2, \dots$$

1.2. Modulus of smoothness of fractional order in Smirnov-Orlicz classes. Let $w = \varphi(z)$ and $w = \varphi_1(z)$ be the conformal mappings of $G^-$ and $G$ onto $\mathbb{D}^-$ normalized by the conditions

$$\varphi(\infty) = \infty, \quad \lim_{z \to \infty} \varphi(z)/z > 0,$$

---PAGE_BREAK---

and

$$ \varphi_1(0) = \infty, \quad \lim_{z \to 0} z\varphi_1(z) > 0, $$

respectively. We denote by $\psi$ and $\psi_1$ the inverses of $\varphi$ and $\varphi_1$, respectively.

Since $\Gamma$ is rectifiable, we have $\varphi' \in E^1(G^-)$ and $\psi' \in E^1(\mathbb{D}^-)$, and hence the functions $\varphi'$ and $\psi'$ admit nontangential limits almost everywhere (a.e.) on $\Gamma$ and on $\mathbb{T}$ respectively, and these functions respectively belong to $L^1(\Gamma)$ and $L^1(\mathbb{T})$ (see, for example [7, p. 419]).

Let $f \in L^1(\Gamma)$. Then the functions $f^+$ and $f^-$ defined by

$$ f^+(z) = \frac{1}{2\pi i} \int_{\Gamma} \frac{f(\zeta)}{\zeta - z} d\zeta, \quad z \in G, $$

$$ f^{-}(z) = \frac{1}{2\pi i} \int_{\Gamma} \frac{f(\zeta)}{\zeta - z} d\zeta, \quad z \in G^{-}, $$

are analytic in $G$ and $G^{-}$, respectively, and $f^{-}(\infty) = 0$.

Let $h$ be a function continuous on $\mathbb{T}$. Its modulus of continuity is defined by

$$ \omega(t, h) := \sup\{|h(t_1) - h(t_2)| : t_1, t_2 \in \mathbb{T}, |t_1 - t_2| \le t\}, \quad t \ge 0. $$

The function $h$ is called Dini-continuous if

$$ \int_0^c \frac{\omega(t,h)}{t} dt < \infty, \quad c > 0. $$

A curve $\Gamma$ is called Dini-smooth [17, p. 48] if it has a parametrization

$$ \Gamma: \varphi_0(\tau), \quad \tau \in \mathbb{T} $$

such that $\varphi'_0(\tau)$ is Dini-continuous and $\varphi'_0(\tau) \neq 0$.

If $\Gamma$ is Dini-smooth, then [23]

$$ (1.1) \quad 0 < c_3 < |\psi'(w)| < c_4 < \infty, \quad 0 < c_5 < |\varphi'(z)| < c_6 < \infty, $$

where the constants $c_3$, $c_4$ and $c_5$, $c_6$ are independent of $|w| \ge 1$ and $z \in G^{-}$, respectively.

Let $\Gamma$ be a Dini-smooth curve and, for $f \in L_M(\Gamma)$, let $f_0 := f \circ \psi$ and $f_1 := f \circ \psi_1$. Then from (1.1) we have $f_0, f_1 \in L_M(\mathbb{T})$. Using the nontangential boundary values of $f_0^+$ and $f_1^+$ on $\mathbb{T}$ we define

$$ \omega_{\alpha,\Gamma}(f,\delta)_M := \omega_\alpha(f_0^+, \delta)_M, \quad \delta > 0 $$

$$ \tilde{\omega}_{\alpha,\Gamma}(f,\delta)_M := \omega_\alpha(f_1^+, \delta)_M, \quad \delta > 0 $$

for $\alpha > 0$.

We set

$$ E_n(f,G)_M := \inf_{P \in \mathcal{P}_n} \|f-P\|_{L_M(\Gamma)}, \quad \tilde{E}_n(g,G^{-})_M := \inf_{R \in \mathcal{R}_n} \|g-R\|_{L_M(\Gamma)}, $$

---PAGE_BREAK---

where $f \in E_M(G)$, $g \in E_M(G^{-})$, $\mathcal{P}_n$ is the set of algebraic polynomials of degree not greater than $n$ and $\mathcal{R}_n$ is the set of rational functions of the form

$$ \sum_{k=0}^{n} \frac{a_k}{z^k}. $$

Let $\Gamma$ be a rectifiable Jordan curve, $f \in L^1(\Gamma)$ and let

$$ (S_{\Gamma} f)(t) := \lim_{\epsilon \to 0} \frac{1}{2\pi i} \int_{\Gamma \setminus \Gamma(t, \epsilon)} \frac{f(\zeta)}{\zeta - t} d\zeta, \quad t \in \Gamma $$

be the Cauchy singular integral of $f$ at the point $t$. The linear operator $S_\Gamma$, $f \mapsto S_\Gamma f$, is called the Cauchy singular operator.

If one of the functions $f^+$ or $f^-$ has non-tangential limits a.e. on $\Gamma$, then $S_\Gamma f(z)$ exists a.e. on $\Gamma$ and the other one also has non-tangential limits a.e. on $\Gamma$. Conversely, if $S_\Gamma f(z)$ exists a.e. on $\Gamma$, then both functions $f^+$ and $f^-$ have non-tangential limits a.e. on $\Gamma$. In both cases, the formulae

$$ (1.2) \qquad f^{+}(z) = (S_{\Gamma}f)(z) + f(z)/2, \qquad f^{-}(z) = (S_{\Gamma}f)(z) - f(z)/2, $$

and hence

$$ (1.3) \qquad f = f^{+} - f^{-} $$

hold a.e. on $\Gamma$ (see, e.g., [7, p. 431]).
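Formula (1.3) can be illustrated numerically on the unit circle; a sketch with our own helper, for the Laurent polynomial $f(z) = z + 1/z$, whose Cauchy integrals are $f^+(z) = z$ inside and $f^-(z) = -1/z$ outside, so that $f = f^+ - f^-$ on $\mathbb{T}$:

```python
import cmath, math

def cauchy(f, z, n=20000):
    """(1/(2 pi i)) * integral over |zeta| = 1 of f(zeta)/(zeta - z) dzeta,
    by the midpoint rule on the parametrization zeta = e^{it}."""
    total = 0.0 + 0.0j
    for j in range(n):
        t = 2 * math.pi * (j + 0.5) / n
        zeta = cmath.exp(1j * t)
        dzeta = 1j * zeta * (2 * math.pi / n)
        total += f(zeta) / (zeta - z) * dzeta
    return total / (2j * math.pi)

f = lambda z: z + 1 / z
z_in, z_out = 0.3 + 0.2j, 1.7 - 0.4j
assert abs(cauchy(f, z_in) - z_in) < 1e-6           # f^+(z) = z in G
assert abs(cauchy(f, z_out) - (-1 / z_out)) < 1e-6  # f^-(z) = -1/z in G^-
```

Note that $f^+ - f^-$ recovers $z + 1/z$, as (1.3) predicts.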

In this work we investigate the approximation problems in the Smirnov-Orlicz spaces in terms of the fractional modulus of smoothness. We prove direct and inverse theorems in these spaces and obtain, in particular, a constructive description of the Lipschitz classes of functions defined by the fractional order modulus of smoothness.

In the spaces $L^p(\mathbb{T})$, $1 \le p < \infty$, these problems were studied in the works [21] and [3].

In terms of the usual modulus of smoothness, these problems in the Lebesgue and Smirnov spaces defined on complex domains under various boundary conditions were investigated by Walsh-Russell [22], Al'per [1], Kokilashvili [14, 15], Andersson [2], Israfilov [9, 10, 11], Cavus-Israfilov [4] and other mathematicians.

## 2. MAIN RESULTS

The following direct theorem holds.

**THEOREM 2.1.** Let $\Gamma$ be a Dini-smooth curve and $L_M(\Gamma)$ be a reflexive Orlicz space on $\Gamma$. If $\alpha > 0$ and $f \in L_M(\Gamma)$ then for any $n = 1, 2, 3, ...$ there is a constant $c_7 > 0$ such that

$$ \|f - R_n(\cdot, f)\|_{L_M(\Gamma)} \le c_7 \{\omega_{\alpha,\Gamma}(f, 1/n)_M + \tilde{\omega}_{\alpha,\Gamma}(f, 1/n)_M\}, $$

where $R_n(\cdot, f)$ is the $n$th partial sum of the Faber-Laurent series of $f$.

---PAGE_BREAK---

From this theorem we have the following corollaries.
**COROLLARY 2.2.** Let $G$ be a finite, simply connected domain with a Dini-smooth boundary $\Gamma$ and let $L_M(\Gamma)$ be a reflexive Orlicz space on $\Gamma$. If $\alpha > 0$ and $S_n(f, \cdot) := \sum_{k=0}^n a_k \Phi_k$ is the $n$th partial sum of the Faber expansion of $f \in E_M(G)$, then for every $n = 1, 2, 3, ...$

$$ \| f - S_n (f, \cdot) \|_{L_M(\Gamma)} \le c_8 \omega_{\alpha, \Gamma} (f, 1/n)_M, $$

with some constant $c_8 > 0$ independent of $n$.

**COROLLARY 2.3.** Let $\Gamma$ be a Dini-smooth curve. If $\alpha > 0$ and $f \in \tilde{E}_M(G^{-})$, then for every $n = 1, 2, 3, ...$ there is a constant $c_9 > 0$ such that

$$ \| f - R_n (\cdot, f) \|_{L_M(\Gamma)} \le c_9 \tilde{\omega}_{\alpha, \Gamma} (f, 1/n)_M, $$

where $R_n(\cdot, f)$ is as in Theorem 2.1.

The following inverse theorem holds.

**THEOREM 2.4.** Let $G$ be a finite, simply connected domain with a Dini-smooth boundary $\Gamma$ and let $L_M(\Gamma)$ be a reflexive Orlicz space on $\Gamma$. If $\alpha > 0$ and $f \in E_M(G)$, then

$$ \omega_{\alpha, \Gamma} (f, 1/n)_M \leq \frac{c_{10}}{n^{\alpha}} \sum_{k=0}^{n} (k+1)^{\alpha-1} E_k (f, G)_M, \quad n=1,2,\dots $$

with a constant $c_{10} > 0$ depending only on $M$ and $\alpha$.

**COROLLARY 2.5.** Under the conditions of Theorem 2.4, if

$$ E_n (f, G)_M = \mathcal{O} (n^{-\sigma}), \quad \sigma > 0, \quad n = 1, 2, 3, \dots, $$

then for $f \in E_M(G)$ and $\alpha > 0$

$$ \omega_{\alpha, \Gamma} (f, \delta)_M = \begin{cases} \mathcal{O}(\delta^{\sigma}) & , \alpha > \sigma; \\ \mathcal{O}(\delta^{\sigma} |\log \frac{1}{\delta}|) & , \alpha = \sigma; \\ \mathcal{O}(\delta^{\alpha}) & , \alpha < \sigma. \end{cases} $$
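The case distinction comes from inserting $E_k(f,G)_M = \mathcal{O}(k^{-\sigma})$ into Theorem 2.4 and estimating the resulting sum; a sketch of the computation (our own, for the reader's convenience):

```latex
% insert E_k(f,G)_M = O((k+1)^{-\sigma}) into Theorem 2.4:
\omega_{\alpha,\Gamma}(f,1/n)_M
  \le \frac{c_{10}}{n^{\alpha}} \sum_{k=0}^{n} (k+1)^{\alpha-1-\sigma}
  = \begin{cases}
      \mathcal{O}(n^{-\alpha}\cdot n^{\alpha-\sigma}) = \mathcal{O}(n^{-\sigma}), & \alpha > \sigma,\\
      \mathcal{O}(n^{-\alpha}\log n),                                             & \alpha = \sigma,\\
      \mathcal{O}(n^{-\alpha}),                                                   & \alpha < \sigma,
    \end{cases}
```

since $\sum_{k=0}^{n}(k+1)^{\alpha-1-\sigma}$ is of order $n^{\alpha-\sigma}$ for $\alpha > \sigma$, of order $\log n$ for $\alpha = \sigma$, and bounded for $\alpha < \sigma$; taking $\delta \sim 1/n$ gives the three stated orders.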
**DEFINITION 2.6.** For $0 < \sigma < \alpha$ we set

$$
\operatorname{Lip}^* \sigma(\alpha, M) := \{f \in E_M(G) : \omega_{\alpha, \Gamma}(f, \delta)_M = \mathcal{O}(\delta^\sigma), \delta > 0\},
$$

$$
\widetilde{\operatorname{Lip}}\sigma(\alpha, M) := \{f \in \tilde{E}_M(G^{-}) : \tilde{\omega}_{\alpha, \Gamma}(f, \delta)_M = \mathcal{O}(\delta^{\sigma}), \delta > 0\}.
$$
COROLLARY 2.7. Under the conditions of Theorem 2.4, if $0 < \sigma < \alpha$ and

$$
E_n (f, G)_M = \mathcal{O} (n^{-\sigma}), \quad n = 1, 2, 3, \dots,
$$

then $f \in \operatorname{Lip}^* \sigma(\alpha, M)$.

COROLLARY 2.8. Let $0 < \sigma < \alpha$ and let the conditions of Theorem 2.4 be fulfilled. Then the following conditions are equivalent:

(a) $f \in \operatorname{Lip}^* \sigma(\alpha, M)$;

(b) $E_n(f,G)_M = \mathcal{O}(n^{-\sigma})$, $n = 1, 2, 3, \dots$
Similar results hold also in the class $\tilde{E}_M(G^-)$.

**THEOREM 2.9.** Let $\Gamma$ be a Dini-smooth curve and $L_M(\mathbb{T})$ be a reflexive Orlicz space. If $\alpha > 0$ and $f \in \tilde{E}_M(G^-)$, then

$$ \tilde{\omega}_{\alpha, \Gamma} (f, 1/n)_M \le \frac{c_{11}}{n^\alpha} \sum_{k=0}^{n} (k+1)^{\alpha-1} \tilde{E}_k (f, G^{-})_M, \quad n = 1, 2, 3, \dots, $$

with a constant $c_{11} > 0$.
**COROLLARY 2.10.** Under the conditions of Theorem 2.9, if

$$ \tilde{E}_n (f, G^{-})_M = \mathcal{O} (n^{-\sigma}), \quad \sigma > 0, \quad n = 1, 2, 3, \dots, $$

then for $f \in \tilde{E}_M(G^-)$ and $\alpha > 0$

$$ \tilde{\omega}_{\alpha, \Gamma}(f, \delta)_M = \begin{cases} \mathcal{O}(\delta^{\sigma}), & \alpha > \sigma; \\ \mathcal{O}\left(\delta^{\sigma} \left|\log \tfrac{1}{\delta}\right|\right), & \alpha = \sigma; \\ \mathcal{O}(\delta^{\alpha}), & \alpha < \sigma. \end{cases} $$
**COROLLARY 2.11.** Under the conditions of Theorem 2.9, if $0 < \sigma < \alpha$ and

$$ \tilde{E}_n (f, G^{-})_M = \mathcal{O} (n^{-\sigma}), \quad n = 1, 2, 3, \dots, $$

then $f \in \widetilde{\operatorname{Lip}}\sigma(\alpha, M)$.

**COROLLARY 2.12.** Let $0 < \sigma < \alpha$ and let the conditions of Theorem 2.9 be fulfilled. Then the following conditions are equivalent:

(a) $f \in \widetilde{\operatorname{Lip}}\sigma(\alpha, M)$;

(b) $\tilde{E}_n(f, G^{-})_M = \mathcal{O}(n^{-\sigma})$, $n = 1, 2, 3, \dots$
## 2.1. Some auxiliary results

**LEMMA 2.13.** Let $L_M(\mathbb{T})$ be a reflexive Orlicz space. Then $f^+ \in E_M(\mathbb{D})$ and $f^- \in E_M(\mathbb{D}^-)$ for every $f \in L_M(\mathbb{T})$.

**PROOF.** We claim that for every $f \in L_M(\mathbb{T})$ there exists a $p \in (1, \infty)$ such that $f \in L^p(\mathbb{T})$. Indeed, by Corollaries 4 and 5 of [18, p. 26] there exist some $x_0$, $c_{12} > 0$ and $p > 1$ such that

$$ (2.1) \qquad c_{13}^p |f|^p \le \frac{1}{c_{12}} M(c_{13} |f|) $$

holds for $|f| \ge x_0$ and some $c_{13} > 0$.
Hence, using

$$ \int_{\mathbb{T}} |f(z)|^p |dz| = \int_{\Gamma_0} |f(z)|^p |dz| + \int_{\mathbb{T} \setminus \Gamma_0} |f(z)|^p |dz| $$

with $\Gamma_0 := \{z \in \mathbb{T} : |f| \ge x_0\}$, from (2.1) we get that

$$
\begin{align*}
\int_{\mathbb{T}} |f(z)|^p |dz| &\le \frac{1}{c_{12} c_{13}^p} \int_{\Gamma_0} M(c_{13} |f(z)|) |dz| + \int_{\mathbb{T} \setminus \Gamma_0} |f(z)|^p |dz| \\
&\le c_{14} \int_{\mathbb{T}} M(c_{13} |f(z)|) |dz| + x_0^p \operatorname{mes}(\mathbb{T} \setminus \Gamma_0) < \infty,
\end{align*}
$$

and therefore $f \in L^p(\mathbb{T})$. Since $1<p<\infty$, this implies [8] that $f^+ \in E^p(\mathbb{D})$, $f^- \in E^p(\mathbb{D}^-)$, and hence $f^+ \in E^1(\mathbb{D})$, $f^- \in E^1(\mathbb{D}^-)$.
Since $f^+ \in E^1(\mathbb{D})$, it can be represented by the Poisson integral of its boundary function. Hence, taking $z := re^{ix}$ $(0 < r < 1)$, we have

$$
M [|f^+(z)|] = M \left[ \frac{1}{2\pi} \left| \int_0^{2\pi} f^+(e^{iy}) P_r(x-y) dy \right| \right].
$$

Now, using Jensen's integral inequality [24, Vol. I, p. 24] we get

$$
\begin{align*}
M [|f^+(z)|] &\le M \left[ \frac{\displaystyle \int_0^{2\pi} |f^+(e^{iy})| P_r (x-y) dy}{\displaystyle \int_0^{2\pi} P_r (x-y) dy} \right] \\
&\le \frac{1}{2\pi} \int_0^{2\pi} M [|f^+(e^{iy})|] P_r (x-y) dy,
\end{align*}
$$

and therefore

$$
\begin{align*}
\int_{\gamma_r} M [|f^+(z)|] |dz|
&\leq \int_{\gamma_r} \frac{1}{2\pi} \int_0^{2\pi} M [|f^+(e^{iy})|] P_r (x-y) dy\, |dz| \\
&= \int_0^{2\pi} \frac{1}{2\pi} \int_0^{2\pi} M [|f^+(e^{iy})|] P_r (x-y) dy\, r dx \\
&= \int_0^{2\pi} M [|f^+(e^{iy})|] \left\{ \frac{1}{2\pi} \int_0^{2\pi} P_r (x-y) dx \right\} r dy \\
&= \int_0^{2\pi} M [|f^+(e^{iy})|] r dy < \int_0^{2\pi} M [|f^+(e^{ix})|] dx.
\end{align*}
$$
Taking into account the relation

$$
f^{+}(e^{ix}) = (1/2)f(e^{ix}) + (S_{\mathbb{T}}f)(e^{ix}) = (1/2)\{f(e^{ix}) + 2(S_{\mathbb{T}}f)(e^{ix})\},
$$
we have

$$
\begin{align*}
M [|f^+(e^{ix})|] &= M \left[ \frac{1}{2} |f(e^{ix}) + 2(S_{\mathbb{T}}f)(e^{ix})| \right] \\
&\le M \left[ \frac{1}{2} \{|f(e^{ix})| + 2|(S_{\mathbb{T}}f)(e^{ix})|\} \right] \\
&\le \frac{1}{2} \{M [|f(e^{ix})|] + M [2|(S_{\mathbb{T}}f)(e^{ix})|]\} \\
&\le \frac{1}{2} \{M [|f(e^{ix})|] + M [2x_0] + c_{15}M [|(S_{\mathbb{T}}f)(e^{ix})|]\}
\end{align*}
$$

for some $x_0 > 0$, and hence
$$
\begin{align*}
\int_{\gamma_r} M [|f^+(z)|] |dz|
&< \frac{1}{2} \int_0^{2\pi} \{M [|f(e^{ix})|] + M[2x_0] + c_{16}M [|(S_{\mathbb{T}} f)(e^{ix})|]\} dx \\
&= \frac{1}{2} \int_0^{2\pi} M [|f(e^{ix})|] dx + c_{17} \int_0^{2\pi} M [|(S_{\mathbb{T}} f)(e^{ix})|] dx + M[2x_0]\pi.
\end{align*}
$$

On the other hand, by [19],

$$
\| S_{\mathbb{T}} f \|_{L_M(\mathbb{T})} \le c_{18} \| f \|_{L_M(\mathbb{T})},
$$

which implies that

$$
\int_0^{2\pi} M [|(S_{\mathbb{T}} f)(e^{ix})|] dx \le c_{19} < \infty,
$$

and then

$$
\begin{align*}
\int_{\gamma_r} M [|f^+(z)|] |dz| &< \frac{1}{2} \int_0^{2\pi} M [|f(e^{ix})|] dx + c_{20} \\
&= c_{21} (1/2) \int_{\mathbb{T}} M [|f(w)|] |dw| + c_{20} < \infty.
\end{align*}
$$

Finally, we have $f^+ \in E_M(\mathbb{D})$. A similar result also holds for $f^-$. $\square$

Using Theorem 1.2 and the method applied in the proof of the similar result in [4], we have
LEMMA 2.14. Let an $N$-function $M$ and its complementary function both satisfy the $\Delta_2$ condition. Then for every $\alpha > 0$ there exists a constant $c_{22} > 0$ such that for every $n = 1, 2, 3, \dots$

$$ \left\| g(w) - \sum_{k=0}^{n} \alpha_k w^k \right\|_{L_M(\mathbb{T})} \le c_{22} \omega_\alpha (g, 1/n)_M, $$

where $\alpha_k$ $(k = 0, 1, 2, 3, \dots)$ are the Taylor coefficients of $g \in E_M(\mathbb{D})$ at the origin.
We know [20, pp. 52, 255] that

$$ \frac{\psi'(w)}{\psi(w)-z} = \sum_{k=0}^{\infty} \frac{\Phi_k(z)}{w^{k+1}}, \quad z \in G, \ w \in \mathbb{D}^{-}, $$

and

$$ \frac{\psi_1'(w)}{\psi_1(w)-z} = \sum_{k=1}^{\infty} \frac{F_k(1/z)}{w^{k+1}}, \quad z \in G^{-}, \ w \in \mathbb{D}^{-}, $$

where $\Phi_k(z)$ and $F_k(1/z)$ are the *Faber polynomials* of degree $k$ with respect to $z$ and $1/z$ for the continua $\bar{G}$ and $\bar{\mathbb{C}} \setminus G$, with the integral representations [20, pp. 35, 255]

$$ \Phi_k(z) = \frac{1}{2\pi i} \int_{|w|=R} \frac{w^k \psi'(w)}{\psi(w)-z} dw, \quad z \in G, \ R > 1, $$

$$ F_k(1/z) = \frac{1}{2\pi i} \int_{|w|=1} \frac{w^k \psi'_1(w)}{\psi_1(w)-z} dw, \quad z \in G^{-}, $$

and

$$ (2.2) \qquad \Phi_k(z) = \varphi^k(z) + \frac{1}{2\pi i} \int_{\Gamma} \frac{\varphi^k(\zeta)}{\zeta-z} d\zeta, \quad z \in G^{-}, \ k = 0, 1, 2, \dots, $$

$$ (2.3) \qquad F_k(1/z) = \varphi_1^k(z) - \frac{1}{2\pi i} \int_{\Gamma} \frac{\varphi_1^k(\zeta)}{\zeta-z} d\zeta, \quad z \in G \setminus \{0\}. $$
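As a quick numerical sanity check of the generating-function identity above, one can take a concrete exterior map. The sketch below uses the Joukowski map $\psi(w) = (w + 1/w)/2$, which maps $|w| > 1$ conformally onto the exterior of the segment $[-1, 1]$; for this continuum the Faber polynomials are classically $\Phi_0 = 1$ and $\Phi_k = 2T_k$ (Chebyshev) for $k \ge 1$. The map and this closed form are illustrative choices, not taken from the paper.

```python
import numpy as np

# Check numerically that, for the Joukowski map psi(w) = (w + 1/w)/2,
#     psi'(w) / (psi(w) - z) = sum_{k>=0} Phi_k(z) / w^{k+1},   |w| > 1,
# where Phi_0 = 1 and Phi_k = 2*T_k (Chebyshev) for k >= 1 are the
# Faber polynomials of the segment [-1, 1].

def faber_segment(k, z):
    """Faber polynomial of degree k for the segment [-1, 1]."""
    if k == 0:
        return 1.0
    return 2.0 * np.polynomial.chebyshev.Chebyshev.basis(k)(z)

def generating_function_sides(z, w, n_terms=60):
    """Return (closed-form left side, truncated series) at a point (z, w)."""
    psi = (w + 1.0 / w) / 2.0          # Joukowski map
    dpsi = (1.0 - 1.0 / w**2) / 2.0    # its derivative
    lhs = dpsi / (psi - z)
    rhs = sum(faber_segment(k, z) / w**(k + 1) for k in range(n_terms))
    return lhs, rhs

lhs, rhs = generating_function_sides(z=0.3, w=2.0)
```

Since the series converges geometrically in $|w|$, a few dozen terms already match the closed form to machine precision; for a general domain one would instead compute the $\Phi_k$ from the Laurent coefficients of $\psi$.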
We put

$$ a_k := a_k(f) := \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{f_0(w)}{w^{k+1}} dw, \quad k = 0, 1, 2, \dots, $$

$$ \tilde{a}_k := \tilde{a}_k(f) := \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{f_1(w)}{w^{k+1}} dw, \quad k = 1, 2, \dots, $$

and associate the series

$$
\sum_{k=0}^{\infty} a_k \Phi_k(z) + \sum_{k=1}^{\infty} \tilde{a}_k F_k(1/z)
$$

with the function $f \in L^1(\Gamma)$, i.e.,

$$
f(z) \sim \sum_{k=0}^{\infty} a_k \Phi_k(z) + \sum_{k=1}^{\infty} \tilde{a}_k F_k(1/z).
$$

This series is called the *Faber-Laurent series* of the function $f$, and the coefficients $a_k$ and $\tilde{a}_k$ are said to be the *Faber-Laurent coefficients* of $f$.
Let $\mathcal{P}$ be the set of all polynomials (with no restrictions on the degree), and let $\mathcal{P}(\mathbb{D})$ be the set of traces of members of $\mathcal{P}$ on $\mathbb{D}$.

We define two operators $T : \mathcal{P}(\mathbb{D}) \to E_M(G)$ and $\tilde{T} : \mathcal{P}(\mathbb{D}) \to \tilde{E}_M(G^-)$ as

$$
T(P)(z) := \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{P(w) \psi'(w)}{\psi(w) - z} dw, \quad z \in G,
$$

$$
\tilde{T}(P)(z) := \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{P(w) \psi_1'(w)}{\psi_1(w) - z} dw, \quad z \in G^{-}.
$$

It is readily seen that

$$
T \left( \sum_{k=0}^{n} b_k w^k \right) = \sum_{k=0}^{n} b_k \Phi_k (z) \quad \text{and} \quad \tilde{T} \left( \sum_{k=0}^{n} d_k w^k \right) = \sum_{k=0}^{n} d_k F_k (1/z).
$$
If $z' \in G$, then

$$
T(P)(z') = \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{P(w) \psi'(w)}{\psi(w) - z'} dw = \frac{1}{2\pi i} \int_{\Gamma} \frac{(P \circ \varphi)(\zeta)}{\zeta - z'} d\zeta = (P \circ \varphi)^+ (z'),
$$

which, by (1.2), implies that

$$
T(P)(z) = S_{\Gamma}(P \circ \varphi)(z) + (1/2)(P \circ \varphi)(z)
$$

a.e. on $\Gamma$.
Similarly, taking the limit $z'' \to z \in \Gamma$ over all nontangential paths outside $\Gamma$ in the relation

$$
\tilde{T}(P)(z'') = \frac{1}{2\pi i} \int_{\Gamma} \frac{P(\varphi_1(\varsigma))}{\varsigma - z''} d\varsigma = [(P \circ \varphi_1)]^{-}(z''), \quad z'' \in G^{-},
$$

we get

$$
\tilde{T}(P)(z) = -(1/2)(P \circ \varphi_1)(z) + S_{\Gamma}(P \circ \varphi_1)(z)
$$

a.e. on $\Gamma$.

By virtue of the Hahn-Banach theorem, we can extend the operators $T$ and $\tilde{T}$ from $\mathcal{P}(\mathbb{D})$ to the space $E_M(\mathbb{D})$ as linear bounded operators.
Then for these extensions $T: E_M(\mathbb{D}) \to E_M(G)$ and $\tilde{T}: E_M(\mathbb{D}) \to \tilde{E}_M(G^-)$ we have the representations

$$
T(g)(z) = \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{g(w) \psi'(w)}{\psi(w) - z} dw, \quad z \in G, \ g \in E_M(\mathbb{D}),
$$

$$
\tilde{T}(g)(z) = \frac{1}{2\pi i} \int_{\mathbb{T}} \frac{g(w) \psi_1'(w)}{\psi_1(w) - z} dw, \quad z \in G^{-}, \ g \in E_M(\mathbb{D}).
$$

The following lemma is a special case of Theorem 2.4 of [12].

LEMMA 2.15. If $\Gamma$ is a Dini-smooth curve and $E_M(G)$ is a reflexive Smirnov-Orlicz class, then the operators

$T : E_M(\mathbb{D}) \to E_M(G)$ and $\tilde{T} : E_M(\mathbb{D}) \to \tilde{E}_M(G^-)$

are one-to-one and onto.
3. PROOFS OF THE RESULTS

PROOF OF THEOREM 2.1. Since $f(z) = f^+(z) - f^-(z)$ a.e. on $\Gamma$, considering the rational function

$$
R_n (z, f) := \sum_{k=0}^{n} a_k \Phi_k (z) + \sum_{k=1}^{n} \tilde{a}_k F_k (1/z),
$$

it is enough to prove the inequalities

$$
(3.1) \qquad \left\| f^{-}(z) + \sum_{k=1}^{n} \tilde{a}_{k} F_{k}\left(\frac{1}{z}\right) \right\|_{L_{M}(\Gamma)} \leq c_{23} \tilde{\omega}_{\alpha,\Gamma}(f, 1/n)_{M}
$$

and

$$
(3.2) \qquad \left\| f^{+}(z) - \sum_{k=0}^{n} a_k \Phi_k(z) \right\|_{L_M(\Gamma)} \le c_{24} \omega_{\alpha, \Gamma}(f, 1/n)_M.
$$
Let $f \in L_M(\Gamma)$. Then $f_1, f_0 \in L_M(\mathbb{T})$. We take $z' \in G \setminus \{0\}$. Using (2.3) and

$$
(3.3) \qquad f(\varsigma) = f_1^+(\varphi_1(\varsigma)) - f_1^-(\varphi_1(\varsigma)) \quad \text{a.e. on } \Gamma,
$$

we obtain that

$$
\begin{align*}
\sum_{k=1}^{n} \tilde{a}_k F_k (1/z') &= \sum_{k=1}^{n} \tilde{a}_k \varphi_1^k (z') - \frac{1}{2\pi i} \int_{\Gamma} \frac{\left( \sum_{k=1}^{n} \tilde{a}_k \varphi_1^k (\varsigma) - f_1^+ (\varphi_1 (\varsigma)) \right)}{\varsigma - z'} d\varsigma \\
&\qquad - f_1^- (\varphi_1 (z')) - f^- (z').
\end{align*}
$$
Taking the limit as $z' \to z$ along all non-tangential paths inside $\Gamma$, we obtain

$$
\begin{align*}
\sum_{k=1}^{n} \tilde{a}_k F_k (1/z) &= \sum_{k=1}^{n} \tilde{a}_k \varphi_1^k(z) - \frac{1}{2} \left( \sum_{k=1}^{n} \tilde{a}_k \varphi_1^k(z) - f_1^+( \varphi_1(z) ) \right) \\
&\qquad - S_\Gamma \left[ \sum_{k=1}^{n} \tilde{a}_k \varphi_1^k - (f_1^+ \circ \varphi_1) \right](z) - f_1^-( \varphi_1(z) ) - f^{-}(z)
\end{align*}
$$

a.e. on $\Gamma$.
Using (1.3), (3.3), Minkowski's inequality and the boundedness of $S_\Gamma$, we get

$$
\begin{align*}
\left\| f^{-}(z) + \sum_{k=1}^{n} \tilde{a}_{k} F_{k} \left( \frac{1}{z} \right) \right\|_{L_{M}(\Gamma)} &= \left\| \frac{1}{2} \left( \sum_{k=1}^{n} \tilde{a}_{k} \varphi_{1}^{k}(z) - f_{1}^{+}(\varphi_{1}(z)) \right) - S_{\Gamma} \left[ \sum_{k=1}^{n} \tilde{a}_{k} \varphi_{1}^{k} - (f_{1}^{+} \circ \varphi_{1}) \right] (z) \right\|_{L_{M}(\Gamma)} \\
&\le c_{25} \left\| \sum_{k=1}^{n} \tilde{a}_{k} \varphi_{1}^{k}(z) - f_{1}^{+}(\varphi_{1}(z)) \right\|_{L_{M}(\Gamma)} \le c_{26} \left\| f_{1}^{+}(w) - \sum_{k=1}^{n} \tilde{a}_{k} w^{k} \right\|_{L_{M}(\mathbb{T})}.
\end{align*}
$$
On the other hand, the Faber-Laurent coefficients $\tilde{a}_k$ of the function $f$ and the Taylor coefficients of the function $f_1^+$ at the origin coincide. Then, taking Lemma 2.14 into account, we conclude that

$$
\left\| f^{-} + \sum_{k=1}^{n} \tilde{a}_k F_k (1/z) \right\|_{L_M(\Gamma)} \le c_{27} \tilde{\omega}_{\alpha, \Gamma} (f, 1/n)_M,
$$

and (3.1) is proved.

The proof of relation (3.2) goes similarly; we use the relations (2.2) and

$$
f(\zeta) = f_0^+(\varphi(\zeta)) - f_0^-(\varphi(\zeta)) \quad \text{a.e. on } \Gamma
$$

instead of (2.3) and (3.3), respectively.
PROOF OF THEOREM 2.4. Let $f \in E_M(G)$. Then we have $T(f_0^+) = f$. Since the operator $T: E_M(\mathbb{D}) \to E_M(G)$ is linear, bounded, one-to-one and onto, the operator $T^{-1}: E_M(G) \to E_M(\mathbb{D})$ is linear and bounded. We take $p_n^* \in \mathcal{P}_n$ as the best approximating algebraic polynomial to $f$ in $E_M(G)$. Then $T^{-1}(p_n^*) \in \mathcal{P}_n(\mathbb{D})$ and therefore

$$
\begin{aligned}
E_n(f_0^+)_M &\le \|f_0^+ - T^{-1}(p_n^*)\|_{L_M(\mathbb{T})} = \|T^{-1}(f) - T^{-1}(p_n^*)\|_{L_M(\mathbb{T})} \\
&= \|T^{-1}(f - p_n^*)\|_{L_M(\mathbb{T})} \le \|T^{-1}\| \|f - p_n^*\|_{L_M(\Gamma)} = \|T^{-1}\| E_n(f, G)_M,
\end{aligned}
\tag{3.4}
$$
because the operator $T^{-1}$ is bounded. From (3.4) we have

$$
\begin{aligned}
\omega_{\alpha, \Gamma} (f, 1/n)_M &= \omega_\alpha (f_0^+, 1/n)_M \le \frac{c_{28}}{n^\alpha} \sum_{k=0}^{n} (k+1)^{\alpha-1} E_k (f_0^+)_M \\
&\le \frac{c_{28} \|T^{-1}\|}{n^\alpha} \sum_{k=0}^{n} (k+1)^{\alpha-1} E_k (f, G)_M, \quad \alpha > 0, \ n = 1, 2, \dots,
\end{aligned}
$$

and the proof is completed.
PROOF OF THEOREM 2.9. Let $f \in \tilde{E}_M(G^-)$. Then $\tilde{T}(f_1^+) = f$. By Lemma 2.15 the operator $\tilde{T}^{-1}: \tilde{E}_M(G^-) \to E_M(\mathbb{D})$ is linear and bounded. Let $r_n^* \in R_n$ be a function such that $\tilde{E}_n(f, G^-)_M = \|f - r_n^*\|_{L_M(\Gamma)}$. Then $\tilde{T}^{-1}(r_n^*) \in \mathcal{P}_n(\mathbb{D})$ and therefore

$$
\begin{aligned}
E_n (f_1^+)_M &\le \| f_1^+ - \tilde{T}^{-1} (r_n^*) \|_{L_M(\mathbb{T})} = \| \tilde{T}^{-1} (f) - \tilde{T}^{-1} (r_n^*) \|_{L_M(\mathbb{T})} \\
&= \| \tilde{T}^{-1} (f - r_n^*) \|_{L_M(\mathbb{T})} \le \| \tilde{T}^{-1} \| \| f - r_n^* \|_{L_M(\Gamma)} = \| \tilde{T}^{-1} \| \tilde{E}_n (f, G^-)_M.
\end{aligned}
\tag{3.5}
$$

From (3.5) we conclude

$$
\begin{aligned}
\tilde{\omega}_{\alpha, \Gamma} (f, 1/n)_M &= \omega_{\alpha} (f_1^+, 1/n)_M \le \frac{c_{29}}{n^{\alpha}} \sum_{k=0}^{n} (k+1)^{\alpha-1} E_k (f_1^+)_M \\
&\le \frac{c_{29} \| \tilde{T}^{-1} \|}{n^{\alpha}} \sum_{k=0}^{n} (k+1)^{\alpha-1} \tilde{E}_k (f, G^-)_M, \quad \alpha > 0, \ n = 1, 2, \dots,
\end{aligned}
$$

which is the required result.
ACKNOWLEDGEMENTS.

The authors are indebted to the referees for constructive discussions on the results obtained in this paper.
REFERENCES

[1] S. Ya. Al'per, *Approximation in the mean of analytic functions of class E^p*, in: Investigations on the Modern Problems of the Function Theory of a Complex Variable, Gos. Izdat. Fiz.-Mat. Lit., Moscow, 1960, 272-286 (in Russian).

[2] J. E. Andersson, *On the degree of polynomial approximation in E^p(D)*, J. Approx. Theory **19** (1977), 61-68.

[3] P. L. Butzer, H. Dyckhoff, E. Görlich and R. L. Stens, *Best trigonometric approximation, fractional derivatives and Lipschitz classes*, Can. J. Math. **29** (1977), 781-793.

[4] A. Çavuş and D. M. Israfilov, *Approximation by Faber-Laurent rational functions in the mean of functions of the class L_p(Γ) with 1 < p < ∞*, Approx. Theory Appl. **11** (1995), 105-118.

[5] P. L. Duren, *Theory of H^p Spaces*, Academic Press, 1970.

[6] D. Gaier, *Lectures on Complex Approximation*, Birkhäuser, 1987.

[7] G. M. Goluzin, *Geometric Theory of Functions of a Complex Variable*, Translations of Mathematical Monographs Vol. 26, AMS, Providence, R.I., 1969.

[8] V. P. Havin, *Continuity in L_p of an integral operator with the Cauchy kernel*, Vestnik Leningrad Univ. **22** (1967), 103 (in Russian, English summary).

[9] D. M. Israfilov, *Approximate properties of the generalized Faber series in an integral metric*, Izv. Akad. Nauk Az. SSR, Ser. Fiz.-Tekh. Mat. Nauk **2** (1987), 10-14 (in Russian).

[10] D. M. Israfilov, *Approximation by p-Faber polynomials in the weighted Smirnov class E^p(G, ω) and the Bieberbach polynomials*, Constr. Approx. **17** (2001), 335-351.

[11] D. M. Israfilov, *Approximation by p-Faber-Laurent rational functions in the weighted Lebesgue spaces*, Czechoslovak Math. J. **54** (2004), 751-765.

[12] D. M. Israfilov and R. Akgün, *Approximation in weighted Smirnov-Orlicz classes*, J. Math. Kyoto Univ. **46** (2006), no. 4, 755-770.

[13] V. M. Kokilashvili, *On analytic functions of Smirnov-Orlicz classes*, Studia Math. **31** (1968), 43-59.

[14] V. M. Kokilashvili, *Approximation of analytic functions of class E^p*, Proceedings of Math. Inst. of Tbilisi **39** (1968), 82-102 (in Russian).

[15] V. M. Kokilashvili, *A direct theorem on mean approximation of analytic functions by polynomials*, Soviet Math. Dokl. **10** (1969), 411-414.

[16] I. P. Natanson, *Teoriya funktsii veshchestvennoy peremennoy* (Theory of Functions of a Real Variable), Moscow-Leningrad, 1974 (in Russian).

[17] Ch. Pommerenke, *Boundary Behaviour of Conformal Maps*, Springer-Verlag, Berlin, 1992.

[18] M. M. Rao and Z. D. Ren, *Theory of Orlicz Spaces*, Marcel Dekker, New York, 1991.

[19] R. Ryan, *Conjugate functions in Orlicz spaces*, Pacific J. Math. **13** (1963), 1371-1377.

[20] P. K. Suetin, *Series of Faber Polynomials*, Gordon and Breach Science Publishers, Amsterdam, 1998.

[21] R. Taberski, *Differences, moduli and derivatives of fractional orders*, Comment. Math. **19** (1977), 389-400.

[22] J. L. Walsh and H. G. Russell, *Integrated continuity conditions and degree of approximation by polynomials or by bounded analytic functions*, Trans. Amer. Math. Soc. **92** (1959), 355-370.

[23] S. E. Warschawski, *Über das Randverhalten der Ableitung der Abbildungsfunktion bei konformer Abbildung*, Math. Z. **35** (1932), 321-456.

[24] A. Zygmund, *Trigonometric Series*, Vols. I and II, Cambridge, 1959.

R. Akgün
Balikesir University
Faculty of Science and Art
Department of Mathematics
10145, Balikesir
Turkey
*E-mail:* rakgun@balikesir.edu.tr

D. M. Israfilov
Institute of Math. and Mech. NAS Azerbaijan
F. Agayev Str. 9
Baku
Azerbaijan
*E-mail:* mdaniyal@balikesir.edu.tr

*Received:* 20.4.2007.
samples/texts_merged/2565362.md
ADDED
# APPROXIMATELY UNBIASED TESTS OF REGIONS USING MULTISTEP-MULTISCALE BOOTSTRAP RESAMPLING¹

BY HIDETOSHI SHIMODAIRA

Tokyo Institute of Technology
Approximately unbiased tests based on bootstrap probabilities are considered for the exponential family of distributions with unknown expectation parameter vector, where the null hypothesis is represented as an arbitrary-shaped region with smooth boundaries. This problem has been discussed previously in Efron and Tibshirani [Ann. Statist. 26 (1998) 1687–1718], and a corrected *p*-value with second-order asymptotic accuracy is calculated by the two-level bootstrap of Efron, Halloran and Holmes [Proc. Natl. Acad. Sci. U.S.A. 93 (1996) 13429–13434] based on the ABC bias correction of Efron [J. Amer. Statist. Assoc. 82 (1987) 171–185]. Our argument is an extension of their asymptotic theory, where the geometry, such as the signed distance and the curvature of the boundary, plays an important role. We give another calculation of the corrected *p*-value without finding the “nearest point” on the boundary to the observation, which is required in the two-level bootstrap and is an implementational burden in complicated problems. The key idea is to alter the sample size of the replicated dataset from that of the observed dataset. The frequency of the replicates falling in the region is counted for several sample sizes, and then the *p*-value is calculated by looking at the change in the frequencies along the changing sample sizes. This is the multiscale bootstrap of Shimodaira [Systematic Biology 51 (2002) 492–508], which is third-order accurate for the multivariate normal model. Here we introduce a newly devised multistep-multiscale bootstrap, calculating a third-order accurate *p*-value for the exponential family of distributions. 
In fact, our *p*-value is asymptotically equivalent to those obtained by the double bootstrap of Hall [The Bootstrap and Edgeworth Expansion (1992) Springer, New York] and the modified signed likelihood ratio of Barndorff-Nielsen [Biometrika 73 (1986) 307–322] ignoring $O(n^{-3/2})$ terms, yet the computation is less demanding and free from model specification. The algorithm is remarkably simple despite complexity of the theory behind it. The differences of the *p*-values are illustrated in simple examples, and the accuracies of the bootstrap methods are shown in a systematic way.
Received November 2000; revised March 2004.

¹Supported in part by Grant KAKENHI-14702061 from MEXT of Japan.

*AMS 2000 subject classifications*. Primary 62G10; secondary 62G09.

*Key words and phrases*. Problem of regions, approximately unbiased tests, third-order accuracy, bootstrap probability, curvature, bias correction.

**1. Introduction.** We start with a simple example of Efron and Tibshirani (1998) to illustrate the issue to be discussed. Let $X_1, \dots, X_n$ be independent $p$-dimensional multivariate normal vectors with mean vector $\mu$ and covariance matrix the identity $I_p$,
$$X_1, \dots, X_n \sim N_p(\mu, I_p).$$
For given observed values $x_1, \dots, x_n$, let us assume that we would like to know
whether $\|\mu\|^2 = \mu_1^2 + \dots + \mu_p^2 \le 1$ or not. The problem is also described in a
transformed variable $Y = \sqrt{n}\bar{X}$ with mean $\eta = \sqrt{n}\mu$, where $\bar{x} = (x_1 + \dots + x_n)/n$
is the sample average. We have observed a $p$-dimensional multivariate normal
vector $y$ having unknown mean vector $\eta$ and covariance matrix the identity,
$$ (1.1) \qquad Y \sim N_p(\eta, I_p). $$
Then the null hypothesis we are going to test is $\eta \in \mathcal{R}$, with the spherical region
$$ (1.2) \qquad \mathcal{R} = \{\eta : \| \eta \| \le \sqrt{n}\}. $$
This problem is simple enough to give the exact answer. The frequentist
confidence level, namely, the probability value ($p$-value) for the spherical null
hypothesis is calculated as the probability of $\|Y\|^2$ being greater than or equal
to the observed $\|y\|^2$ assuming that $\eta$ is on the boundary $\partial \mathcal{R} = \{\eta : \| \eta \| = \sqrt{n}\}$ of $\mathcal{R}$. The exact $p$-value is easily calculated knowing that $\|Y\|^2$ is distributed as
the noncentral chi-square distribution with $p$ degrees of freedom and noncentrality parameter $\|\eta\|^2$.
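
This tail probability is easy to sketch numerically. The following Monte Carlo check is our own illustration, not code from the paper; the function name `exact_pvalue_sphere` and the simulation size are arbitrary choices. It places $\eta$ on the boundary and counts how often $\|Y\|^2$ exceeds the observed $\|y\|^2$, using the example values introduced in Section 2 ($p = 4$, $n = 10$, $\|\bar{x}\|^2 = 2.680$), for which the exact $p$-value is 0.05:

```python
import math
import random

def exact_pvalue_sphere(p, n, y_sq, B=200_000, seed=0):
    """Monte Carlo tail probability Pr{||Y||^2 >= y_sq} with Y ~ N_p(eta, I_p),
    eta on the boundary ||eta||^2 = n (any boundary point works by symmetry)."""
    rng = random.Random(seed)
    eta = [math.sqrt(n)] + [0.0] * (p - 1)
    hits = 0
    for _ in range(B):
        norm_sq = sum((e + rng.gauss(0.0, 1.0)) ** 2 for e in eta)
        hits += norm_sq >= y_sq
    return hits / B

# Example of Section 2: p = 4, n = 10, ||y||^2 = n * ||x_bar||^2 = 26.80,
# for which the paper reports the exact p-value 0.05.
pval = exact_pvalue_sphere(4, 10, 26.80)
print(pval)
```

The same number is available in closed form as the survival function of the noncentral chi-square distribution; the simulation merely illustrates the definition.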
In this paper we are going to remove two restrictions in the above problem
for generalization. (i) The underlying probability model for $Y$ is the exponential
family of distributions, instead of the multivariate normal model; we denote the
density function with the expectation parameter $\eta$ as
$$ (1.3) \qquad Y \sim f(y; \eta). $$
(ii) The null hypothesis will be represented as an arbitrarily-shaped region $\mathcal{R}$ with smooth boundaries, instead of the spherical region. The surface $\partial \mathcal{R}$ may be represented as a Taylor series with coefficients $d^{ab}, e^{abc}, \dots$
$$ (1.4) \qquad \Delta\eta_p = -d^{ab}\Delta\eta_a\Delta\eta_b - e^{abc}\Delta\eta_a\Delta\eta_b\Delta\eta_c + \dots $$
in the local coordinates $(\Delta\eta_1, \dots, \Delta\eta_p)$ by taking the origin at a point on $\partial\mathcal{R}$ and rotating the axes properly. The summation convention such as $d^{ab}\Delta\eta_a\Delta\eta_b = \sum_{a=1}^{p-1}\sum_{b=1}^{p-1} d^{ab}\Delta\eta_a\Delta\eta_b$ will be used, where the indices $a, b, \dots$ may run through $1, \dots, p-1$ and $i, j, \dots$ may run through $1, \dots, p$ when used as subscripts or superscripts for $p$-dimensional vectors. The axes are taken so that $\Delta\eta_1, \dots, \Delta\eta_{p-1}$ are for the tangent space of the surface, and $\Delta\eta_p$ is for its orthogonal space taken positive in the direction pointing away from $\mathcal{R}$. This general setting is the “problem of regions” discussed previously in Efron and Tibshirani (1998), and our argument is an extension of their asymptotic theory, where the geometry, such as the signed distance and the curvature of the boundary, plays an important role.
Since the exact $p$-value is available only for special cases, we will discuss
several bootstrap methods to calculate approximate $p$-values from $y$ under the
assumptions (i) and (ii) above. Let $\alpha$ denote a specified significance level, and $\hat{\alpha}(y)$ denote an approximate $p$-value. A large value of $\hat{\alpha}(y)$ may indicate evidence to support the null hypothesis $\eta \in \mathcal{R}$. On the other hand, if $\hat{\alpha}(y) < \alpha$ is observed, then we reject the null hypothesis and conclude that $\eta \notin \mathcal{R}$. The hypothesis test of $\mathcal{R}$ is said to be *unbiased* if the rejection probability is equal to $\alpha$ whenever $\eta \in \partial \mathcal{R}$. The approximate $p$-value is said to be $k$th order accurate if the asymptotic bias is of order $O(n^{-k/2})$, that is,
$$ (1.5) \qquad \Pr\{\hat{\alpha}(Y) < \alpha; \eta\} = \alpha + O(n^{-k/2}), \quad \eta \in \partial\mathcal{R}, $$
holds for $0 < \alpha < 1$. For sufficiently large $n$, approximately unbiased $p$-values of higher-order accuracy are considered to be better than those of lower-order accuracy.
We will not specify the probabilistic model or the shape of the region explicitly in the calculation of the $p$-value, but only assume that a mechanism is available to us for generating the bootstrap replicates and identifying whether the outcomes are in the region or not. This setting is important for complicated practical applications, where the exact $p$-value is not available and, thus, bootstrap methods are used for approximation. The phylogenetic tree selection discussed in Efron, Halloran and Holmes (1996) and Shimodaira (2002) is a typical case; the history of evolution represented as a tree is inferred by a model-based clustering of the DNA sequences of organisms, where we are given complex computer software for inferring the tree from a dataset. For calculating $p$-values of the hypothetical evolutionary trees, we can easily run bootstrap simulations, although computationally demanding, by repeatedly applying the software to replicated datasets.
We confine our attention to the parametric bootstrap of continuous random vectors for mathematical simplicity. We also assume that the boundary of the region is a smooth surface. In practical applications, however, it is often the case that the nonparametric bootstrap is employed, the random vector is discrete and the boundary is nonsmooth. Regions with nonsmooth boundaries, in particular, may lead to serious difficulty as discussed in Perlman and Wu (1999, 2003). Further study is needed to bridge these gaps between the theory and practice.
The frequency of the bootstrap replicates falling in the region, namely, the bootstrap probability, has been used widely since its application to phylogenetic tree selection in Felsenstein (1985). This is also named “empirical strength probability” of $\mathcal{R}$ in Liu and Singh (1997), where a modification for nonsmooth boundary is discussed as well. The bootstrap probability is, however, biased as an approximation to the exact $p$-value and, thus, the *two-level bootstrap* of Efron, Halloran and Holmes (1996) and Efron and Tibshirani (1998) is developed to improve the accuracy. Under the assumptions (i) and (ii) above, the two-level bootstrap calculates a second-order accurate $p$-value, whereas the bootstrap probability is only first-order accurate.
The bias of the bootstrap probability mainly arises from the curvature of $\partial \mathcal{R}$. The two-level bootstrap estimates the curvature for bias correction, where the curvature is estimated by generating second-level replicates around $\hat{\eta}(y)$. Here $\hat{\eta}(y)$ denotes the maximum likelihood estimate for $\eta$ restricted to $\partial \mathcal{R}$; under (1.1) it is the nearest point on $\partial \mathcal{R}$ to $y$. For the spherical region, $\hat{\eta}(y) = \sqrt{n}\,y/\|y\|$ is easily obtained, but in general $\hat{\eta}(y)$ must be found by numerical search, an implementational burden in complex problems. This motivated our development of a new method.
The multiscale bootstrap is developed in Shimodaira (2002) to calculate another bias corrected *p*-value. It does not require $\hat{\eta}(y)$. Instead, the bootstrap probabilities are calculated for sets of bootstrap replicates with several sample sizes which may differ from that of the observed data. This, in effect, alters the scale parameter of the replicates (Figure 1). The key idea is to estimate the curvature from the change in the bootstrap probabilities along varying sample sizes. The corrected *p*-value is third-order accurate for any arbitrarily-shaped region with smooth boundaries under the multivariate normal model. The normality assumption is not as restrictive as it might look at first, because the procedure is transformation-invariant and should work fine if there exists a transformation from the dataset to the normal *Y* and if the null hypothesis is represented as a region of $\eta$. We do not have to know what the transformation is. However, it becomes only first-order accurate if there is no such transformation to (1.1) but only one to (1.3).
FIG. 1. Multiscale bootstrap. The three circles with dashed lines indicate the conditional distributions of the bootstrap replicates with mean *y* and scales τ = 1/√2, 1, √2. In this particular configuration, the bootstrap probability may increase by halving the sample size to alter τ = 1 to √2, and may decrease by doubling the sample size to alter τ = 1 to 1/√2.

The multiscale bootstrap can be used easily for complex problems. It is as easy as the usual bootstrap. We only have to change the sample size of the
bootstrap replicates, and apply a regression fit to the bootstrap probabilities. The bias corrected *p*-value is calculated from the slope of the regression curve (Figure 2). This procedure is implemented in computer software [Shimodaira and Hasegawa (2001)] for phylogenetic tree selection, and is also applied to gene network estimation from microarray expression profiles [Kamimura et al. (2003)]. In these applications, the multiscale bootstrap can calculate the *p*-values for many related hypotheses at the same time; we do not have to run time-consuming bootstrap simulations separately for these hypotheses. For example, biologists are interested in the monophyletic hypothesis that some specified species constitute a cluster in the phylogenetic tree, and there are many such hypotheses for groups of species. The bootstrap probabilities for these hypotheses are obtained at the same time from a single run of bootstrap simulation for each scale. We only have to apply the regression fit separately to the multiscale bootstrap probabilities of each hypothesis.
In this paper we provide the theoretical foundation of the multiscale bootstrap, and introduce a newly devised multistep-multiscale bootstrap resampling. This method calculates an approximately unbiased *p*-value with third-order asymptotic accuracy under the assumptions (i) and (ii). The previously developed method of Shimodaira (2002) corresponds to a special case of the new method, that is, the one-step multiscale bootstrap.
For explaining the bootstrap methods, a rather intuitive argument is given in Sections 2 to 6 using simple examples. A more formal argument is given in Section 7, and the technical details are given in a supporting document [Shimodaira (2004)]. We introduce a *modified signed distance*, and give a unified approach to the asymptotic analysis of the bootstrap methods using Edgeworth series, as well as the tube formula of Weyl (1939). Third-order accuracy is also shown there for the *p*-value computed by the modified signed likelihood ratio [Barndorff-Nielsen (1986)], which requires the analytic expression of the likelihood function, and for the *p*-value computed by the double bootstrap [Hall (1992)], which requires a huge number of replicates, as well as computation of $\hat{\eta}(y)$. The multistep-multiscale bootstrap method requires only the bootstrap mechanism for generating replicates around y, inheriting the simplicity from the one-step multiscale bootstrap. The price for higher-order accuracy and simpler implementation is a large number of replicates, which can be as large as that of the double bootstrap. These three *p*-values are, in fact, shown to be equivalent ignoring $O(n^{-3/2})$ terms.
Our argument may not be justified unless the assumptions (i) and (ii) hold. We are not sure yet how robust the multistep-multiscale bootstrap method is under misspecifications of the exponential family model. It is shown at the end of Section 4, however, that the one-step method adjusts the bias halfway, though not completely, under misspecifications of the normal model. A simulation study in Shimodaira (2002) shows that the bias of the one-step method under the normal model is very small even if the boundary is piecewise smooth, but the bias becomes larger as $\eta$ moves closer to nonsmooth points on the boundary.
**2. Two-level bootstrap resampling.** Although our ultimate goal is to get rid of the normal assumption, we use normality in this section to illustrate the bootstrap methods, and besides (1.1), we also assume (1.2). For given observed value $\bar{x}$, we consider the parametric bootstrap resampling
$$X_1^*, \dots, X_{n_1}^* \sim N_p(\bar{x}, I_p).$$
Typically, the sample size $n_1$ of the replicated dataset should be equal to $n$, but we retain the generality of allowing any value for $n_1$. The scaling factor of the bootstrap, $\tau_1 = \sqrt{n/n_1}$, will be altered later in the multiscale bootstrap. Once we specify $\tau_1$, we may generate $B$, say 10,000, replicated datasets, and compute the average $\bar{X}^* = (X_1^* + \dots + X_{n_1}^*)/n_1$ for each replicate. A large value of the frequency that $\|\bar{X}^*\|^2 \le 1$ holds in the replicates may indicate a high chance of the null hypothesis $\|\mu\|^2 \le 1$ being correct. This is also described in a transformed variable $Y^* = \sqrt{n}\bar{X}^*$. For given observed value $y$, we consider the parametric bootstrap resampling
$$ (2.1) \qquad Y^* \sim N_p(y, \tau_1^2 I_p), $$
and the bootstrap probability with scale $\tau_1$ is denoted by
$$ \tilde{\alpha}_1(y, \tau_1) = \Pr\{Y^* \in \mathcal{R}; y, \tau_1\}, $$
where the index 1 indicates the “one-step” bootstrap in connection with $\tilde{\alpha}_2$ and $\tilde{\alpha}_3$ defined later, as shown in Table 1. $\tilde{\alpha}_1$ is estimated by the frequency of $Y^* \in \mathcal{R}$ among the $B$ bootstrap replicates, with binomial variance $\tilde{\alpha}_1(1 - \tilde{\alpha}_1)/B$.
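
As a concrete sketch (our own illustration, not code from the paper; the function name `bootstrap_probability` is arbitrary), $\tilde{\alpha}_1(y, \tau_1)$ for the spherical region can be estimated by direct simulation. With the Section 2 example values and $\tau_1 = 1$, the target is the quantity reported below as $\hat{\alpha}_0(y) = 0.0085$:

```python
import math
import random

def bootstrap_probability(y, n, tau1, B=100_000, seed=1):
    """Estimate alpha_tilde_1(y, tau_1) = Pr{Y* in R} for the spherical
    region R = {eta : ||eta||^2 <= n}, with Y* ~ N_p(y, tau_1^2 I_p)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(B):
        y_star_sq = sum((yi + tau1 * rng.gauss(0.0, 1.0)) ** 2 for yi in y)
        hits += y_star_sq <= n
    return hits / B

# Section 2 example: p = 4, n = 10, ||y||^2 = 26.80 (by symmetry we may put
# all of the norm in the first coordinate).
y = [math.sqrt(26.80), 0.0, 0.0, 0.0]
bp = bootstrap_probability(y, 10, 1.0)
print(bp)
```

The Monte Carlo estimate carries the binomial standard error $\sqrt{\tilde{\alpha}_1(1-\tilde{\alpha}_1)/B}$ noted above.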
Let us consider a numerical example with
$$ (2.2) \qquad p=4, \quad n=10, \quad \|\bar{x}\|^2 = 2.680. $$
Although $\|\bar{x}\|^2 > 1$, we are not sure if $\|\mu\|^2 \le 1$ holds or not. The frequentist
confidence level for the null hypothesis is given by the exact *p*-value, which
TABLE 1
*Bootstrap probabilities and corrected *p*-values*

<table>
<thead>
<tr><th>Symbol</th><th>Section</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>α̃<sub>1</sub>(y, τ<sub>1</sub>)</td><td>2</td><td>Bootstrap probability</td></tr>
<tr><td>α̂<sub>∞</sub>(y)</td><td>2</td><td>Exact <i>p</i>-value<sup>*</sup></td></tr>
<tr><td>α̂<sub>0</sub>(y)</td><td>2</td><td>Bootstrap probability (τ<sub>1</sub> = 1)</td></tr>
<tr><td>α̂<sub>abc</sub>(y)</td><td>2</td><td>Two-level bootstrap corrected <i>p</i>-value</td></tr>
<tr><td>α̂<sub>1</sub>(y)</td><td>3</td><td>Multiscale bootstrap corrected <i>p</i>-value</td></tr>
<tr><td>α̃<sub>2</sub>(y, τ<sub>1</sub>, τ<sub>2</sub>)</td><td>4</td><td>Two-step bootstrap probability</td></tr>
<tr><td>α̂<sub>2</sub>(y)</td><td>4</td><td>Two-step multiscale bootstrap corrected <i>p</i>-value</td></tr>
<tr><td>α̃<sub>3</sub>(y, τ<sub>1</sub>, τ<sub>2</sub>, τ<sub>3</sub>)</td><td>5</td><td>Three-step bootstrap probability</td></tr>
<tr><td>α̂<sub>3</sub>(y)</td><td>5</td><td>Three-step multiscale bootstrap corrected <i>p</i>-value</td></tr>
</tbody>
</table>

<sup>*</sup>A third-order accurate <i>p</i>-value in Section 7.
we will denote by $\hat{\alpha}_{\infty}(y)$, or simply $\hat{\alpha}_{\infty}$ for brevity. In this numerical example, the value of $\|\bar{x}\|^2$ is, in fact, chosen to make $\hat{\alpha}_{\infty}(y) = 0.05$. $\hat{\alpha}_{\infty}$ may be approximated by the bootstrap probability with $\tau_1 = 1$, denoted by
$$ \hat{\alpha}_0(y) = \tilde{\alpha}_1(y, 1). $$
This turns out to be $\hat{\alpha}_0(y) = 0.0085$, showing $\hat{\alpha}_0$ is not a very good approximation to $\hat{\alpha}_{\infty}$. Here the problem is so simple that $\hat{\alpha}_0(y)$, as well as $\hat{\alpha}_{\infty}(y)$, can be computed numerically from the noncentral chi-square distribution function. If the bootstrap resampling with $B = 10,000$, say, is used for $\hat{\alpha}_0$, the standard error becomes 0.0009.
A modification of $\hat{\alpha}_0$ is developed based on the geometric theory in Efron, Halloran and Holmes (1996) and Efron and Tibshirani (1998) to improve the accuracy of the approximation to $\hat{\alpha}_{\infty}$. The idea is to compute $\hat{\alpha}_0(\hat{\eta}(y))$ by generating the second-level replicates around $\hat{\eta}(y)$ for estimating the curvature of the surface $\partial\mathcal{R}$. When the surface of $\partial\mathcal{R}$ is flat, $\hat{\alpha}_0(\hat{\eta}(y)) = \frac{1}{2}$. It becomes smaller/larger than $\frac{1}{2}$ when the surface is curved toward/away from $\mathcal{R}$. Let $z$ denote a generic symbol for the $z$-value corresponding to a $p$-value $\alpha$ with relation $z = -\Phi^{-1}(\alpha)$, where $\Phi^{-1}(\cdot)$ is the inverse of the standard normal distribution function $\Phi(\cdot)$. For example, we may write $\hat{z}_0(y) = -\Phi^{-1}(\hat{\alpha}_0(y))$. The ABC conversion formula of Efron (1987) and DiCiccio and Efron (1992) is
$$ (2.3) \qquad \hat{z}_{abc}(y) = \frac{\hat{z}_0(y) - \hat{z}_0(\hat{\eta}(y))}{1 - \hat{a}(\hat{z}_0(y) - \hat{z}_0(\hat{\eta}(y)))} - \hat{z}_0(\hat{\eta}(y)), $$
where $\hat{z}_{abc}(y)$, $\hat{z}_0(y)$, and $\hat{z}_0(\hat{\eta}(y))$ are denoted $\hat{Z}$, $\tilde{Z}$, and $\hat{z}_0$, respectively, in the notation of equation (6.6) of Efron and Tibshirani (1998). The corrected $p$-value for the two-level bootstrap is then defined by $\hat{\alpha}_{abc}(y) = \Phi(-\hat{z}_{abc}(y))$. The acceleration constant $\hat{a}$, characterizing the probabilistic model, is known to be $\hat{a} = 0$ for the normal model. $\hat{a}$ may also be estimated using the second-level bootstrap for (1.3); for details we refer to Efron, Halloran and Holmes (1996). Note that the sign in front of $\hat{a}$ in (2.3) is reversed from that of equation (6.6) of Efron and Tibshirani (1998), because the $\Delta\eta_p$-axis is taking the opposite direction here.
The $p$-values for the numerical example of (2.2) are
$$ \begin{align*} \hat{\alpha}_0(y) &= 0.0085, & \hat{\alpha}_0(\hat{\eta}(y)) &= 0.315, \\ \hat{\alpha}_{abc}(y) &= 0.0775, & \hat{\alpha}_{\infty}(y) &= 0.05. \end{align*} $$
We observe that $\hat{\alpha}_{abc}$ shows great improvement over $\hat{\alpha}_0$ to approximate $\hat{\alpha}_{\infty}$. This improvement is also confirmed in the asymptotic argument. It has been shown in Efron and Tibshirani (1998) that $k=1$ for $\hat{\alpha}_0$, and $k=2$ for $\hat{\alpha}_{abc}$ under (1.3) and (1.4).
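
Plugging the reported values into the ABC conversion (2.3) with $\hat{a} = 0$ reproduces $\hat{\alpha}_{abc}$ up to rounding. This is a sketch using Python's standard normal distribution; the helper name `z_abc` is our own:

```python
from statistics import NormalDist

Phi = NormalDist().cdf
Phi_inv = NormalDist().inv_cdf

def z_abc(alpha0_y, alpha0_eta, a_hat=0.0):
    """ABC conversion (2.3) on the z-value scale."""
    z0_y = -Phi_inv(alpha0_y)       # z-value at the observation
    z0_eta = -Phi_inv(alpha0_eta)   # z-value at the projected point on the boundary
    diff = z0_y - z0_eta
    return diff / (1.0 - a_hat * diff) - z0_eta

# Reported values for the example (2.2); a_hat = 0 under the normal model.
alpha_abc = Phi(-z_abc(0.0085, 0.315))
print(alpha_abc)    # about 0.077, matching the reported 0.0775 up to rounding
```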
**3. Multiscale bootstrap resampling.** We continue to use the normal model (1.1) in this section for the argument of the corrected *p*-value. The bootstrap probability changes if the replicate sample size changes. When we alter $n_1 = 10$ to $n_1 = 3$ for the numerical example of (2.2), or equivalently alter the scale $\tau_1 = 1$ to $\tau_1 = \sqrt{10/3}$, we observe that $\tilde{\alpha}_1(y, 1) = 0.0085$ changes to $\tilde{\alpha}_1(y, \sqrt{10/3}) = 0.0359$. In the multiscale bootstrap, $\tilde{\alpha}_1(y, \tau_1)$ is computed for several values of $\tau_1 = \sqrt{n/n_1}$. For example, instead of $n = 10$, we use the following five $n_1$ values:
$$ (3.1) \qquad n_1 = 3, 6, 10, 15, 21, $$
and compute the corresponding bootstrap probabilities
$$ (3.2) \qquad \tilde{\alpha}_1(y, \tau_1) = 0.0359, 0.0205, 0.0085, 0.0028, 0.0008. $$
These values, as well as those for other parameter settings, are shown in Figure 2 by plotting the z-value along the inverse of the scale. The horizontal axis is $1/\tau_1 = \sqrt{n_1/n} = 0.55, 0.78, 1, 1.23, 1.45$, and the vertical axis is $\tilde{z}_1(y, \tau_1) = -\Phi^{-1}(\tilde{\alpha}_1(y, \tau_1)) = 1.80, 2.04, 2.39, 2.77, 3.17$.
Figure 2 shows these values along with a regression fit. This is obtained by fitting a regression model with explanatory variables $1/\tau_1$ and $\tau_1$,
$$ (3.3) \qquad \tilde{z}_1(y, \tau_1) \approx \hat{v}/\tau_1 + \hat{c}\tau_1, $$
to the plot, where $\hat{v}$ and $\hat{c}$ are the regression coefficients estimated as
$$ (3.4) \qquad \hat{v} = 2.002, \quad \hat{c} = 0.385 $$
FIG. 2. Plots of the z-value of the multiscale bootstrap probability along the inverse of the scale $\tau$ for the normal example ($p=4$) of Section 2 and the exponential example ($p=1$) of Section 4. Parameter values are chosen so that the exact p-value is either 0.05 (left panel) or 0.95 (right panel). The curves are drawn by the regression model of equation (3.3).

for the plot of (3.2). We observe that the regression fit agrees with the plots very
well for the cases in Figure 2. The regression model (3.3) has been justified in Shimodaira (2002) under (1.1) and (1.4); we will use "$\approx$" to indicate that equality holds up to $O(n^{-1})$ terms with the error of order $O(n^{-3/2})$. The regression model with explanatory variables $1/\tau_1$ and $\tau_1$ will be justified later, in fact, under (1.3) and (1.4) as seen in (7.15), although the following interpretation of the coefficients should be modified accordingly.
A simple geometric interpretation can be given to the regression coefficients under (1.1) and (1.4). Efron and Tibshirani (1998) have shown a formula equivalent to
$$ (3.5) \qquad \hat{z}_0(y) \approx \hat{v} + \hat{c}, $$
where $\hat{v}$ and $\hat{c}$ correspond to $x_0$ and $\hat{d}_1 - x_0\hat{d}_2$, respectively, in their equation (2.19). $\hat{v}$ is the signed distance of Efron (1985), defined as the distance from $y$ to $\partial\mathcal{R}$ with a positive/negative sign when $y$ is outside/inside of $\mathcal{R}$. Thus, $\hat{v} = \pm\|y - \hat{\eta}(y)\|$ measures evidence of the null hypothesis being wrong. $\hat{c}$ is related to the $(p-1) \times (p-1)$ matrix $\hat{d}^{ab}$ measuring the curvature of $\partial\mathcal{R}$ at $\hat{\eta}(y)$; $\hat{d}^{ab}$ is defined as $d^{ab}$ in (1.4) by making the local coordinates orthonormal at $\hat{\eta}(y)$. In our notation, $\hat{c} = \hat{d}_1 - \hat{v}\hat{d}_2$, where $\hat{d}_1 = \hat{d}^{aa}$ is the trace of $\hat{d}^{ab}$, and $\hat{d}_2 = \sum_{a=1}^{p-1} \sum_{b=1}^{p-1} (\hat{d}^{ab})^2$ is the trace of the squared matrix. When $\partial\mathcal{R}$ is flat at $\hat{\eta}(y)$, $\hat{d}^{ab} = 0$ and, thus, $\hat{c} = 0$. $\hat{v}$, $\hat{d}_1$ and $\hat{d}_2$ are transformation-invariant functions of $y$ calculated from the shape of the boundary and the density function of $Y$; they are referred to as geometric quantities here. Under (1.1) and (1.2) these quantities are
$$ (3.6) \qquad \hat{v} = \|y\| - \sqrt{n}, \qquad \hat{d}_1 = \frac{p-1}{2\sqrt{n}}, \qquad \hat{d}_2 = \frac{p-1}{4n}. $$
Direct computation gives
$$ (3.7) \qquad \hat{v} = 2.015, \qquad \hat{c} = 0.323 $$
for (2.2), showing good agreement with those computed indirectly from the multiscale bootstrap. $\hat{v}$ and $\hat{c}$ in (3.4) are actually estimates of those in (3.7), although we do not make a notational distinction between the estimates and the geometric quantities themselves. This estimation is third-order accurate, since the regression model (3.3) holds for (3.7) with error of $O(n^{-3/2})$.
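
The direct computation of (3.6) and $\hat{c} = \hat{d}_1 - \hat{v}\hat{d}_2$ for the example (2.2) takes only a few lines (a sketch with our own variable names):

```python
import math

# Example (2.2): p = 4, n = 10, ||y||^2 = n * ||x_bar||^2 = 26.80.
p, n = 4, 10
y_norm = math.sqrt(n * 2.680)

v = y_norm - math.sqrt(n)            # signed distance to the sphere ||eta|| = sqrt(n)
d1 = (p - 1) / (2 * math.sqrt(n))    # trace of the curvature matrix d^{ab}
d2 = (p - 1) / (4 * n)               # trace of its square
c = d1 - v * d2

print(v, c)                          # close to (3.7): 2.015 and 0.323
```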
Considering that $\hat{v}$ and $\hat{c}$ are functions of $y$, we may define a statistic
$$ (3.8) \qquad \hat{z}_1(y) = \hat{v} - \hat{c}. $$
This is equivalent to the pivot statistic of Efron (1985), and Pr$\{\hat{z}_1(Y) \le x; \eta\} \approx \Phi(x)$ for $\eta \in \partial\mathcal{R}$ under (1.1) and (1.4); see equation (2.16) of Efron and Tibshirani (1998). Thus, a third-order accurate $p$-value is defined by $\hat{\alpha}_1(y) = \Phi(-\hat{z}_1(y))$. We
can compute $\hat{\alpha}_1(y)$ using $\hat{v}$ and $\hat{c}$ obtained from the multiscale bootstrap. For the
example of (2.2),

$$ \hat{\alpha}_1(y) = \Phi(-2.002 + 0.385) = 0.0529, $$

showing an improvement over $\hat{\alpha}_{abc}(y) = 0.0775$ to approximate $\hat{\alpha}_{\infty}(y) = 0.05$.
The index of $\hat{\alpha}_1$ indicates the “one-step” bootstrap, just as for $\tilde{\alpha}_1$.
It is interesting to note that we can also read off the values of $\hat{z}_1(y)$ from
Figure 2. The differentiation of (3.3) with respect to $1/\tau_1$ is

$$ \frac{\partial \tilde{z}_1(y, \tau_1)}{\partial (1/\tau_1)} \approx \hat{v} - \hat{c} \tau_1^2, $$

and the slope of the regression curve at $1/\tau_1 = 1$ gives $\hat{z}_1(y)$. The corrected
$p$-value $\hat{\alpha}_1$ is essentially obtained from the change of the bootstrap probability
in the multiscale bootstrap.
**4. Two-step multiscale bootstrap resampling.** The one-step multiscale bootstrap described in Section 3 calculates a very accurate *p*-value for the arbitrarily-shaped region if there exists a transformation from the dataset to the normal model. However, it can be inaccurate if such a transformation does not exist even approximately. This restriction essentially comes from the fact that the covariance matrix of y in (1.1) is constant with respect to η. The acceleration constant *â* of the ABC formula measures the rate of change in the covariance matrix, and *â* is assumed zero in the derivation of (3.8). Here we introduce the *two-step multiscale bootstrap* for estimating *â* to improve the accuracy of the one-step multiscale bootstrap.
A breakdown of the one-step multiscale bootstrap method is illustrated in the
following example. Let $X_1, \dots, X_n$ be one-dimensional independent exponential
random variables with mean $\mu$,

$$ X_1, \dots, X_n \sim \exp(-x/\mu - \log \mu), $$

and let the null hypothesis of interest be $\mu \le 1$. The exact $p$-value is calculated
by knowing that a transformed variable $Y = \sqrt{n}\bar{X}$ is distributed as Gamma with
shape $n$ and mean $\eta = \sqrt{n}\mu$. We consider a numerical example with

$$ (4.1) \qquad p = 1, \quad n = 10, \quad \bar{x} = 1.571, $$

so that $\hat{\alpha}_{\infty}(y) = 0.05$. The multiscale bootstrap probabilities for the five $n_1$ values
in (3.1) are computed as

$$ (4.2) \qquad \tilde{\alpha}_1(y, \tau_1) = 0.2990, 0.1875, 0.1115, 0.0622, 0.0322, $$

and the regression coefficients of (3.3) are estimated as $\hat{v} = 1.328$, $\hat{c} = -0.110$.
Then the corrected *p*-value is computed as

$$ (4.3) \qquad \hat{\alpha}_1(y) = \Phi(-1.328 - 0.110) = 0.0753. $$

Although this is an improvement over $\hat{\alpha}_0(y) = 0.112$, it is not as good as in the normal example above. The pivot (3.8) is not justified under (1.3) in general, and $\hat{\alpha}_1(y)$ is, in fact, only first-order accurate for the exponential example.
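
For reference, the exact $p$-value of this exponential example can be computed without special functions: under $\mu = 1$ the sum $X_1 + \dots + X_n$ is Gamma$(n, 1)$, and for integer $n$ the identity $\Pr\{\mathrm{Gamma}(n,1) > t\} = \Pr\{\mathrm{Poisson}(t) \le n-1\}$ applies. The helper name `gamma_tail` in this sketch is our own:

```python
import math

def gamma_tail(n, t):
    """Pr{Gamma(n, 1) > t} for integer shape n, via Pr{Poisson(t) <= n - 1}."""
    term, total = math.exp(-t), 0.0
    for k in range(n):               # accumulate e^{-t} t^k / k!, k = 0..n-1
        total += term
        term *= t / (k + 1)
    return total

# Example (4.1): n = 10, x_bar = 1.571; the tail is Pr{n * X_bar >= n * x_bar}.
alpha_inf = gamma_tail(10, 10 * 1.571)
print(alpha_inf)                     # close to 0.05
```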
The two-step multiscale bootstrap is employed simply to generate a second-step replicate from every first-step replicate. Let us denote the conditional density of the first-step bootstrap replicate $Y^* = \sqrt{n}\,\bar{X}^*$ as
$$ (4.4) \qquad Y^* \sim f(y^*; y, \tau_1), $$
given mean $y = \sqrt{n}\,\bar{x}$ and scale $\tau_1$ under (1.3), which reduces to $f(y^*; y, 1) = f(y^*; y)$ when $\tau_1 = \sqrt{n/n_1}$ is unity. This becomes (2.1) for (1.1), and Gamma with shape $n_1$ and mean $y$ for the exponential example. We generate a second-step replicate $Y^{**}$ for each $y^*$. The conditional density of $Y^{**}$ given $y^*$ takes the same form as (4.4), but with scale parameter $\tau_2 = \sqrt{n/n_2}$;
$$ (4.5) \qquad Y^{**} \sim f(y^{**}; y^*, \tau_2). $$
For the normal example, (4.5) is equivalent to generating
$$ X_1^{**}, \dots, X_{n_2}^{**} \sim N_p(\bar{x}^*, I_p) $$
for given $\bar{x}^*$, and using the transformed variable $Y^{**} = \sqrt{n}\bar{X}^{**}$. The two-step bootstrap probability with a pair of scales $(\tau_1, \tau_2)$ is then defined by
$$ \begin{align*} \tilde{\alpha}_2(y, \tau_1, \tau_2) &= \Pr\{Y^{**} \in \mathcal{R}; y, \tau_1, \tau_2\} \\ &= \int \tilde{\alpha}_1(y^*, \tau_2) f(y^*; y, \tau_1) dy^*, \end{align*} $$
where the integration is taken over the range of $y^*$. We can write $\tilde{\alpha}_1(y, \tau_1) = \tilde{\alpha}_2(y, \tau_1, 0)$, because the conditional density of $Y^{**}$ converges to the point mass at $y^*$ in the limit $\tau_2 \to 0$. The two-step bootstrap might look similar to the double bootstrap of Hall (1992), but they are very different. We should generate thousands of $Y^{**}$ for a given $y^*$ in the double bootstrap, but only one $Y^{**}$ in the two-step bootstrap.
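
A minimal sketch of the two-step resampling for the normal example (our own code, not from the paper): each first-step replicate $Y^*$ spawns exactly one second-step replicate $Y^{**}$. Under (1.1) the estimate should match the one-step probability at scale $\sqrt{\tau_1^2 + \tau_2^2}$; with $\tau_1^2 = \tau_2^2 = 10/6$ the target is $\tilde{\alpha}_1(y, \sqrt{10/3}) = 0.0359$ from (3.2):

```python
import math
import random

def two_step_probability(y, n, tau1, tau2, B=200_000, seed=2):
    """Two-step bootstrap probability for the spherical region: one Y** is
    generated per Y*, and we count how often Y** falls in R."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(B):
        y_star = [yi + tau1 * rng.gauss(0.0, 1.0) for yi in y]
        y_sstar = [ys + tau2 * rng.gauss(0.0, 1.0) for ys in y_star]
        hits += sum(coord * coord for coord in y_sstar) <= n
    return hits / B

# Section 2 example: p = 4, n = 10, ||y||^2 = 26.80.
y = [math.sqrt(26.80), 0.0, 0.0, 0.0]
est = two_step_probability(y, 10, math.sqrt(10 / 6), math.sqrt(10 / 6))
print(est)    # near 0.0359 under the normal model
```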
Let us consider two $n_2$ values,
$$ (4.6) \qquad n_2 = 6, 15, $$
for the normal example with parameter values (2.2). The two-step bootstrap probabilities are, for example,
$$ \tilde{\alpha}_2(y, \sqrt{\frac{10}{6}}, \sqrt{\frac{10}{6}}) = 0.0359, \qquad \tilde{\alpha}_2(y, \sqrt{\frac{10}{10}}, \sqrt{\frac{10}{15}}) = 0.0205. $$
Of course, they give $\tilde{\alpha}_1(y, \sqrt{\frac{10}{3}})$ and $\tilde{\alpha}_1(y, \sqrt{\frac{10}{6}})$, respectively, in (3.2), because
$$ \tilde{\alpha}_2(y, \tau_1, \tau_2) = \tilde{\alpha}_1(y, \sqrt{\tau_1^2 + \tau_2^2}) $$
for (1.1). For the exponential example with parameter values (4.1), however,
$$ \tilde{\alpha}_2(y, \sqrt{\frac{10}{6}}, \sqrt{\frac{10}{6}}) = 0.3063, \qquad \tilde{\alpha}_2(y, \sqrt{\frac{10}{10}}, \sqrt{\frac{10}{15}}) = 0.1866 $$
are different, though very slightly, from $\tilde{\alpha}_1(y, \sqrt{\frac{10}{3}}) = 0.2990$ and $\tilde{\alpha}_1(y, \sqrt{\frac{10}{6}}) = 0.1875$, respectively, in (4.2). The difference of $\tilde{\alpha}_2(y, \tau_1, \tau_2)$ from $\tilde{\alpha}_1(y, \sqrt{\tau_1^2 + \tau_2^2})$ for (1.3) is explained by
$$ (4.7) \qquad \tilde{z}_2(y, \tau_1, \tau_2) - \tilde{z}_1(y, \sqrt{\tau_1^2 + \tau_2^2}) \doteq \frac{\hat{a}\tau_1^2\tau_2^2(\hat{v}^2 - (\tau_1^2 + \tau_2^2))}{(\tau_1^2 + \tau_2^2)^{5/2}}. $$
We will use “$\doteq$” to indicate that equality holds up to $O(n^{-1/2})$ terms with error of order $O(n^{-1})$. Formula (4.7) and a revised regression model
$$ (4.8) \qquad \tilde{z}_1(y, \tau_1) \doteq \frac{\hat{v} - 2\hat{a}\hat{v}^2}{\tau_1} + (\hat{d}_1 - \hat{a})\tau_1 $$
for (1.3) are consequences of a more general argument with third-order accuracy shown in Section 7.
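The fitting step implied by (4.8) is ordinary least squares in the two basis functions $1/\tau_1$ and $\tau_1$. A minimal sketch, with made-up coefficient values standing in for $\hat{v} - 2\hat{a}\hat{v}^2$ and $\hat{d}_1 - \hat{a}$:

```python
# Least-squares fit of values following the form of (4.8),
# z(tau) = beta0 / tau + beta1 * tau.  beta0 plays the role of (v - 2av^2)
# and beta1 of (d1 - a); the numeric values below are illustrative only.
def fit_scaling_law(taus, zvals):
    # normal equations for the two-parameter linear model in (1/tau, tau)
    s00 = sum((1.0 / t) ** 2 for t in taus)
    s01 = float(len(taus))                 # sum of (1/t) * t
    s11 = sum(t * t for t in taus)
    r0 = sum(z / t for t, z in zip(taus, zvals))
    r1 = sum(z * t for t, z in zip(taus, zvals))
    det = s00 * s11 - s01 * s01
    return ((s11 * r0 - s01 * r1) / det,   # beta0
            (s00 * r1 - s01 * r0) / det)   # beta1

taus = [(10.0 / k) ** 0.5 for k in (3, 6, 10, 15, 21)]   # scales as in Table 2
beta0_true, beta1_true = 1.5, -0.1
z = [beta0_true / t + beta1_true * t for t in taus]
b0, b1 = fit_scaling_law(taus, z)
```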
The key idea in the two-step multiscale bootstrap is to estimate $\hat{a}$ by looking at the difference of $\tilde{\alpha}_2(y, \tau_1, \tau_2)$ from $\tilde{\alpha}_1(y, \sqrt{\tau_1^2 + \tau_2^2})$. Once we compute $\tilde{\alpha}_1(y, \tau_1)$ and $\tilde{\alpha}_2(y, \tau_1, \tau_2)$ for several values of $(\tau_1, \tau_2)$ by the one-step and two-step multiscale bootstrap, we can estimate $\hat{v}, \hat{d}_1$ and $\hat{a}$ by fitting (4.7) and (4.8) to the observed bootstrap probabilities. A second-order accurate $p$-value, denoted $\hat{\alpha}_2(y)$, is then computed by using the estimated geometric quantities in the $z$-value
$$ (4.9) \qquad \hat{z}_2(y) \doteq \hat{v} - \hat{d}_1 + \hat{a}(1 - \hat{v}^2). $$
This expression is shown to be equivalent to (2.3) up to $O(n^{-1/2})$ terms by using (4.8); $\hat{z}_0(y) \doteq \hat{v} + \hat{d}_1 - \hat{a}(1 + 2\hat{v}^2)$ and $\hat{z}_0(\hat{\eta}(y)) \doteq \hat{d}_1 - \hat{a}$. In the next section we will describe a procedure based on the above idea, as well as its refined version with third-order accuracy.
It follows from (4.8) that the one-step multiscale bootstrap estimates $\hat{v} - 2\hat{a}\hat{v}^2$ and $\hat{d}_1 - \hat{a}$ for the coefficients $\hat{v}$ and $\hat{c}$, respectively, under (1.3). Thus, $\hat{z}_1(y) \doteq \hat{v} - \hat{d}_1 + \hat{a}(1 - 2\hat{v}^2) \doteq \hat{z}_2(y) - \hat{a}\hat{v}^2$, as well as $\hat{z}_0(y) \doteq \hat{z}_2(y) + 2\hat{d}_1 - 2\hat{a} - \hat{a}\hat{v}^2$, is first-order accurate in general. Since the difference $\hat{z}_2(y) - \hat{z}_1(y) \doteq \hat{a}\hat{v}^2$ does not involve $\hat{d}_1$, the one-step method adjusts the bias resulting from the curvature even if the normal model is misspecified.
**5. Three-step multiscale bootstrap resampling.** We may repeat “stepping” to obtain multistep-multiscale bootstrap probabilities so that we might be able to compute higher-order accurate *p*-values. This is the case, in fact, for going one step further, although the results are not known for yet further stepping. We introduce the three-step multiscale bootstrap for computing a third-order accurate *p*-value, denoted $\hat{\alpha}_3(y)$, under (1.3) and (1.4). In the following argument, we first describe the procedure to compute $\hat{\alpha}_2(y)$, which helps in understanding that for $\hat{\alpha}_3(y)$.
The expression for $\tilde{z}_2(y, \tau_1, \tau_2)$ is obtained from (4.7) by substituting $\sqrt{\tau_1^2 + \tau_2^2}$ for $\tau_1$ in (4.8). This is also expressed as
$$ (5.1) \qquad \tilde{z}_2(y, \tau_1, \tau_2) \doteq \zeta_2(\hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3, \tau_1, \tau_2), $$
where the function $\zeta_2$ on the right-hand side is defined by
$$ (5.2) \qquad \zeta_2(\gamma_1, \gamma_2, \gamma_3, \tau_1, \tau_2) = s_1\gamma_1(1 + s_2\gamma_3) - \frac{\gamma_2 + s_2\gamma_3}{s_1\gamma_1}. $$
Here $s_1 = (\tau_1^2 + \tau_2^2)^{-1/2}$ and $s_2 = \tau_1^2 \tau_2^2 s_1^4$ are functions of the scales, and the $\hat{\gamma}_i$'s are specified as functions of $y$ under (1.3) and (1.4);
$$ (5.3) \qquad \hat{\gamma}_1 \doteq \hat{v} - 2\hat{a}\hat{v}^2, \qquad \hat{\gamma}_2 \doteq \hat{v}(\hat{a} - \hat{d}_1), \qquad \hat{\gamma}_3 \doteq \hat{v}\hat{a}. $$
These $\hat{\gamma}_i$'s are also used to express
$$ (5.4) \qquad \hat{z}_2(y) = \hat{\gamma}_1(1 + \hat{\gamma}_3) + \frac{\hat{\gamma}_2}{\hat{\gamma}_1}, $$
which is equivalent to (4.9) up to $O(n^{-1/2})$ terms. We calculate $\tilde{\alpha}_2(y, \tau_1, \tau_2)$ for several values of $(\tau_1, \tau_2)$ by the two-step multiscale bootstrap resampling, and fit the observed $\tilde{z}_2(y, \tau_1, \tau_2) = -\Phi^{-1}(\tilde{\alpha}_2(y, \tau_1, \tau_2))$ to the nonlinear regression model (5.1). Then the estimated $\hat{\gamma}_i$'s are used to compute $\hat{\alpha}_2(y) = \Phi(-\hat{z}_2(y))$ from (5.4).
This procedure is generalized for the three-step multiscale bootstrap resampling. A third-step replicate $Y^{***}$ is generated for each $y^{**}$ by
$$ Y^{***} \sim f(y^{***}; y^{**}, \tau_3) $$
using the scale $\tau_3$, and the three-step bootstrap probability is defined by
$$
\begin{align*}
\tilde{\alpha}_3(y, \tau_1, \tau_2, \tau_3) &= \Pr\{Y^{***} \in \mathcal{R}; y, \tau_1, \tau_2, \tau_3\} \\
&= \int \tilde{\alpha}_2(y^*, \tau_2, \tau_3) f(y^*; y, \tau_1) dy^*.
\end{align*}
$$
Then the observed values $\tilde{z}_3(y, \tau_1, \tau_2, \tau_3) = -\Phi^{-1}(\tilde{\alpha}_3(y, \tau_1, \tau_2, \tau_3))$ for several choices of $(\tau_1, \tau_2, \tau_3)$ are fitted to the nonlinear regression model $\zeta_3$, defined by
$$
(5.5) \qquad
\begin{aligned}[t]
&\zeta_3(\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \tau_1, \tau_2, \tau_3) \\
&= \gamma_1 s_1 (1 + \gamma_3 s_2 + 4\gamma_3^2 s_2^2 + \gamma_5 s_3 + \gamma_6 s_4) \\
&\quad - (\gamma_1 s_1)^{-1} (\gamma_2 + \gamma_3 s_2 + 7\gamma_3^2 s_2^2 + \gamma_4 s_2 + 3\gamma_5 s_3 + 3\gamma_6 s_4),
\end{aligned}
$$
where $s_1, \dots, s_4$ are given by
$$
\begin{aligned}
s_1 &= (\tau_1^2 + \tau_2^2 + \tau_3^2)^{-1/2}, & s_2 &= (\tau_1^2 \tau_2^2 + \tau_2^2 \tau_3^2 + \tau_3^2 \tau_1^2)s_1^4, \\
s_3 &= (\tau_1^2 \tau_2^2 \tau_3^2 + \tau_2^4 \tau_3^2 + \tau_1^4 (\tau_2^2 + \tau_3^2))s_1^6, & s_4 &= (\tau_1^2 \tau_2^2 \tau_3^2)s_1^6.
\end{aligned}
$$
The least squares estimates for the six $\gamma_i$'s are denoted by $\hat{\gamma}_1, \dots, \hat{\gamma}_6$. We then compute $\hat{\alpha}_3(y) = \Phi(-\hat{z}_3(y))$ by using the estimated $\hat{\gamma}_i$'s in
$$ (5.6) \qquad \hat{z}_3(y) = \hat{\gamma}_1(1 + \hat{\gamma}_3 + 4\hat{\gamma}_3^2 + \hat{\gamma}_6) + \hat{\gamma}_1^{-1}(\hat{\gamma}_2 + \frac{\hat{\gamma}_3^2}{2} + \hat{\gamma}_4 + \hat{\gamma}_5). $$
Section 7 is mostly devoted to proving the third-order accuracy of $\hat{\alpha}_3(y)$. The justification of the second-order accuracy of $\hat{\alpha}_2(y)$ then immediately follows by ignoring $O(n^{-1})$ terms. As seen in (5.3), $\hat{\gamma}_1$ is $O(1)$, and $\hat{\gamma}_2$ and $\hat{\gamma}_3$ are $O(n^{-1/2})$. The remaining three geometric quantities, which are $O(n^{-1})$, are defined in Section 7.8. We do not, however, have to know the expressions of the $\hat{\gamma}_i$'s for computing $\hat{\alpha}_3(y)$, because their values are estimated from the nonlinear regression, and the estimation error is only $O(n^{-3/2})$.
It should be noted that there are other asymptotically equivalent expressions for $\zeta_3$ and $\hat{z}_3$ as functions of coefficients transformed from the six $\hat{\gamma}_i$'s; we have already shown two different expressions for $\zeta_2$ and $\hat{z}_2$, as functions of either $\hat{\gamma}_1$, $\hat{\gamma}_2$, $\hat{\gamma}_3$ or $\hat{v}$, $\hat{d}_1$, $\hat{a}$. The expressions (5.5) and (5.6) were chosen for their simplicity.
**6. Examples.** The two procedures in the previous section are applied to the exponential example with parameter values (4.1). By the two-step multiscale bootstrap, the least squares estimates of $\hat{\gamma}_i$'s are
$$ \hat{\gamma}_1 = 1.328, \qquad \hat{\gamma}_2 = 0.144, \qquad \hat{\gamma}_3 = 0.137, $$
and the corrected p-value is computed as
$$ \hat{\alpha}_2(y) = 1 - \Phi\left\{1.328\left(1 + 0.137\right) + \frac{0.144}{1.328}\right\} = 0.0528, $$
which comes closer to the exact p-value $\hat{\alpha}_{\infty}(y) = 0.05$ than $\hat{\alpha}_1(y) = 0.0753$ computed in (4.3). By the three-step multiscale bootstrap, the least squares estimates of the $\hat{\gamma}_i$'s are
$$ \begin{align*} \hat{\gamma}_1 &= 1.328, & \hat{\gamma}_2 &= 0.145, & \hat{\gamma}_3 &= 0.127, \\ \hat{\gamma}_4 &= -0.018, & \hat{\gamma}_5 &= -0.0004, & \hat{\gamma}_6 &= -0.036, \end{align*} $$
and the corrected p-value is
$$ \hat{\alpha}_3(y) = 1 - \Phi\left\{1.328\left(1 + 0.127 + 0.065 - 0.036\right) + \frac{0.145 + 0.008 - 0.018 - 0.0004}{1.328}\right\} = 0.0509, $$
which is even better than $\hat{\alpha}_2(y) = 0.0528$.
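The two corrected p-values above can be reproduced by plugging the reported least squares estimates into (5.4) and (5.6); a minimal sketch:

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z2_hat(g1, g2, g3):
    # equation (5.4)
    return g1 * (1 + g3) + g2 / g1

def z3_hat(g1, g2, g3, g4, g5, g6):
    # equation (5.6)
    return g1 * (1 + g3 + 4 * g3**2 + g6) + (g2 + g3**2 / 2 + g4 + g5) / g1

alpha2 = 1 - Phi(z2_hat(1.328, 0.144, 0.137))                           # ≈ 0.0528
alpha3 = 1 - Phi(z3_hat(1.328, 0.145, 0.127, -0.018, -0.0004, -0.036))  # ≈ 0.0509
```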
In Table 2, *p*-values are computed for several parameter settings. The bootstrap probabilities are computed numerically ($B = \infty$), but the standard errors due to the bootstrap resampling are shown for $B = 10,000$. The first row corresponds to the normal model with (2.2), and the fourth row corresponds to the exponential model with (4.1). The following two rows for each are obtained by changing $n = 10$ to 100 and 1000. Similarly, the last six rows are obtained by changing $\hat{\alpha}_{\infty} = 0.05$ to 0.95. We observe that all the *p*-values tend to converge to $\hat{\alpha}_{\infty}$ as $n$ grows, and the corrected *p*-values converge faster than $\hat{\alpha}_0$.

TABLE 2
$p$-values in percent (standard error) for the examples*

<table><thead><tr><th rowspan="2">$n$</th><th rowspan="2">α̂<sub>0</sub></th><th rowspan="2">α̂<sub>abc</sub></th><th colspan="3"></th><th colspan="2">Ridge regression</th></tr><tr><th>α̂<sub>1</sub></th><th>α̂<sub>2</sub></th><th>α̂<sub>3</sub></th><th>α̂<sub>2</sub></th><th>α̂<sub>3</sub></th></tr></thead><tbody><tr><td colspan="8" style="text-align:center;">Normal distribution (α̂<sub>∞</sub> = 5.00)</td></tr><tr><td>10</td><td>0.85</td><td>7.75</td><td>5.29 (0.61)</td><td>5.85 (1.81)</td><td>7.03 (8.04)</td><td>5.67 (1.03)</td><td>6.04 (1.13)</td></tr><tr><td>100</td><td>2.73</td><td>5.25</td><td>5.01 (0.37)</td><td>5.05 (1.16)</td><td>5.08 (2.93)</td><td>5.04 (0.78)</td><td>5.06 (0.97)</td></tr><tr><td>1000</td><td>4.12</td><td>5.03</td><td>5.00 (0.32)</td><td>5.00 (1.05)</td><td>5.00 (2.22)</td><td>5.00 (0.72)</td><td>5.00 (0.89)</td></tr><tr><td colspan="8" style="text-align:center;">Exponential distribution (α̂<sub>∞</sub> = 5.00)</td></tr><tr><td>10</td><td>11.15</td><td>5.00</td><td>7.53 (0.31)</td><td>5.28 (0.77)</td><td>5.09 (0.95)</td><td>5.77 (0.60)</td><td>5.13 (0.68)</td></tr><tr><td>100</td><td>6.73</td><td>5.00</td><td>5.90 (0.30)</td><td>5.03 (0.94)</td><td>5.01 (1.50)</td><td>5.25 (0.67)</td><td>5.04 (0.81)</td></tr><tr><td>1000</td><td>5.52</td><td>5.00</td><td>5.29 (0.30)</td><td>5.00 (0.98)</td><td>5.00 (1.82)</td><td>5.08 (0.69)</td><td>5.01 (0.80)</td></tr><tr><td colspan="8" style="text-align:center;">Normal distribution (α̂<sub>∞</sub> = 95.00)</td></tr><tr><td>10</td><td>67.84</td><td>92.33</td><td>95.26 (0.18)</td><td>95.20 (0.41)</td><td>95.02 (0.51)</td><td>95.21 (0.34)</td><td>95.07 (0.37)</td></tr><tr><td>100</td><td>90.65</td><td>94.74</td><td>95.02 (0.24)</td><td>95.07 (0.84)</td><td>95.09 (1.28)</td><td>95.06 (0.60)</td><td>95.07 (0.70)</td></tr><tr><td>1000</td><td>93.91</td><td>94.97</td><td>95.00 (0.28)</td><td>95.00 (0.95)</td><td>95.00 (1.72)</td><td>95.00 (0.67)</td><td>95.00 (0.81)</td></tr><tr><td colspan="8" style="text-align:center;">Exponential distribution (α̂<sub>∞</sub> = 95.00)</td></tr><tr><td>10</td><td>98.78</td><td>95.00</td><td>97.99 (0.24)</td><td>94.48 (1.31)</td><td>96.12 (7.39)</td><td>95.60 (0.81)</td><td>96.48 (0.56)</td></tr><tr><td>100</td><td>96.49</td><td>95.00</td><td>95.95 (0.28)</td><td>94.97 (1.06)</td><td>95.01 (2.71)</td><td>95.24 (0.72)</td><td>95.14 (0.82)</td></tr><tr><td>1000</td><td>95.50</td><td>95.00</td><td>95.30 (0.29)</td><td>95.00 (1.02)</td><td>95.00 (2.19)</td><td>95.08 (0.70)</td><td>95.02 (0.81)</td></tr></tbody></table>

*The bootstrap calculation is replaced by numerical integration, and, hence, the number of bootstrap replicates is regarded as $B = \infty$. The standard errors in parentheses are calculated for the case of $B = 10^4$ by the local linearization of the nonlinear regression [Draper and Smith (1998)]. All the combinations of $\tau_1^2 \in \{\frac{10}{3}, \frac{10}{6}, \frac{10}{10}, \frac{10}{15}, \frac{10}{21}\}$, $\tau_2^2 \in \{\frac{10}{6}, \frac{10}{15}\}$, $\tau_3^2 \in \{\frac{10}{6}, \frac{10}{15}\}$ are used for the scales. The total numbers of bootstrap replicates are 5B, 15B and 35B, respectively, for $\hat{\alpha}_1$, $\hat{\alpha}_2$ and $\hat{\alpha}_3$. For the ridge regression, the penalty weights are $\omega_1 = \omega_2 = 0$ and $\omega_3 = \dots = \omega_6 = 0.01$.
$\tilde{\alpha}_3(y, \tau_1, \tau_2, \tau_3)$ is computed for all the combinations of $(\tau_1, \tau_2, \tau_3)$ values, as noted in the table; five $(\tau_1, 0, 0)$'s, ten $(\tau_1, \tau_2, 0)$'s, and twenty $(\tau_1, \tau_2, \tau_3)$'s. Therefore, the numbers of bootstrap probabilities are 5, 15 and 35, respectively, for $\hat{\alpha}_1(y)$, $\hat{\alpha}_2(y)$ and $\hat{\alpha}_3(y)$. The nonlinear regression models are fitted to these bootstrap probabilities, and the least squares estimates of the geometric quantities are calculated; each residual term is weighted inversely proportional to the estimated variance.
For stable estimation, ridge regression is also used; a penalty term $\sum_{i=1}^{6} \omega_i \hat{\gamma}_i^2$ with small $\omega_i$ values is added to the residual sum of squares for minimization.
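A sketch of the ridge idea on the simple two-coefficient model of the one-step method, with the penalty placed on the second coefficient only; the numbers are illustrative, not the paper's settings.

```python
# Two-coefficient model z = beta0/tau + beta1*tau fitted with a ridge penalty
# omega * beta1^2; with omega = 0 this reduces to ordinary least squares.
# All numeric values are illustrative.
def fit_ridge(taus, zvals, omega):
    s00 = sum((1.0 / t) ** 2 for t in taus)
    s01 = float(len(taus))                  # sum of (1/t) * t
    s11 = sum(t * t for t in taus) + omega  # penalty enters the diagonal
    r0 = sum(z / t for t, z in zip(taus, zvals))
    r1 = sum(z * t for t, z in zip(taus, zvals))
    det = s00 * s11 - s01 * s01
    return ((s11 * r0 - s01 * r1) / det, (s00 * r1 - s01 * r0) / det)

taus = [(10.0 / k) ** 0.5 for k in (3, 6, 10, 15, 21)]
z = [1.5 / t - 0.1 * t for t in taus]
b0_ols, b1_ols = fit_ridge(taus, z, 0.0)
b0_r, b1_r = fit_ridge(taus, z, 0.5)   # penalized coefficient is shrunk
```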
For the exponential distribution, $\hat{\alpha}_k$ is $k$th-order accurate ($k = 1, 2, 3$), and, in fact, $|\hat{\alpha}_k - \hat{\alpha}_\infty|$ becomes smaller as $k$ increases in the table.
It turns out that $|\hat{\alpha}_{abc} - \hat{\alpha}_\infty|$ is almost zero here, because $\hat{\alpha}_{abc}$ happens to be third-order accurate for the one-dimensional exponential distribution, as shown in Section 7.7.
For the normal distribution, $\hat{\alpha}_1, \hat{\alpha}_2$ and $\hat{\alpha}_3$ are third-order accurate, because $\hat{\gamma}_3 = \dots = \hat{\gamma}_6 = 0$ under (1.1), as shown in Section 7.8. This may explain why $|\hat{\alpha}_k - \hat{\alpha}_\infty|$ becomes larger as $k$ increases in some of the rows. These four geometric quantities of zero value are estimated from slight differences of bootstrap probabilities, leading to unstable estimation, as seen in the large standard errors. This is alleviated by ridge regression; even the worst case in the table, $\hat{\alpha}_3 = 6.04 \pm 1.13$, may be acceptable in practice. However, the total number of replicates is 350,000 for $\hat{\alpha}_3$, almost comparable to that of the double bootstrap for achieving the same order of standard error.
Although $\hat{\alpha}_1$ is only first-order accurate for (1.3), it is reasonably accurate even for the exponential model in the table. The total number of replicates is 50,000, yet the standard error is considerably smaller than that of $\hat{\alpha}_3$. A similar observation holds for the second-order accurate $\hat{\alpha}_2$. The one-step, as well as the two-step, multiscale bootstrap may thus provide a compromise between the number of replicates and the accuracy in practice.
**7. Asymptotic analysis of the bootstrap methods.**
7.1. *A unified approach.* Our approach to assessing the bootstrap methods is not very elegant but rather elementary and brute-force. We explicitly specify a curved coordinate system along $\partial\mathcal{R}$, which is convenient for working with the bootstrap methods. The density function of $Y$ with respect to the curved coordinates is first defined for $\tau = 1$ in Section 7.2 and extended to $\tau > 0$ in Section 7.3. We define a *modified signed distance* by altering $\hat{v}$ slightly, and its distribution function is given in Section 7.4.
It turns out that the z-values of the bootstrap probabilities are special cases of the modified signed distance, and our approach gives an asymptotic analysis of the bootstrap methods in a systematic way. Using the result of Section 7.4, a third-order accurate pivot statistic is defined in Section 7.5, and the distribution functions of the bootstrap z-values are shown in Sections 7.6 to 7.8, proving the main results of Section 5.
The proofs of lemmas are given in Shimodaira (2004). We have used the computer software *Mathematica* for straightforward and tedious symbolic calculations; the program file is available from the author upon request.
7.2. *Tube-coordinates.* In our curved coordinate system, a point $\eta$ is specified by two parts, a point on $\partial\mathcal{R}$ and the signed distance from it. This is an instance of the coordinate system used for the Weyl tube formula, and we call it tube-coordinates. Below we will define the coordinate system explicitly, and show the expression of the density function of $Y$ in terms of the tube-coordinates. We take an approach similar to that of Kuriki and Takemura (2000).
The density function of the exponential family of distributions is expressed as
$$ (7.1) \qquad \exp(\theta^i y_i - \psi(\theta) - h(y)) $$
where $\theta = (\theta^1, \dots, \theta^p)$ is the natural parameter vector. We denote (7.1) by $f(y; \eta)$ using the expectation parameter vector $\eta = (\eta_1, \dots, \eta_p) = E(Y)$, the expected value of $Y$. The change of variables $\theta \leftrightarrow \eta$ is one-to-one, and is given by $\eta_i = \partial\psi/\partial\theta^i$, $\theta^i = \partial\phi/\partial\eta_i$, $i = 1, \dots, p$, where the potential function $\phi(\eta)$ is defined from the cumulant function $\psi(\theta)$ by $\phi(\eta) = \max_{\theta}\{\theta^i\eta_i - \psi(\theta)\}$. The metric at $\eta$ is denoted as
$$ \phi^{ij}(\eta) = \frac{\partial^2 \phi(\eta)}{\partial \eta_i \partial \eta_j}, $$
and the derivatives of $\phi$ at $\eta = 0$ are denoted as
$$ \phi^i = \left. \frac{\partial \phi(\eta)}{\partial \eta_i} \right|_0, \quad \phi^{ij} = \left. \frac{\partial^2 \phi(\eta)}{\partial \eta_i \partial \eta_j} \right|_0, \quad \phi^{ijk} = \left. \frac{\partial^3 \phi(\eta)}{\partial \eta_i \partial \eta_j \partial \eta_k} \right|_0, \quad \text{and so on.} $$
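The Legendre duality between $\psi$ and $\phi$ can be checked numerically in the simplest case $\psi(\theta) = \theta^2/2$, the cumulant function of the $N(\theta, 1)$ family, whose potential function is $\phi(\eta) = \eta^2/2$; a small sketch:

```python
# Numeric check of the Legendre duality
#   phi(eta) = max_theta { theta*eta - psi(theta) }
# for psi(theta) = theta^2 / 2, i.e. the N(theta, 1) family with phi(eta) = eta^2 / 2.
def legendre(psi, eta, lo=-10.0, hi=10.0, iters=200):
    f = lambda th: th * eta - psi(th)   # concave in theta, so ternary search works
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    th = 0.5 * (lo + hi)
    return th * eta - psi(th)

psi = lambda th: th * th / 2.0
eta = 1.7
phi = legendre(psi, eta)   # ≈ eta^2 / 2 = 1.445
```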
Since the expression of the exponential family is unique only up to an affine transformation, we assume without loss of generality that $\phi^i = 0$ and $\phi^{ij} = \delta_{ij}$, where $\delta_{ij}$ takes the value one when $i = j$, and zero otherwise. In other words, $E(Y)=0$ and $\operatorname{cov}(Y)$, the covariance matrix of $Y$, is $I_p$ at $\theta=0$. We make our asymptotic argument local in a neighborhood of $\eta=0$ by assuming local alternatives.
The smooth surface $\partial\mathcal{R}$ of the region $\mathcal{R}$ is specified locally around $\eta=0$ by
$$ \eta_a(u) = u_a, \quad a = 1, \dots, p-1; \quad \eta_p(u) \approx -d^{ab}u_a u_b - e^{abc}u_a u_b u_c, $$
where $u = (u_1, \dots, u_{p-1})$ is the $(p-1)$-dimensional parameter vector to specify a point $\eta(u)$ on $\partial\mathcal{R}$. $\mathcal{R}$ is specified locally by $\eta_p \le \eta_p(u)$. It follows from the argument below equation (2.12) of Efron and Tibshirani (1998) that $d^{ab} = O(n^{-1/2})$ and $e^{abc} = O(n^{-1})$, and similarly, $\phi^{ijk} = O(n^{-1/2})$ and $\phi^{ijkl} = O(n^{-1})$.
Let $B_i^a(u) = \partial\eta_i/\partial u_a$, $i=1, \dots, p$, be the components of a tangent vector of the surface for $a=1, \dots, p-1$. They are given explicitly as
$$ B_b^a(u) = \delta_{ab}, \quad b=1, \dots, p-1; \quad B_p^a(u) \approx -2d^{ab}u_b - 3e^{abc}u_b u_c, $$
and the metric in the tangent space is given by
$$ (7.2) \qquad \begin{aligned} \phi^{ab}(u) ={}& \phi^{ij}(\eta(u)) B_i^a(u) B_j^b(u) \\ & + \{\phantom{-}4d^{ac}d^{bd} - 2d^{ac}\phi^{bdp} - 2d^{bd}\phi^{acp} - d^{cd}\phi^{abp} + \frac{1}{2}\phi^{abcd}\} u_c u_d, \end{aligned} $$
where $\phi^{ij}(\eta(u)) \approx \delta_{ij} + \phi^{ija}u_a + \{-d^{ab}\phi^{ipj} + \frac{1}{2}\phi^{abij}\}u_a u_b$. Let $B_i^p(u)$, $i=1, \dots, p$, be the components of the unit length normal vector orthogonal to the tangent vectors with respect to the metric such that
$$ \begin{align*} \phi^{ij}(\eta(u)) B_i^a(u) B_j^p(u) &= 0, && a=1, \dots, p-1; \\ \phi^{ij}(\eta(u)) B_i^p(u) B_j^p(u) &= 1. \end{align*} $$
The components are calculated explicitly as $B_a^p(u) \approx (2d^{ab} - \phi^{abp})u_b + \{3e^{abc} + d^{ab}\phi^{cpp} + d^{bc}\phi^{app} - 2d^{bd}\phi^{acd} + \phi^{abd}\phi^{cdp} + \frac{1}{2}\phi^{abp}\phi^{cpp} - \frac{1}{2}\phi^{abcp}\}u_b u_c$, and $B_p^p(u) \approx 1 - \frac{1}{2}\phi^{app}u_a + \{-2d^{ac}d^{bc} + \frac{1}{2}d^{ab}\phi^{ppp} + \frac{1}{2}\phi^{acp}\phi^{bcp} + \frac{3}{8}\phi^{app}\phi^{bpp} - \frac{1}{4}\phi^{abpp}\}u_a u_b$.
Let $v$ be a scalar, and $(u, v)$ be a $p$-dimensional vector. We consider reparameterization defined by
$$ (7.3) \qquad \eta_i(u, v) = \eta_i(u) + B_i^p(u)v, \quad i = 1, \dots, p, $$
and assume $\eta \leftrightarrow (u, v)$ is one-to-one at least locally around $\eta = 0$. $(u, v)$ gives the tube-coordinates of the point $\eta$. The boundary $\partial\mathcal{R}$ is expressed simply by $v=0$, and the region $\mathcal{R}$ is $v \le 0$. $(u, v)$ is used for indicating the parameter value $\eta = \eta(u, v)$, or the observation $y = \eta(u, v)$. When there is a possibility of confusion, we may write $y \leftrightarrow (\hat{u}, \hat{v})$ instead of $\eta \leftrightarrow (u, v)$.
Since the normal vector is orthogonal to the surface, $\eta(u) = \eta(u, 0) \in \partial\mathcal{R}$ is the projection of $\eta(u, v)$ onto $\partial\mathcal{R}$; $\hat{u}$ is the maximum likelihood estimate under the restricted model specified by $\partial\mathcal{R}$. $\eta(\hat{u}, 0)$ is denoted by $\hat{\eta}(y)$ in Section 1 as a function of $y$. $\hat{v}$ is the signed distance mentioned for (1.1) in Section 3.
$\hat{v}$ is also related to the signed likelihood ratio $R$ [McCullagh (1984) and Severini (2000)] by $R \approx \hat{v} + \frac{1}{6}\hat{\phi}^{ppp}\hat{v}^2 + \{\frac{1}{24}\hat{\phi}^{pppp} - \frac{1}{72}(\hat{\phi}^{ppp})^2\}\hat{v}^3$, where $\hat{\phi}^{ppp}$ and $\hat{\phi}^{pppp}$ are the third and fourth derivatives to the normal direction evaluated at $\eta(\hat{u}, 0)$, instead of $\eta = 0$. This third derivative is associated with the acceleration constant. For the acceleration constant $\hat{a}$, the formula $\hat{a} = -\frac{1}{6}\hat{\phi}^{ppp}$ is obtained directly from equation (2.9) of DiCiccio and Efron (1992), or by using equation (6.7) of Efron (1987) and $\partial^3\psi/\partial\theta^i\partial\theta^j\partial\theta^k = -\phi^{ijk}$. The expression for the density function of $(\hat{U}, \hat{V})$ is obtained from $f(y; \eta)$ by change of variables, as shown in the following lemma.
**LEMMA 1.** Let $Y \sim f(y; \eta)$ be the exponential family of distributions with $\eta = E(Y)$. Without loss of generality we may assume that $\operatorname{cov}(Y) = I_p$ at $\eta = 0$ and that the true parameter value is specified by $\eta = (0, \dots, 0, \lambda)$ for some $\lambda$, that is, $\eta_a = 0, a = 1, \dots, p-1, \eta_p = \lambda$, or, equivalently, $u = 0, v = \lambda$ using the tube-coordinates $(u, v) \leftrightarrow \eta$. Let $f(\hat{u}, \hat{v}; \lambda)$ be the joint density function of $(\hat{U}, \hat{V}) \leftrightarrow Y$. Then, ignoring the error of $O(n^{-3/2})$, we obtain
$$ (7.4) \qquad \begin{aligned}[t] \log f(\hat{u}, \hat{v}; \lambda) \approx{}& g(\hat{v}, \lambda) + g^a(\hat{v}, \lambda)\hat{u}_a + g^{ab}(\hat{v}, \lambda)\hat{u}_a\hat{u}_b \\ & + g^{abc}(\hat{v}, \lambda)\hat{u}_a\hat{u}_b\hat{u}_c + g^{abcd}(\hat{v}, \lambda)\hat{u}_a\hat{u}_b\hat{u}_c\hat{u}_d, \end{aligned} $$
where the five functions on the right-hand side are defined by $g(\hat{v}, \lambda) = -\frac{1}{2}p \log(2\pi) - \frac{1}{2}(\hat{v} - \lambda)^2 - \frac{1}{8}\phi^{ijj} + \frac{1}{6}(\phi^{ijk})^2 - \frac{1}{3}\phi^{ppp}\lambda^3 - \frac{1}{8}\phi^{pppp}\lambda^4 + \{2d^{aa} - \frac{1}{2}\phi^{aap} + \frac{1}{2}\phi^{ppp} + \frac{1}{2}\phi^{ppp}\lambda^2 + \frac{1}{6}\phi^{pppp}\lambda^3\}\hat{v} + \{-2(d^{ab})^2 + 2d^{ab}\phi^{abp} - \frac{3}{4}(\phi^{abp})^2 - \frac{1}{2}(\phi^{app})^2 - \frac{1}{4}(\phi^{ppp})^2 + \frac{1}{4}\phi^{pppp} + \frac{1}{4}\phi^{aapp}\}\hat{v}^2 - \frac{1}{6}\phi^{ppp}\hat{v}^3 - \frac{1}{24}\phi^{pppp}\hat{v}^4$, $g^a(\hat{v}, \lambda) = \frac{1}{2}\phi^{abb} + \frac{1}{2}\phi^{app}\lambda^2 + \frac{1}{6}\phi^{appp}\lambda^3 + \{-\frac{1}{2}\phi^{app}\lambda - d^{ab}\phi^{bcc} + 5d^{ab}\phi^{bpp} + \phi^{app}d^{bb} - 2\phi^{abc}d^{bc} + \frac{1}{2}\phi^{abp}\phi^{bcc} - \frac{3}{2}\phi^{abp}\phi^{bpp} + \frac{1}{4}\phi^{app}\phi^{bbp} - \frac{3}{4}\phi^{app}\phi^{ppp} + \frac{1}{2}\phi^{abc}\phi^{bcp} - \frac{1}{2}\phi^{abbp} + \frac{1}{2}\phi^{appp} + 6e^{abb} + d^{ab}\phi^{bpp}\lambda^2 - \frac{1}{2}\phi^{abp}\phi^{bpp}\lambda^2 - \frac{1}{4}\phi^{app}\phi^{ppp}\lambda^2\}\hat{v} + \{-d^{ab}\phi^{bpp} + \frac{1}{2}\phi^{abp}\phi^{bpp} + \frac{1}{4}\phi^{app}\phi^{ppp} - \frac{1}{6}\phi^{appp}\}\hat{v}^3$, $g^{ab}(\hat{v}, \lambda) = -\frac{1}{2}\delta_{ab} - d^{ab}\lambda - \frac{1}{2}d^{ab}\phi^{ccp} + \frac{1}{4}\phi^{abcc} - \frac{1}{4}\phi^{acd}\phi^{bcd} + 2d^{ac}d^{bc} - 2d^{ac}\phi^{bcp} - \frac{1}{2}d^{ab}\phi^{ppp}\lambda^2 + \{-d^{ab} + \frac{1}{2}\phi^{abp} - (2d^{ac}d^{bc} - \frac{1}{2}d^{ab}\phi^{ppp} + \frac{1}{4}\phi^{abpp} - \frac{1}{2}\phi^{acp}\phi^{bcp} - \frac{3}{8}\phi^{app}\phi^{bpp})\lambda\}\hat{v}$, $g^{abc}(\hat{v}, \lambda) = -\frac{1}{6}\phi^{abc} - e^{abc}\lambda + \{-2e^{abc} + \frac{1}{3}\phi^{abcp} - \frac{3}{2}d^{ab}\phi^{cpp} + d^{ad}\phi^{bcd} - \frac{1}{2}\phi^{abd}\phi^{cdp} - \frac{1}{4}\phi^{abp}\phi^{cpp}\}\hat{v}$, and $g^{abcd}(\hat{v}, \lambda) = -\frac{1}{2}d^{ab}d^{cd} + \frac{1}{2}\phi^{abp}d^{cd} - \frac{1}{24}\phi^{abcd}$.
7.3. *Changing the scale.* We define a density function $f(y; \eta, \tau)$ with mean $\eta$ and scale $\tau > 0$ by modifying $f(y; \eta)$. Here $\tau$ is regarded as a known constant, whereas $\eta$ is an unknown parameter vector. Let $\phi(\eta, \tau)$ be the potential function of $f(y; \eta, \tau)$, and $\phi(\eta)$ be that for $f(y; \eta)$. Since the density function is defined by specifying the potential function, the following equation gives a definition of $f(y; \eta, \tau)$:
$$ (7.5) \qquad \phi(\eta, \tau) = \phi(\eta)/\tau^2. $$
This $f(y; \eta, \tau)$ comes naturally from the multiscale bootstrap resampling. In fact, the potential function of the replicate $Y^*$ is $\phi(\eta, \tau) = \|\eta\|^2/(2\tau^2)$ for the normal example (2.1) of Section 2, and that is $\phi(\eta, \tau) = -n(1+\log\eta)/\tau^2$ for the exponential example of Section 4, and thus both agree with (7.5). The same applies to the exponential family, in general, as shown below.
**LEMMA 2.** Let $X$ be a $p$-dimensional random vector of the exponential family. We assume that $Y$ is expressed as a sum of $m$ independent $X$'s such that $Y = \sqrt{n}(X_1 + \cdots + X_m)/m$ for $m > 0$, and that the density function is $f(y; \eta)$ when $m=n$. Then $Y \sim f(y; \eta, \tau)$ with $\tau = \sqrt{n/m}$ for $\tau > 0$.
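Lemma 2 can be checked by simulation in the normal case, where $\operatorname{var}(Y) = n/m = \tau^2$; the values of $n$ and $m$ below are arbitrary.

```python
import math
import random
import statistics

# Monte Carlo check of Lemma 2 in the normal case: with Y = sqrt(n) * (mean of
# m standard normals), the scale is tau = sqrt(n/m), i.e. var(Y) = n/m.
# n and m are arbitrary illustrative values.
n, m = 10, 40
rng = random.Random(1)
B = 50_000
ys = [math.sqrt(n) * sum(rng.gauss(0.0, 1.0) for _ in range(m)) / m
      for _ in range(B)]
var_y = statistics.pvariance(ys)   # close to n/m = 0.25
```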
We continue to use the tube-coordinates defined by the reparameterization $\eta \leftrightarrow (u, v)$ of (7.3). By altering the potential from $\phi(\eta, 1)$ to $\phi(\eta, \tau)$, the metric, as well as the tube-coordinates, would change if we went back to the specification of $\eta(u)$ and $B^p(u)$ given in the previous section. However, we retain the specification with $\tau = 1$ for any $\tau > 0$, so that the reparameterization $\eta \leftrightarrow (u, v)$ does not depend on $\tau$.
**LEMMA 3.** Let $f(\hat{u}, \hat{v}; \lambda)$ be the joint density function of $(\hat{U}, \hat{V}) \leftrightarrow Y$ given in Lemma 1, and $f(\hat{u}, \hat{v}; \lambda, \tau)$ be that corresponding to $f(y; \eta, \tau)$ with scale $\tau > 0$. Then the expression of $\log f(\hat{u}, \hat{v}; \lambda, \tau)$ is obtained from (7.4) by changing $(\hat{u}, \hat{v})$ to

$$ (7.6) \qquad \qquad \tilde{u} = \hat{u}/\tau, \qquad \tilde{v} = \hat{v}/\tau, $$

by adding the logarithm of the Jacobian $\log(1/\tau^p)$ to (7.4), and replacing $\phi^{ijk}$, $\phi^{ijkl}$, $d^{ab}$, $e^{abc}$ and $\lambda$, respectively, with

$$ (7.7) \qquad \begin{aligned} \tilde{\phi}^{ijk} &= \tau \phi^{ijk}, & \tilde{\phi}^{ijkl} &= \tau^2 \phi^{ijkl}, \\ \tilde{d}^{ab} &= \tau d^{ab}, & \tilde{e}^{abc} &= \tau^2 e^{abc}, & \tilde{\lambda} &= \lambda/\tau. \end{aligned} $$

7.4. *Modified signed distance.* We consider yet another transformation of the coordinates for expressing the bootstrap $z$-values in modified $\hat{v}$ values. Let $w$ be a scalar variable defined formally by the series

$$ (7.8) \qquad w = v + \sum_{r=0}^{\infty} \bar{c}_r v^r + u_c \sum_{r=0}^{\infty} \bar{b}_r^c v^r, $$

where $v^r$ denotes the rth power. The coefficients are $\bar{c}_r = O(n^{-1/2})$ and $\bar{b}_r^c = O(n^{-1})$, and their expressions are specified later. We assume the transformation $(u, v) \leftrightarrow (u, w)$ is one-to-one at least locally around $(u, v) = 0$. By inverting the series in (7.8), we also have

$$ (7.9) \qquad v = w - \sum_{r=0}^{\infty} c_r w^r - u_c \sum_{r=0}^{\infty} b_r^c w^r, $$

where $c_r = \bar{c}_r - \sum_{s=0}^r (r-s+1)\bar{c}_{r-s+1}\bar{c}_s$, and $b_r^c = \bar{b}_r^c$. The coefficients are $c_r = O(n^{-1/2})$ and $b_r^c = O(n^{-1})$. Let $\tilde{W}$ be the random variable corresponding to $w$; the observed value $\hat{w}$ is defined by (7.8) but using the observed $(\hat{u}, \hat{v})$ instead of $(u, v)$.
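The stated inversion formula for the $c_r$ can be checked numerically: with coefficients of size $\varepsilon$, inverting (7.8) via $c_r = \bar{c}_r - \sum_{s=0}^r (r-s+1)\bar{c}_{r-s+1}\bar{c}_s$ (here with the $u_c$ term set to zero) should leave only an $O(\varepsilon^3)$ error. A small check, with arbitrary illustrative values for $\bar{c}_r$:

```python
# Numerical check of the series inversion (7.8) -> (7.9) with u_c = 0:
# the cbar values below are arbitrary illustrative coefficients of size eps.
cbar = [1e-3, 2e-3, -1e-3, 5e-4]          # cbar_0..cbar_3 (hypothetical)

def cb(r):
    """cbar_r, treated as zero beyond the list."""
    return cbar[r] if 0 <= r < len(cbar) else 0.0

v = 0.7
w = v + sum(cb(r) * v ** r for r in range(len(cbar)))

# c_r = cbar_r - sum_{s=0}^{r} (r-s+1) * cbar_{r-s+1} * cbar_s
c = [cb(r) - sum((r - s + 1) * cb(r - s + 1) * cb(s) for s in range(r + 1))
     for r in range(2 * len(cbar))]
v_back = w - sum(c[r] * w ** r for r in range(len(c)))

err = abs(v_back - v)
print(err)    # third-order small: roughly eps^3, far below eps^2
```

The residual error is of cubic order in the coefficients, confirming that the quadratic correction term in $c_r$ is exactly what the inversion requires.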

We call $\hat{w}$ a modified signed distance characterized by the coefficients $b_r^c$, $c_r$; $\hat{w}$ reduces to $\hat{v}$ when all these coefficients are zero. The z-values of the bootstrap probabilities are represented as $\hat{w}$ by appropriately specifying the coefficients. The following lemma plays a key role in studying the distributional properties of the bootstrap probabilities.

**LEMMA 4.** Let us assume that the distribution of $Y$ in the tube-coordinates is specified by $(\hat{U}, \hat{V}) \sim f(\hat{u}, \hat{v}; \lambda, \tau)$, and the coefficients in (7.9) are of order $b_r^c = O(n^{-1})$ for $r \ge 0$, $c_0 = O(n^{-1/2})$, $c_1 = O(n^{-1})$, $c_2 = O(n^{-1/2})$, $c_3 = O(n^{-1})$ and $c_r = O(n^{-3/2})$ for $r \ge 4$. We define $z_c(\hat{w}; \lambda, \tau)$ from the distribution function of the modified signed distance $\tilde{W}$ as

$$ \Pr\{\tilde{W} \le \hat{w}\} = \Phi(z_c(\hat{w}; \lambda, \tau)). $$

Then the $z_c$-formula is, ignoring the error of $O(n^{-3/2})$, expressed as

$$ (7.10) \qquad z_c(\hat{w}; \lambda, \tau) \approx \tau^{-1}g_-(\hat{w}, \lambda) + \tau g_+(\hat{w}, \lambda), $$

where

$$
\begin{aligned}
g_-(\hat{w}, \lambda) ={} & (\hat{w}-\lambda) - c_0 - \frac{1}{3}\phi^{ppp}\lambda^2 + \frac{1}{6}\phi^{ppp}\lambda\hat{w} + \left(\frac{1}{6}\phi^{ppp} - c_2\right)\hat{w}^2 - \frac{1}{6}c_0\phi^{ppp}\lambda \\
& - \left\{c_1 + \frac{1}{3}c_0\phi^{ppp}\right\}\hat{w} + \left\{\frac{1}{8}(\phi^{app})^2 + \frac{1}{18}(\phi^{ppp})^2 - \frac{1}{8}\phi^{pppp}\right\}\lambda^3 \\
& + \left\{-\frac{1}{8}(\phi^{app})^2 + \frac{1}{24}\phi^{pppp}\right\} \lambda^2 \hat{w} + \left\{-\frac{1}{24}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp} - \frac{1}{6}c_2\phi^{ppp}\right\} \lambda \hat{w}^2 \\
& + \left\{-\frac{1}{72}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp} - \frac{1}{3}c_2\phi^{ppp} - c_3\right\}\hat{w}^3
\end{aligned}
$$

and

$$
\begin{aligned}
g_+(\hat{w}, \lambda) ={} & -\left(d^{aa} + \frac{1}{6}\phi^{ppp}\right) \\
& + \left\{(d^{ab})^2 - d^{ab}\phi^{abp} + \frac{1}{6}d^{aa}\phi^{ppp} + \frac{1}{2}(\phi^{abp})^2 + \frac{1}{2}(\phi^{app})^2 + \frac{13}{72}(\phi^{ppp})^2 - \frac{1}{4}\phi^{aapp} - \frac{1}{8}\phi^{pppp}\right\}\hat{w} \\
& + \left\{(d^{ab})^2 - \frac{1}{6}d^{aa}\phi^{ppp} + \frac{1}{8}(\phi^{app})^2 + \frac{5}{72}(\phi^{ppp})^2 - \frac{1}{24}\phi^{pppp}\right\}\lambda.
\end{aligned}
$$

Note that the $z_c$-formula does not involve the coefficients $b_r^c$, and that the distribution function of $\tilde{W}$ is characterized by the coefficients $c_r$ with third-order accuracy. The index $c$ of $z_c$ indicates the coefficients $c_r$.

The true parameter value is assumed to be $(0, \lambda)$ in the $(u, v)$-coordinates for (7.4) and (7.10). If we alter the true parameter value to an arbitrary $(u, v)$ with $u \ne 0$, the expression changes as well, and $\Phi^{-1}(\Pr\{\tilde{W} \le \hat{w}\})$ is denoted as $z_c(\hat{w}; u, v, \tau)$, which reduces to $z_c(\hat{w}; 0, \lambda, \tau) = z_c(\hat{w}; \lambda, \tau)$ when $u = 0$ and $v = \lambda$.

$z_c(\hat{w}; u, v, \tau)$ is used for representing the bootstrap probabilities in particular. The simple bootstrap probability is, for example, $\hat{\alpha}_0(y) = \Pr\{\hat{V}^* \le 0; y\} = \Phi(z_c(0; \hat{u}, \hat{v}, 1))$ with all $c_r = 0$. The expression of $z_c(\hat{w}^*; \hat{u}, \hat{v}, \tau)$ is obtained from (7.10) by changing the origin to $\eta(\hat{u})$.
**LEMMA 5.** Let $Y^*$ be a replicate of $Y$ distributed conditionally as $Y^* \sim f(y^*; y, \tau)$ with mean $y$ and scale $\tau$, and $\tilde{W}^*$ be the corresponding modified signed distance. Let us denote the conditional distribution of $\tilde{W}^*$ given $y$ as $\Pr\{\tilde{W}^* \le \hat{w}^*; y\} = \Phi(z_c(\hat{w}^*; \hat{u}, \hat{v}, \tau))$. Then the expression of $z_c(\hat{w}^*; \hat{u}, \hat{v}, \tau)$ is obtained from (7.10) by replacing $\hat{w}$, $\lambda$, $\phi^{ppp}$ and $d_1 = d^{aa}$, respectively, with $\hat{w}^*$, $\hat{v}$,

$$
(7.11) \quad \hat{\phi}^{ppp} \approx \phi^{ppp} + \left\{ 3\phi^{bpp}(2d^{bc} - \phi^{bcp}) - \frac{3}{2}\phi^{cpp}\phi^{ppp} + \phi^{cppp} \right\} \hat{u}_c \quad \text{and}
$$

$$
(7.12) \quad \hat{d}_1 \approx d^{aa} + \left\{ \frac{1}{2} d^{aa} \phi^{cpp} - d^{ab} \phi^{abc} + 3e^{aac} \right\} \hat{u}_c.
$$


Note that these replacements alter the $O(n^{-1})$ terms only by $O(n^{-3/2})$. For example, $d_2 = (d^{ab})^2$ would be replaced with $\hat{d}_2$, but $\hat{d}_2 \approx d_2$.

7.5. *Pivot statistic.* Although the exactly unbiased $p$-value may not exist in general, a third-order accurate $p$-value can be derived under (1.3) and (1.4). Let $Y^* \sim f(y^*; \hat{\eta}(y), 1)$ be a replicate generated with mean $\hat{\eta}(y)$ instead of $y$, and $\hat{\alpha}_{\infty}(y)$ be defined as the probability of the corresponding signed distance $\hat{V}^*$ being greater than or equal to the observed value $\hat{v}$:

$$
\hat{\alpha}_{\infty}(y) = \mathrm{Pr}\{\hat{V}^* \geq \hat{v}; \hat{\eta}(y)\}.
$$
This is the exact *p*-value for the normal example of Section 2 and for the exponential example of Section 4. We will show that $\hat{\alpha}_{\infty}(y)$ is, in fact, third-order accurate under (1.3) and (1.4).

First, $\hat{z}_{\infty}(y) = -\Phi^{-1}(\hat{\alpha}_{\infty}(y))$ is expressed by the $z_c$-formula of Lemma 5.
From the definition, $\hat{z}_{\infty}(y) = z_c(\hat{v}; \hat{u}, 0, 1)$ with all $c_r = 0$ and, thus,

$$
\begin{aligned}
\hat{z}_{\infty}(y) \approx \hat{v} & - \left(\hat{d}_1 + \frac{1}{6}\hat{\phi}^{ppp}\right) + \frac{1}{6}\hat{\phi}^{ppp}\hat{v}^2 \\
& + \left\{ (d^{ab})^2 - d^{ab}\phi^{abp} + \frac{1}{6}d^{aa}\phi^{ppp} + \frac{1}{2}(\phi^{abp})^2 + \frac{1}{2}(\phi^{app})^2 + \frac{13}{72}(\phi^{ppp})^2 - \frac{1}{4}\phi^{aapp} - \frac{1}{8}\phi^{pppp} \right\}\hat{v} \\
& + \left\{-\frac{1}{72}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp}\right\}\hat{v}^3.
\end{aligned}
\tag{7.13}
$$

By comparing (7.13) with (7.8), we find that $\hat{z}_{\infty}(y)$ can be expressed as $\hat{w}$ with coefficients $\bar{c}_0 = -d^{aa} - \frac{1}{6}\phi^{ppp}$, $\bar{c}_1 = (d^{ab})^2 - d^{ab}\phi^{abp} + \frac{1}{6}d^{aa}\phi^{ppp} + \frac{1}{2}(\phi^{abp})^2 + \frac{1}{2}(\phi^{app})^2 + \frac{13}{72}(\phi^{ppp})^2 - \frac{1}{4}\phi^{aapp} - \frac{1}{8}\phi^{pppp}$, $\bar{c}_2 = \frac{1}{6}\phi^{ppp}$, $\bar{c}_3 = -\frac{1}{72}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp}$, $\bar{b}_0^c = -\frac{1}{2}d^{aa}\phi^{cpp} + d^{ab}\phi^{abc} - 3e^{aac}$ and $\bar{b}_2^c = \frac{1}{2}\phi^{bpp}(2d^{bc} - \phi^{bcp}) - \frac{1}{4}\phi^{cpp}\phi^{ppp} + \frac{1}{6}\phi^{cppp}$. Then the distribution function of $\hat{z}_{\infty}(y)$ is obtained immediately from Lemma 4 as shown below.
**LEMMA 6.** Let us consider a statistic

$$
\hat{z}_q(y) \approx \hat{z}_\infty(y) + q_0 + q_1 \hat{v} + q_2 \hat{v}^2 + q_3 \hat{v}^3 + \hat{u}_c g^c(\hat{v}),
$$


where the coefficients are $q_0 = O(n^{-1/2})$, $q_1 = O(n^{-1})$, $q_2 = O(n^{-1/2})$ and $q_3 = O(n^{-1})$, and $g^c(\hat{v}) = O(n^{-1})$, $c = 1, \dots, p-1$, representing arbitrary polynomials of $\hat{v}$. The index $q$ of $z_q$ indicates the coefficients. Assuming $(\hat{U}, \hat{V}) \sim f(\hat{u}, \hat{v}; \lambda, 1)$, the distribution function of $\hat{z}_q(y)$ is expressed as

$$
\begin{aligned}
& \Pr\{\hat{z}_q(Y) \le x; \lambda\} \\
& \approx \Phi\bigl[x - \lambda - q_0 - \frac{1}{3}\phi^{ppp}\lambda^2 + \frac{1}{6}\phi^{ppp}\lambda x - q_2x^2 \\
& \phantom{{}\approx{}} + \{(d^{ab})^2 + \frac{1}{8}(\phi^{app})^2 + \frac{7}{72}(\phi^{ppp})^2 - \frac{1}{24}\phi^{pppp} - \frac{1}{6}\phi^{ppp}q_0\}\lambda \\
& \phantom{{}\approx{}} + \{-q_1 - 2q_2(d^{aa} + \frac{1}{6}\phi^{ppp} - q_0)\}x + \{-\frac{1}{8}(\phi^{app})^2 + \frac{1}{24}\phi^{pppp}\}\lambda^2 x \\
& \phantom{{}\approx{}} + \{\frac{1}{3}\phi^{ppp}q_2 + 2q_2^2 - q_3\}x^3 + \{\frac{1}{8}(\phi^{app})^2 + \frac{1}{18}(\phi^{ppp})^2 - \frac{1}{8}\phi^{pppp}\}\lambda^3 \\
& \phantom{{}\approx{}} + \{-\frac{5}{72}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp} - \frac{1}{6}\phi^{ppp}q_2\}\lambda x^2\bigr].
\end{aligned}
\tag{7.14}
$$


For $\lambda = 0$, the distribution function is $\Pr\{\hat{z}_q(Y) \le x; 0\} \approx \Phi[x - q_0 - q_2x^2 + \{-q_1 - 2q_2(d^{aa} + \frac{1}{6}\phi^{ppp} - q_0)\}x + \{\frac{1}{3}\phi^{ppp}q_2 + 2q_2^2 - q_3\}x^3]$. In particular, $\Pr\{\hat{z}_\infty(Y) \le x; 0\} \approx \Phi(x)$ and, thus, $\hat{z}_\infty(y)$ is a third-order accurate pivot statistic. We obtain $\Pr\{\hat{\alpha}_\infty(Y) < \alpha; \eta\} \approx \alpha$ for $\eta \in \partial\mathcal{R}$, proving the third-order accuracy of $\hat{\alpha}_\infty(y)$.


The reverse of the above statement also holds. $\hat{\alpha}_q(y) = \Phi(-\hat{z}_q(y))$ is a third-order accurate $p$-value if and only if $q_0 \approx q_1 \approx q_2 \approx q_3 \approx 0$. If we confine our attention to $\hat{\alpha}_q(y)$ defined only from $\hat{v}$ and the geometric quantities $d^{ab}, e^{abc}, \phi^{ij}, \phi^{ijk}$ and $\phi^{ijkl}$ evaluated at $\hat{\eta}(y)$, then $\hat{u}_c g^c(\hat{v})$ in $\hat{z}_q(y)$ comes only from $q_r$'s by the replacements shown in Lemma 5. Thus, $\hat{\alpha}_q(y)$ is a third-order accurate $p$-value if and only if $\hat{\alpha}_q(y) \approx \hat{\alpha}_{\infty}(y)$. Similarly, $\hat{\alpha}_q(y)$ is second-order accurate if and only if $q_0 \doteq q_2 \doteq 0$ and, thus, $\hat{\alpha}_q(y) \doteq \hat{\alpha}_{\infty}(y)$.
$\hat{z}_{\infty}(y)$ is equivalent to other pivots in the literature up to $O(n^{-1})$ terms. Under (1.1) and (1.4), $\phi^{ijk} = \phi^{ijkl} = 0$ and, thus, (7.13) reduces to $\hat{z}_{\infty}(y) \approx \hat{v} - \hat{d}_1 + \hat{d}_2\hat{v}$, giving (3.8), the pivot of Efron (1985). Under (1.3), the modified signed likelihood ratio [Barndorff-Nielsen (1986) and Barndorff-Nielsen and Cox (1994)] has been known as a third-order accurate pivot, and it is expressed as $R^* = R + (1/R)\log(U/R)$ in the notation of Severini [(2000), page 251], where U is defined using the log-likelihood derivatives. A straightforward calculation shows that $U \approx \hat{v} - \hat{d}_1\hat{v}^2 + \{\frac{1}{2}(d^{aa})^2 + d^{ab}d^{ab} - \frac{1}{4}\phi^{aapp} - d^{ab}\phi^{abp} + \frac{1}{2}(\phi^{abp})^2 + \frac{1}{2}(\phi^{app})^2 + \frac{1}{8}(\phi^{ppp})^2 - \frac{1}{12}\phi^{pppp}\}\hat{v}^3$, and that $R^* \approx \hat{z}_{\infty}(y)$ in the moderate deviation region.
7.6. *Accuracy of the bootstrap probability.* Since the event $Y^* \in \mathcal{R}$ is equivalent to the event $\hat{V}^* \le 0$, the $z$-value of the bootstrap probability with scale $\tau$ is expressed by the $z_c$-formula of Lemma 5; $\tilde{z}_1(y, \tau) = -z_c(0; \hat{u}, \hat{v}, \tau)$ with all $c_r = 0$. From (7.10), we obtain a refined version of (4.8), erring only $O(n^{-3/2})$,
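The $\tau^{-1}$ and $\tau$ structure of the $z_c$-formula is what makes multiscale extrapolation work in practice: estimate the bootstrap $z$-value by Monte Carlo at several scales, fit $\beta_0/\tau + \beta_1\tau$ by least squares, and read off $\beta_0 - \beta_1$. The sketch below uses a hypothetical two-dimensional normal setting with a spherical region (the point, radius and simulation sizes are illustrative); $\beta_0 - \beta_1$ corresponds to the second-order approximately unbiased $z$-value, while the third-order $\hat{z}_\infty$ would require the fuller corrections of (7.13).

```python
import math
import random
from statistics import NormalDist

random.seed(1)
Phi_inv = NormalDist().inv_cdf

# Hypothetical normal example: Y* ~ N(y, tau^2 I) in R^2 with the region
# R = {||y|| <= r}; the observed point y and radius r are illustrative.
y, r = (3.0, 0.0), 2.0

def z1(tau, B=100000):
    """Monte Carlo z-value of the bootstrap probability at scale tau:
    z1 = -Phi^{-1}( Pr{Y* in R; y} )."""
    hits = sum(math.hypot(y[0] + tau * random.gauss(0, 1),
                          y[1] + tau * random.gauss(0, 1)) <= r
               for _ in range(B))
    return -Phi_inv(hits / B)

def fit_multiscale(taus, zs):
    """Least-squares fit of z(tau) ~ b0/tau + b1*tau (normal equations)."""
    s11 = sum(t ** -2 for t in taus)
    s12 = float(len(taus))               # sum over (1/t) * t
    s22 = sum(t ** 2 for t in taus)
    r1 = sum(z / t for z, t in zip(zs, taus))
    r2 = sum(z * t for z, t in zip(zs, taus))
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

taus = [0.8, 0.9, 1.0, 1.1, 1.2]
b0, b1 = fit_multiscale(taus, [z1(t) for t in taus])
print(b0, b1, b0 - b1)   # b0 - b1: extrapolated z-value
```

Here `fit_multiscale` solves the two-parameter normal equations directly; with exact inputs of the form $\beta_0/\tau + \beta_1\tau$ it recovers the coefficients exactly.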

$$
\begin{aligned}
\tilde{z}_1(y, \tau) &\approx \tau^{-1}[\hat{v} + \frac{1}{3}\hat{\phi}^{ppp}\hat{v}^2 - \{\frac{1}{8}(\phi^{app})^2 + \frac{1}{18}(\phi^{ppp})^2 - \frac{1}{8}\phi^{pppp}\}\hat{v}^3] \\
&\quad + \tau[(\hat{d}_1 + \frac{1}{6}\hat{\phi}^{ppp}) \\
&\quad - \{(d^{ab})^2 - \frac{1}{6}d^{aa}\phi^{ppp} + \frac{1}{8}(\phi^{app})^2 + \frac{5}{72}(\phi^{ppp})^2 - \frac{1}{24}\phi^{pppp}\}\hat{v}].
\end{aligned}
\tag{7.15}
$$

It follows from (7.15) that $\tau\tilde{z}_1(y, \tau)$ is expressed as $\hat{w}$ and, thus, $\tau\tilde{z}_1(y, \tau) \approx \hat{z}_q(y)$ by choosing the coefficients appropriately. They are $c_0 = (d^{aa} + \frac{1}{6}\phi^{ppp})\tau^2$, $c_1 = -(d^{ab})^2 - \frac{1}{2}d^{aa}\phi^{ppp} - \frac{1}{8}(\phi^{app})^2 - \frac{13}{72}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp}\tau^2$, $c_2 = \frac{1}{3}\phi^{ppp}$, and $c_3 = -\frac{1}{8}(\phi^{app})^2 - \frac{5}{18}(\phi^{ppp})^2 + \frac{1}{8}\phi^{pppp}$ for $\hat{w}$, or, equivalently, $q_0 = (1 + \tau^2)(d^{aa} + \frac{1}{6}\phi^{ppp})$, $q_1 = -(1+\tau^2)(d^{ab})^2 + d^{ab}\phi^{abp} + \frac{1}{4}\phi^{aapp} - \frac{1}{2}(\phi^{abp})^2 - \frac{1}{8}(4+\tau^2)(\phi^{app})^2 + \frac{1}{6}(-1+\tau^2)d^{aa}\phi^{ppp} - \frac{1}{72}(13+5\tau^2)(\phi^{ppp})^2 + \frac{1}{24}(3+\tau^2)\phi^{pppp}$, $q_2 = \frac{1}{6}\phi^{ppp}$, $q_3 = -\frac{1}{8}(\phi^{app})^2 - \frac{1}{24}(\phi^{ppp})^2 + \frac{1}{12}\phi^{pppp}$ for $\hat{z}_q(y)$. The distribution function of $\tau\tilde{z}_1(y, \tau)$ is obtained from (7.10) or (7.14). In particular, the distribution function of $\hat{z}_0(y) = \tilde{z}_1(y, 1)$ under $\lambda = 0, \tau = 1$ is

$$
\begin{aligned}
\Pr\{\hat{z}_0(Y) \le x; 0\} \approx{} & \Phi\bigl[x - (2d^{aa} + \frac{1}{3}\phi^{ppp}) - \frac{1}{6}\phi^{ppp}x^2 \\
& + \bigl\{2(d^{ab})^2 - d^{ab}\phi^{abp} + \frac{1}{3}d^{aa}\phi^{ppp} + \frac{1}{2}(\phi^{abp})^2 + \frac{5}{8}(\phi^{app})^2 + \frac{11}{36}(\phi^{ppp})^2 - \frac{1}{4}\phi^{aapp} - \frac{1}{6}\phi^{pppp}\bigr\}x \\
& + \bigl\{\frac{11}{72}(\phi^{ppp})^2 + \frac{1}{8}(\phi^{app})^2 - \frac{1}{12}\phi^{pppp}\bigr\}x^3\bigr],
\end{aligned}
\tag{7.16}
$$

showing the first-order accuracy of $\hat{\alpha}_0(y)$.
Remark A of Efron and Tibshirani (1998) discusses a calibrated bootstrap probability, denoted $\hat{\alpha}_{\text{double}}(y)$ here, using the double bootstrap of Hall (1992). Similarly to the two-level bootstrap, thousands of $Y^*$ are generated around $\hat{\eta}(y)$. Then $\hat{\alpha}_0(y^*)$ is computed for each $y^*$. The expression of $\hat{z}_{\text{double}}(y) = \Phi^{-1}[\Pr\{\hat{z}_0(Y^*) \le \hat{z}_0(y); \hat{\eta}(y)\}]$ is obtained from (7.16) by the replacements of Lemma 5, and a straightforward calculation shows that $\hat{z}_{\text{double}}(y) \approx \hat{z}_{\infty}(y)$, proving the third-order accuracy of $\hat{\alpha}_{\text{double}}(y)$.
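The calibration described in the remark can be sketched as a nested Monte Carlo. The toy normal setting below (region, projection $\hat{\eta}$, and all simulation sizes) is an illustrative assumption, and the second level is deliberately small, so the numbers are schematic rather than accurate:

```python
import math
import random

random.seed(2)

# Toy setting: region R = {||y|| <= r} in R^2, replicates Y* ~ N(., I).
r = 2.0
y = (3.0, 0.0)

def in_region(pt):
    return math.hypot(*pt) <= r

def eta_hat(pt):
    """Projection of pt onto the boundary of R (the MLE under the null)."""
    norm = math.hypot(*pt)
    return (pt[0] * r / norm, pt[1] * r / norm)

def alpha0(pt, B=2000):
    """Simple bootstrap probability Pr{Y* in R; pt} with Y* ~ N(pt, I)."""
    return sum(in_region((pt[0] + random.gauss(0, 1),
                          pt[1] + random.gauss(0, 1)))
               for _ in range(B)) / B

a0 = alpha0(y)
center = eta_hat(y)
# Second level: draw Y* around eta_hat(y) and ask how often alpha0(Y*)
# falls below the observed alpha0(y); this is the calibrated p-value.
stars = [alpha0((center[0] + random.gauss(0, 1),
                 center[1] + random.gauss(0, 1)), B=500)
         for _ in range(400)]
alpha_double = sum(s < a0 for s in stars) / len(stars)
print(a0, alpha_double)
```

The cost is the product of the two simulation sizes, which is why the multiscale approach, needing only one level of resampling at each scale, is attractive.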
7.7. *Accuracy of the two-level bootstrap.* The expression of $\hat{z}_0(y)$ is obtained from (7.15) by letting $\tau = 1$, and $\hat{z}_0(\hat{\eta}(y)) \approx \hat{d}_1 + \frac{1}{6}\hat{\phi}^{ppp}$ is obtained from it by letting $\hat{v} = 0$. By substituting these expressions, as well as $\hat{a} = -\frac{1}{6}\hat{\phi}^{ppp}$, for those in (2.3), we find that $\hat{z}_{abc}(y)$ is expressed as $\hat{w}$, or, equivalently, $\hat{z}_q(y)$ with coefficients $q_0 = q_2 = 0$, $q_1 = -2(d^{ab})^2 + \frac{1}{4}\phi^{aapp} + d^{ab}\phi^{abp} - \frac{1}{2}(\phi^{abp})^2 - \frac{5}{8}(\phi^{app})^2 - \frac{1}{4}(\phi^{ppp})^2 + \frac{1}{6}\phi^{pppp}$ and $q_3 = -\frac{1}{8}(\phi^{app})^2 - \frac{1}{8}(\phi^{ppp})^2 + \frac{1}{12}\phi^{pppp}$. The distribution function is then obtained from Lemma 6. For $\lambda = 0$, it becomes

$$ (7.17) \quad \Pr\{\hat{z}_{abc}(Y) \le x; 0\} \approx \Phi(x - q_1x - q_3x^3), $$

showing the second-order accuracy of $\hat{\alpha}_{abc}(y)$.
For the exponential example of Section 4, $p=1$, $\phi^{111} = -2/\sqrt{n}$, $\phi^{1111} = 6/n$ and all the other quantities in $q_1$ and $q_3$ are zero. Therefore, $q_1 = q_3 = 0$, and $\hat{z}_{abc}(y)$ turns out to be third-order accurate, explaining the high accuracy of $\hat{\alpha}_{abc}(y)$ observed in Table 2.
7.8. *Accuracy of the multistep-multiscale bootstrap.* Using the expressions (7.4) and (7.15), the expression of $\tilde{z}_2(y, \tau_1, \tau_2)$ is obtained by the integration

$$ (7.18) \quad \tilde{z}_2(y, \tau_1, \tau_2) = \Phi^{-1}\left\{\int \Phi(\tilde{z}_1(y^*, \tau_2)) f(y^*; y, \tau_1) dy^*\right\}. $$

By repeating the same integration using $\tilde{z}_2(y^*, \tau_2, \tau_3)$ instead of $\tilde{z}_1(y^*, \tau_2)$, we obtain the expression of $\tilde{z}_3(y, \tau_1, \tau_2, \tau_3)$ as given below.

**LEMMA 7.** Let us define the following six geometric quantities using the derivatives evaluated at $\eta = 0$: $\gamma_1 = \lambda + \frac{1}{3}\lambda^2\phi^{ppp} + \lambda^3\{-\frac{1}{8}(\phi^{app})^2 - \frac{1}{18}(\phi^{ppp})^2 + \frac{1}{8}\phi^{pppp}\}$, $\gamma_2 = \lambda\{-d^{aa} - \frac{1}{6}\phi^{ppp}\} + \lambda^2\{(d^{ab})^2 - \frac{1}{2}d^{aa}\phi^{ppp} + \frac{1}{8}(\phi^{app})^2 + \frac{1}{72}(\phi^{ppp})^2 - \frac{1}{24}\phi^{pppp}\}$, $\gamma_3 = -\frac{1}{6}\lambda\phi^{ppp} + \lambda^2\{\frac{1}{4}(\phi^{app})^2 + \frac{1}{9}(\phi^{ppp})^2 - \frac{1}{8}\phi^{pppp}\}$, $\gamma_4 = \lambda^2\{-d^{ab}\phi^{abp} + \frac{1}{3}d^{aa}\phi^{ppp} + \frac{1}{2}(\phi^{abp})^2 + \frac{1}{2}(\phi^{app})^2 + \frac{2}{9}(\phi^{ppp})^2 - \frac{1}{4}\phi^{aapp}\}$, $\gamma_5 = \lambda^2\{-\frac{1}{8}(\phi^{app})^2 - \frac{1}{8}(\phi^{ppp})^2 + \frac{1}{12}\phi^{pppp}\}$ and $\gamma_6 = \lambda^2\{-\frac{1}{8}(\phi^{app})^2 - \frac{1}{8}(\phi^{ppp})^2 + \frac{1}{24}\phi^{pppp}\}$. Those evaluated at $\hat{\eta}(y)$, denoted $\hat{\gamma}_1, \dots, \hat{\gamma}_6$, are obtained by replacing $\lambda$, $\phi^{ppp}$ and $d^{aa}$, respectively, with $\hat{v}$, (7.11) and (7.12) as shown in Lemma 5. Then we have

$$ (7.19) \quad \tilde{z}_3(y, \tau_1, \tau_2, \tau_3) \approx \zeta_3(\hat{\gamma}_1, \hat{\gamma}_2, \hat{\gamma}_3, \hat{\gamma}_4, \hat{\gamma}_5, \hat{\gamma}_6, \tau_1, \tau_2, \tau_3) $$

using the $\zeta_3$-function of (5.5). Since (7.19) errs only $O(n^{-3/2})$ for any values of $(\tau_1, \tau_2, \tau_3)$, the nonlinear regression for three-step multiscale bootstrap probabilities in Section 5 estimates the $\hat{\gamma}_i$'s up to $O(n^{-1})$ terms.
If we define $\hat{z}_3(y)$ of (5.6) using the $\hat{\gamma}_i$'s defined above, we can easily verify

$$
(7.20) \qquad \hat{z}_3(y) \approx \hat{z}_\infty(y)
$$


by comparing (5.6) with (7.13). This proves the third-order accuracy of $\hat{\alpha}_3(y)$ under (1.3) and (1.4).

For the multivariate normal model of (1.1), $\phi(\eta) = \|\eta\|^2/2$ and, thus, $\phi^{ijk} = \phi^{ijkl} = 0$. This implies $\gamma_3 = \cdots = \gamma_6 = 0$, proving the third-order accuracy of $\hat{\alpha}_1(y)$ and $\hat{\alpha}_2(y)$ under (1.1) and (1.4).

**Acknowledgments.** I wish to thank the referees and the Associate Editor who handled this article for their very helpful constructive suggestions. The earlier version of the manuscript was prepared during my stay at Stanford University arranged by Brad Efron.

REFERENCES

BARNDORFF-NIELSEN, O. E. (1986). Inference on full or partial parameters based on the standardized signed log likelihood ratio. *Biometrika* **73** 307–322.

BARNDORFF-NIELSEN, O. E. and COX, D. R. (1994). *Inference and Asymptotics*. Chapman and Hall, London.

DICICCIO, T. and EFRON, B. (1992). More accurate confidence intervals in exponential families. *Biometrika* **79** 231–245.

DRAPER, N. R. and SMITH, H. (1998). *Applied Regression Analysis*, 3rd ed. Wiley, New York.

EFRON, B. (1985). Bootstrap confidence intervals for a class of parametric problems. *Biometrika* **72** 45–58.

EFRON, B. (1987). Better bootstrap confidence intervals (with discussion). *J. Amer. Statist. Assoc.* **82** 171–200.

EFRON, B., HALLORAN, E. and HOLMES, S. (1996). Bootstrap confidence levels for phylogenetic trees. *Proc. Natl. Acad. Sci. U.S.A.* **93** 13429–13434.

EFRON, B. and TIBSHIRANI, R. (1998). The problem of regions. *Ann. Statist.* **26** 1687–1718.

FELSENSTEIN, J. (1985). Confidence limits on phylogenies: An approach using the bootstrap. *Evolution* **39** 783–791.

HALL, P. (1992). *The Bootstrap and Edgeworth Expansion*. Springer, New York.

KAMIMURA, T., SHIMODAIRA, H., IMOTO, S., KIM, S., TASHIRO, K., KUHARA, S. and MIYANO, S. (2003). Multiscale bootstrap analysis of gene networks based on Bayesian networks and nonparametric regression. In *Genome Informatics 2003* (M. Gribskov, M. Kanehisa, S. Miyano and T. Takagi, eds.) 350–351. Universal Academy Press, Tokyo.

KURIKI, S. and TAKEMURA, A. (2000). Shrinkage estimation towards a closed convex set with a smooth boundary. *J. Multivariate Anal.* **75** 79–111.

LIU, R. Y. and SINGH, K. (1997). Notions of limiting *P* values based on data depth and bootstrap. *J. Amer. Statist. Assoc.* **92** 266–277.

MCCULLAGH, P. (1984). Local sufficiency. *Biometrika* **71** 233–244.

PERLMAN, M. D. and WU, L. (1999). The emperor's new tests (with discussion). *Statist. Sci.* **14** 355–381.

PERLMAN, M. D. and WU, L. (2003). On the validity of the likelihood ratio and maximum likelihood methods. *J. Statist. Plann. Inference* **117** 59–81.

SEVERINI, T. A. (2000). *Likelihood Methods in Statistics*. Oxford Univ. Press.

SHIMODAIRA, H. (2002). An approximately unbiased test of phylogenetic tree selection. *Systematic Biology* **51** 492–508.

SHIMODAIRA, H. (2004). Technical details of the multistep-multiscale bootstrap resampling. Research Report B-403, Dept. Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo.

SHIMODAIRA, H. and HASEGAWA, M. (2001). CONSEL: For assessing the confidence of phylogenetic tree selection. *Bioinformatics* **17** 1246–1247.

WEYL, H. (1939). On the volume of tubes. *Amer. J. Math.* **61** 461–472.

DEPARTMENT OF MATHEMATICAL AND COMPUTING SCIENCES
TOKYO INSTITUTE OF TECHNOLOGY
2-12-1 OOKAYAMA, MEGURO-KU
TOKYO 152-8552
JAPAN
E-MAIL: shimo@is.titech.ac.jp
URL: www.is.titech.ac.jp/~shimo/

---

# Three Classifications on Branching Processes and Their Behavior for Finding the Solution of Nonlinear Integral Equations

B. F. Vajargah and M. Moradi

*Department of Mathematics, University of Guilan, Rasht, Iran*

E-mail: fathi@guilan.ac.ir

E-mail: mmoradi@guilan.ac.ir

Received December 4, 2009; revised May 11, 2010; published online July 15, 2010

**Abstract.** In this paper, we consider the Monte Carlo method for finding the solution of nonlinear integral equations at a fixed point $x_0$. In this method, a simulated Galton-Watson branching process is employed for solving the proposed integral equation. The main goal of this paper is to compare the behavior of the three classifications of branching processes based on the mean progeny, i.e. the subcritical, critical and supercritical processes.

**Keywords:** integral equation, branching process, Monte Carlo, simulation.

**AMS Subject Classification:** 78M31; 60J80; 45G10.

# 1 Introduction

Consider the following Fredholm integral equation with polynomial nonlinearity:

$$u(x) = f(x) + \int_{D} \dots \int_{D} K(x, y_{1}, \dots, y_{m}) \prod_{i=1}^{m} u(y_{i}) \prod_{i=1}^{m} dy_{i}, \quad (1.1)$$

where $D \subseteq \mathbb{R}$ and $m$ is a natural number greater than or equal to 2, $f(x) \in L_2(D)$ and the kernel $K(x,y_1, \dots, y_m)$ belongs to $L_2(D \times D \times \dots \times D) \equiv L_2(D^{m+1})$. It is assumed that this equation has an iterative solution corresponding to the iteration process:

$$u_{j+1}(x) = f(x) + \int_D \dots \int_D K(x, y_1, \dots, y_m) \prod_{i=1}^{m} u_j(y_i) \prod_{i=1}^{m} dy_i, \quad (1.2)$$

where $u_0(x) = f(x)$, $j = 0, 1, \dots$.
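When the kernel is known explicitly, the iteration (1.2) can also be run deterministically by quadrature, which is a useful baseline for the Monte Carlo method studied below. A minimal sketch for $m = 2$; the separable kernel $K(x, y_1, y_2) = \lambda x y_1 y_2$ on $D = [0, 1]$ and $f(x) = 1$ are illustrative choices, not taken from the paper:

```python
# Deterministic successive approximation (1.2) for a toy m = 2 kernel:
# K(x, y1, y2) = lam * x * y1 * y2 on D = [0, 1] with f(x) = 1 (hypothetical).
lam = 0.5
N = 200                                   # quadrature nodes
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]    # midpoint rule on [0, 1]

def f(x):
    return 1.0

u = [f(x) for x in xs]
for _ in range(50):
    # separable kernel: the double integral factors into (int y u(y) dy)^2
    s = sum(y * uy * h for y, uy in zip(xs, u))
    u = [f(x) + lam * x * s * s for x in xs]

# residual of the integral equation at the nodes; for this kernel the
# exact fixed point satisfies s = 1/2 + lam*s^2/3, i.e. s = 3 - sqrt(6)
s = sum(y * uy * h for y, uy in zip(xs, u))
residual = max(abs(u[i] - (f(xs[i]) + lam * xs[i] * s * s)) for i in range(N))
print(residual)
```

Because the fixed-point map is a contraction for this choice of $\lambda$, the iteration converges geometrically and the residual drops to quadrature precision.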

The first task is to construct a Monte Carlo estimator for evaluating the functional

$$
\langle g, u \rangle = \int_D g(x)u(x)dx. \tag{1.3}
$$


The functions $u(x)$ and $g(x)$ belong to any Banach space $X$ and to the adjoint space $X^*$, respectively, and $u(x)$ is a unique solution of the iterative process (1.2). More details can be found in [3].

## 2 Monte Carlo Method

In this section we introduce the Monte Carlo method for estimating (1.3). If the method of successive approximations

$$
u_{j+1}(x) = |f(x)| + \int_D \dots \int_D |K(x, y_1, \dots, y_m)| \prod_{i=1}^{m} u_j(y_i) \prod_{i=1}^{m} dy_i
$$


with initial approximation $u_0(x) = |f(x)|$ converges, then a branching process enables us to construct random variables whose mathematical expectations are equal to the functional (1.3).

In the theory of branching processes, we usually consider particles (such as neutrons or bacteria) that can generate new particles of the same type. The initial set of objects is referred to as belonging to the zero generation. Particles generated from the $n$th generation are said to belong to the $(n+1)$th generation. In a Galton-Watson branching process each particle remains alive for just one unit of time, and only at the end of its life does it produce a random number of progeny according to a probability distribution. Every particle generated in the first generation may live and generate similar particles, just as the particles in the zero generation do. In the second generation, the progeny particles behave in the identical way, and so on. Since the life spans of all particles are identical and equal to one, the process can be modeled by a discrete-time index identical to the number of generations [1].
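The three classifications named in the title, determined by the mean number of progeny per particle (subcritical $< 1$, critical $= 1$, supercritical $> 1$), can be illustrated by simulating extinction frequencies. The offspring distributions below are illustrative choices, and the generation and population caps are practical truncations:

```python
import random

random.seed(3)

def offspring(p):
    """Draw one offspring count from the distribution p = (p0, p1, p2, ...)."""
    u, acc = random.random(), 0.0
    for k, pk in enumerate(p):
        acc += pk
        if u < acc:
            return k
    return len(p) - 1

def extinction_freq(p, trials=400, max_gen=150, cap=1000):
    """Fraction of simulated Galton-Watson trees that die out."""
    dead = 0
    for _ in range(trials):
        z = 1                              # zero generation: one particle
        for _ in range(max_gen):
            if z == 0 or z > cap:          # extinct, or effectively exploded
                break
            z = sum(offspring(p) for _ in range(z))
        dead += (z == 0)
    return dead / trials

# Illustrative offspring distributions (p0, p1, p2) and their mean progeny:
sub  = extinction_freq((0.5, 0.3, 0.2))    # mean 0.7: subcritical
crit = extinction_freq((0.25, 0.5, 0.25))  # mean 1.0: critical
sup  = extinction_freq((0.2, 0.2, 0.6))    # mean 1.4: supercritical
print(sub, crit, sup)
```

Subcritical and critical processes die out with probability one (the critical case only slowly), while for the supercritical example the extinction probability solves $q = 0.2 + 0.2q + 0.6q^2$, giving $q = 1/3$.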

To obtain a random variable whose mathematical expectation is equal to (1.3), we consider a branching process with the following property: a particle distributed with initial density function

$$
p_0(x) \ge 0, \text{ and } \int_D p_0(x) dx = 1
$$

is born in the domain $\mathbb{R}^m$ at a random point $x_0$. In the next generation, this particle either dies out with probability $h(x_0)$, where $0 \le h(x_0) < 1$, or generates $m \ge 2$ new analogous particles at the random points $x_{00}, x_{01}, \ldots, x_{0m-1}$ with probability $p_m(x_0) = 1 - h(x_0)$ and the transition density function $p(x_0, x_{00}, x_{01}, \ldots, x_{0m-1}) \ge 0$, where

$$
\int_D \dots \int_D p(x_0, x_{00}, x_{01}, \dots, x_{0m-1}) \prod_{i=0}^{m-1} dx_{0i} = 1.
$$

---PAGE_BREAK---

The index numbers used here are called multi-indices. The particle belonging to the zero generation is enumerated with the zero index, i.e. $x_0$. Its direct inheritors are enumerated with the indices $00, 01, \dots, 0m-1$, i.e. the points $x_{00}, x_{01}, \dots, x_{0m-1}$ belong to the first generation. If a parent particle has index $t$, then its progeny have indices $t0, t1, \dots, tm-1$. The generated particles behave at the next moment as the initial one, and so on. The trace of such a process is a tree-like structure, shown in Figure 1.
**Figure 1.** A trace of the branching process.

To find the relation between the branching process and a solution of the integral equation (1.1), we calculate the first two iterations of the iterative equation (1.2) in the simple case $m = 2$. The branching process which corresponds to these two iterations is presented in Figure 2.
**Figure 2.** The branching processes in the case $m=2$.

---PAGE_BREAK---
$$
\begin{align*}
u_0(x_0) &= f(x_0), \\
u_1(x_0) &= f(x_0) + \iint K(x_0, x_{00}, x_{01}) f(x_{00}) f(x_{01})\, dx_{00}\, dx_{01}, \\
u_2(x_0) &= f(x_0) + \iint K(x_0, x_{00}, x_{01}) f(x_{00}) f(x_{01})\, dx_{00}\, dx_{01} \\
&\quad + \iint K(x_0, x_{00}, x_{01}) \Bigl( \iint K(x_{00}, x_{000}, x_{001}) f(x_{000}) f(x_{001})\, dx_{000}\, dx_{001} \Bigr) f(x_{01})\, dx_{00}\, dx_{01} \\
&\quad + \iint K(x_0, x_{00}, x_{01}) f(x_{00}) \Bigl( \iint K(x_{01}, x_{010}, x_{011}) f(x_{010}) f(x_{011})\, dx_{010}\, dx_{011} \Bigr) dx_{00}\, dx_{01} \\
&\quad + \iint K(x_0, x_{00}, x_{01}) \Bigl( \iint K(x_{00}, x_{000}, x_{001}) f(x_{000}) f(x_{001})\, dx_{000}\, dx_{001} \Bigr) \\
&\qquad \times \Bigl( \iint K(x_{01}, x_{010}, x_{011}) f(x_{010}) f(x_{011})\, dx_{010}\, dx_{011} \Bigr)\, dx_{00}\, dx_{01}.
\end{align*}
$$
Obviously, the term $u_0$ corresponds to the zero generation, see Fig. 2a, and the iterate $u_1$ corresponds to all trees which appear up to the first generation, see Figures 2a and 2b. The structure of the iterate $u_2$ is linked with all trees which appear up to the second generation.
A full tree with $n$ generations is called the tree $\Gamma_n$; in it no particle dies from the zero to the $(n-2)$-th generation, while all particles of the $(n-1)$-th generation die. We regard $\Gamma$ as a subtree of a full tree. Let $A$ denote the set of all particles that generate similar particles in the next steps, and let $B$ denote the set of all particles of the branches described above that die. Consider a branching process with $l$ generations in the general case $m \ge 2$. It corresponds to the iterative process (1.2) truncated at $l$ iterations. There is a one-to-one correspondence between the subtrees and the terms of the truncated iterative process with $l$ iterations. This correspondence allows us to construct a procedure for a random choice of the tree and to calculate the value of a random variable which corresponds to this tree. Thus, when we construct the branching process we obtain arbitrary subtrees $\Gamma_l$ of the full tree $\Gamma$. We set the random variable
$$
\Theta_g(\Gamma) = \frac{g(x_0)}{p_0(x_0)} \prod_{x_t \in A} \frac{K(x_t)}{p(x_t)} \prod_{x_t \in B} \frac{f(x_t)}{h(x_t)}
$$

with the density function

$$
p(\Gamma) = p_0(x_0) \prod_{x_t \in A} p(x_t) \prod_{x_t \in B} h(x_t), \quad (2.1)
$$

where $K(x_t)$ denotes $K(x_t, x_{t1}, x_{t2}, \ldots, x_{tm})$ and

$$
p(x_t) = p_m(x_t)\,p(x_t, x_{t1}, x_{t2}, \dots, x_{tm}), \quad h(x_t) = 1 - p_m(x_t),
$$

---PAGE_BREAK---

where the points $x_{t1}, x_{t2}, \dots, x_{tm}$ are generated by $x_t$.
According to the following theorem, we obtain a random variable for an arbitrary tree $\Gamma$ which estimates the $l$-th iterate $u_l$ of the iterative process (1.2) [3].

**Theorem 1.** The mathematical expectation of the r.v. $\Theta_g(\Gamma)$ is equal to the functional $\langle g, u_l \rangle$, i.e.

$$E(\Theta_g(\Gamma)) = \langle g, u_l \rangle = \int_D g(x) u_l(x) dx.$$

It is clear that if $l \to \infty$, then the mathematical expectation of the random variable is

$$\lim_{l \to \infty} E(\Theta_g(\Gamma)) = \langle g, u \rangle = \int_D g(x)u(x)dx.$$

The case when the given function is $g(x) = \delta(x - x_0)$, where $\delta(\cdot)$ is the delta function, is of special interest, because we then calculate the value of $u$ at a fixed point $x_0$. If we simulate $N$ branches $\Gamma$ (the number of Markov chains), we can estimate $u(x_0)$ by taking the average of the corresponding realizations as the Monte Carlo solution with $g(x) = \delta(x - x_0)$.
The probable error of this method is $r_N = 0.6745\,\sigma(\Theta(\Gamma))/\sqrt{N}$, where $\sigma(\Theta(\Gamma))$ is the standard deviation of $\Theta(\Gamma)$ [5]. To reduce the error we can take a sufficiently large $N$ or reduce the variance $\sigma^2(\Theta(\Gamma))$.
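For concreteness, the probable error $r_N$ can be computed from a sample of simulated realizations of $\Theta(\Gamma)$ as follows (a minimal sketch; the function name is ours):

```python
import math
import statistics

def probable_error(samples):
    # r_N = 0.6745 * sigma / sqrt(N), with sigma the (population)
    # standard deviation of the realizations of Theta(Gamma)
    n = len(samples)
    sigma = statistics.pstdev(samples)
    return 0.6745 * sigma / math.sqrt(n)
```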
The problem of optimization of Monte Carlo algorithms consists in the minimization of the standard deviation, i.e. the minimization of the second moment $E(\Theta_g^2(\Gamma))$ of the r.v. $\Theta_g(\Gamma)$. This is done by a suitable choice of the density function $p(\Gamma)$. Dimov has shown that the probability transition density

$$p(x, y_1, y_2, \dots, y_m) = \frac{|\mathcal{K}(x, y_1, y_2, \dots, y_m)|}{\int \dots \int |\mathcal{K}(x, y_1, y_2, \dots, y_m)| \prod_{i=1}^{m} dy_i}$$

and the initial probability density $p_0(x) = |g(x)|/\int |g(x)|dx$ minimize the variance; in this case the density function $p(\Gamma)$ in (2.1) is called almost optimal [2].
Now, we present the Monte Carlo algorithm using the almost optimal density function:

*Almost Optimal Monte Carlo Algorithm*

1. Set $\Theta_g(\Gamma) = 1$ and choose the point $\xi$ at which $u(\xi)$ is to be calculated.

2. Choose an independent realization, $\alpha$, of the random variable uniformly distributed in the interval $(0, 1)$.

3. If $\alpha \le p_m(\xi)$ then go to step 5, else go to step 4.

4. Set $\Theta_g(\Gamma) = \Theta_g(\Gamma) \times \frac{f(\xi)}{1-p_m(\xi)}$. In this case we say that the point dies out.

5. Generate the points $\xi_1, \xi_2, \dots, \xi_m$ with transition density function

$$p(x, y_1, y_2, \dots, y_m) = \frac{|\mathcal{K}(x, y_1, y_2, \dots, y_m)|}{\int \dots \int |\mathcal{K}(x, y_1, y_2, \dots, y_m)| \prod_{i=1}^{m} dy_i}.$$

---PAGE_BREAK---

6. Obtain

$$ \Theta_g(\Gamma) = \Theta_g(\Gamma) \times \frac{K(\xi, \xi_1, \xi_2, \ldots, \xi_m)}{p_m(\xi) \times p(\xi, \xi_1, \xi_2, \ldots, \xi_m)}. $$

7. Repeat steps 2 and 3 for each of the generated points $\xi_1, \xi_2, \ldots, \xi_m$.

8. Stop the algorithm when all points die out.
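As a hedged illustration of these steps, the following sketch applies the algorithm to the test equation of Section 4, $u(x) = 2 - 3x^4 + 3\int_0^1\!\int_0^1 x^4 yz\,u(y)u(z)\,dy\,dz$, so that $f(x) = 2 - 3x^4$, $\mathcal{K}(x,y,z) = 3x^4 yz$, $m = 2$ and $g = \delta(\cdot - x_0)$. For this kernel the almost optimal transition density factorizes as $p(x,y,z) = 4yz$, so each offspring point can be sampled as $\sqrt{U}$ with $U$ uniform on $(0,1)$, and the step-6 weight reduces to $3x^4/(4p_m)$. The constant death probability, the depth cap of 20 (which introduces the truncation bias mentioned in Section 4) and all function names are our own choices:

```python
import random

def f(x):
    return 2.0 - 3.0 * x ** 4

def theta(x, pm, depth=0, max_depth=20):
    # One realization of the branching estimator for u(x).
    # With probability pm the particle branches into two offspring,
    # otherwise (or at the depth cap) it dies with weight f(x)/(1 - pm).
    if depth >= max_depth or random.random() > pm:
        return f(x) / (1.0 - pm)              # step 4: the point dies out
    # step 5: sample y, z with marginal density 2t on (0, 1), i.e. p(x,y,z) = 4yz
    y = random.random() ** 0.5
    z = random.random() ** 0.5
    # step 6: weight K/(pm * p) = 3*x^4*y*z / (pm * 4*y*z) = 3*x^4/(4*pm)
    w = 3.0 * x ** 4 / (4.0 * pm)
    return w * theta(y, pm, depth + 1, max_depth) * theta(z, pm, depth + 1, max_depth)

def u_mc(x, pm=0.3, N=3000):
    # Monte Carlo estimate of u(x): average over N independent branches
    return sum(theta(x, pm) for _ in range(N)) / N
```

With $p_2 = 0.3$ (a subcritical choice) and a few thousand chains, `u_mc(0.5)` should be close to the exact value $u(0.5) = 2$.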
## 3 Classification of Branching Processes

A very important classification of a branching process is based on the mean progeny of a particle, i.e. $m = E(X)$, where the random variable $X$ denotes the number of progeny of a particle. In the expected-value sense, the process grows geometrically if $m > 1$, stays constant if $m = 1$, and decays geometrically if $m < 1$. These cases are called supercritical, critical, and subcritical, respectively. If we denote the number of particles at time $t$ by $Z_t$, then $E[Z_t] = m^t$, so $E[Z_t] \uparrow \infty$, $E[Z_t] = 1$, and $E[Z_t] \downarrow 0$ for the supercritical, critical, and subcritical branching process, respectively.

Let us consider the probability $q_t = P(Z_t = 0)$ that the process is extinct by time $t$. The sequence $q_t$ tends to a limit $q$, which is the probability of eventual extinction. If $m > 1$, then $0 \le q < 1$. If $m \le 1$, then $q$ equals 1.

The supercritical and subcritical processes behave as expected from the expression for the means. The critical process is counterintuitive: although the mean value stays constant and equal to 1, the process becomes extinct almost surely [7]. In this paper we compare these classes: if $p_m < \frac{1}{m}$, $p_m = \frac{1}{m}$ or $p_m > \frac{1}{m}$, then we have the subcritical, critical and supercritical process, respectively.
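This classification can be checked numerically. For the offspring law used in this paper (a particle produces $m$ progeny with probability $p_m$ and none with probability $1 - p_m$), the mean progeny is $m\,p_m$, which recovers the $p_m \lessgtr 1/m$ criterion, and the extinction probability $q$ is the smallest fixed point in $[0,1]$ of the probability generating function $F(s) = (1 - p_m) + p_m s^m$. A small sketch (function names are ours):

```python
def classify(pm, m):
    # mean progeny: m with probability pm, 0 otherwise
    mean = m * pm
    if mean < 1.0:
        return "subcritical"
    if mean == 1.0:
        return "critical"
    return "supercritical"

def extinction_probability(pm, m, iters=200):
    # q_t = P(Z_t = 0) satisfies q_{t+1} = F(q_t) with q_0 = 0,
    # where F(s) = (1 - pm) + pm * s**m is the offspring p.g.f.
    q = 0.0
    for _ in range(iters):
        q = (1.0 - pm) + pm * q ** m
    return q
```

For $m = 2$, `extinction_probability(0.3, 2)` converges to 1 (subcritical), while `extinction_probability(0.6, 2)` converges to $2/3 < 1$ (supercritical).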
## 4 Numerical Results

For the numerical tests of the Monte Carlo algorithm, we focus on three parameters: (i) $p_m$, (ii) the number of employed Markov chains ($N$), (iii) the total number of points employed in the branching process. Since in the case of a supercritical branching process the probability of extinction is not equal to one, the number of generated points tends to infinity with positive probability, so we have to impose a rule that limits the number of points. We note that this stopping rule for terminating the process biases our results. Here we take the maximum height of a supercritical branching process to be 20, where the height of a branching process is the length of the path from the parent node to the deepest node.
Consider the following Fredholm integral equation ($m = 2$):

$$ u(x) = ax^4 + bx^3 + cx^2 + dx + e + \lambda \int_0^1 \int_0^1 x^4 y z\, u(y) u(z)\, dy\, dz. $$
This integral equation, under the condition

$$ \lambda = \frac{3}{2} \left( \frac{a}{6} + \frac{b}{5} + \frac{c}{4} + \frac{d}{3} + \frac{e}{2} \right)^{-1}, $$

---PAGE_BREAK---
**Figure 3.** The results of Monte Carlo simulation in case $p_2 = 0.1$.

**Figure 4.** The results of Monte Carlo simulation in case $p_2 = 0.2$.

**Figure 5.** The results of Monte Carlo simulation in case $p_2 = 0.3$.

has the exact solution

$$u(x) = \left(a + \frac{9}{\lambda}\right)x^4 + bx^3 + cx^2 + dx + e.$$
Here, we want to evaluate the following integral equation, with unique solution

---PAGE_BREAK---

**Figure 6.** The results of Monte Carlo simulation in case $p_2 = 0.4$.

**Figure 7.** The results of Monte Carlo simulation in case $p_2 = 0.5$.

**Figure 8.** The results of Monte Carlo simulation in case $p_2 = 0.6$.

$u(x) = 2$, by Monte Carlo simulation:

$$u(x) = 2 - 3x^4 + 3 \int_0^1 \int_0^1 x^4 yz\,u(y)u(z) \, dydz. \quad (4.1)$$

---PAGE_BREAK---
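The claim that $u(x) = 2$ solves (4.1) can be verified directly: with $u \equiv 2$, $\int_0^1\!\int_0^1 yz\,u(y)u(z)\,dy\,dz = 4 \cdot \tfrac12 \cdot \tfrac12 = 1$, so the right-hand side of (4.1) equals $2 - 3x^4 + 3x^4 = 2$. A quick numerical confirmation (midpoint rule; the function name is ours):

```python
def rhs(x, n=400):
    # right-hand side of (4.1) with u set to the constant 2,
    # using the midpoint rule for the separable double integral
    h = 1.0 / n
    s = sum(((i + 0.5) * h) * 2.0 * h for i in range(n))  # ~ int_0^1 y*u(y) dy = 1
    return 2.0 - 3.0 * x ** 4 + 3.0 * x ** 4 * s * s
```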
Figures 3–10 present the Monte Carlo solutions and their errors for $p_2 = 0.1, \dots, 0.8$ and $N = 3000$. From these figures, we conclude that using $p_2 = 0.2, 0.3, 0.4, 0.5$ we obtain better results, see Figures 5–8.

**Figure 9.** The results of Monte Carlo simulation in case $p_2 = 0.7$.

**Figure 10.** The results of Monte Carlo simulation in case $p_2 = 0.8$.

To see the behavior of the Monte Carlo method, we have repeated the algorithm five times using the same parameters. For $p_2 = 0.1, 0.2, \dots, 0.8$ and $N = 500, 1000, 2000, 3000, 4000, 5000$ the solution of (4.1) is shown in Figs. 11 and 12. These figures relate the Monte Carlo simulation and the total number of points employed in the simulated branching processes to the error of the Monte Carlo method. We find that the total number of points in the simulated branching processes and the error are independent (see Fig. 11, left-hand side). We conclude that for $p_2 = 0.3$ and $p_2 = 0.4$ the obtained results are better than the others (Fig. 12, left-hand side).
Now, we may consider the following hypothesis test:

$$ \begin{cases} H_0: \mu_1 < \mu_2 \\ H_1: \mu_1 \ge \mu_2 \end{cases} $$

where $\mu_1$ denotes the mean of the errors of the Monte Carlo simulation in the subcritical and critical classes, and $\mu_2$ denotes the same error for the

---PAGE_BREAK---

**Figure 11.** The results for five repeats of Monte Carlo simulation.

**Figure 12.** The results for five repeats of Monte Carlo simulation.

supercritical class. Using the t-test for two independent samples, we accept $H_0$ with 99 percent confidence.
## 5 Conclusion

The Monte Carlo method can be used efficiently to obtain the solution of integral equations with polynomial nonlinearity (1.1) if we employ a subcritical or critical branching process. Since the estimator employed in the supercritical case is biased, there is a relatively high systematic error.
## References

[1] K.B. Athreya and P.E. Ney. *Branching Processes*. Springer-Verlag, Berlin, 1972.

[2] I. Dimov. *Monte Carlo Methods for Applied Scientists*. World Scientific, 2008.

[3] S.M. Ermakov and G.A. Mikhailov. *Statistical Simulation*. Nauka, Moscow, 1982.

[4] V.B. Fathi and M. Moradi. Solving nonlinear Fredholm differential integral equations by Monte Carlo method. *Int. J. Appl. Math.*, **19**(4):411–421, 2006.

---PAGE_BREAK---

[5] T. Grušov. Minimization of the probable error of the Monte Carlo method for solving of nonlinear integral equation. *J. Mathematica Balkanica*, **6**:237–249, 1992.

[6] T. Grušov. Monte Carlo methods for nonlinear equations. *Advances in Numerical Methods and Applications*, pp. 127–135, 1994.

[7] M. Kimmel and E.A. David. *Branching Processes in Biology*. Springer-Verlag, New York, 2002.
samples/texts_merged/2796137.md
Invariant-Set-Based Analysis and Design – A Survey of Some Noticeable Contributions

Octavian PASTRAVANU, Mihaela Hanako MATCOVSCHI, Mihail VOICU

Technical University Gh. Asachi of Iași,
Department of Automatic Control and Applied Informatics,
Blvd. Mangeron 53A, 700050 Iași, Romania

E-mail: {opastrav, mhanako, mvoicu}@ac.tuiasi.ro
**Abstract.** Invariant-set-based techniques represent a research direction in systems engineering that emerged during the last two decades. The authors of the current paper have brought some noticeable contributions to the development of this direction. The exposition of our results is connected to the international evolution of the field. Our material offers an overview structured at two levels:

1. Present framework – results available for invariant sets with general shapes described by arbitrary Hölder *p*-norms: (i) Types of dynamical systems and invariant sets under consideration; (ii) Invariance and stability; (iii) Invariance criteria for nonlinear systems; (iv) Invariance criteria for linear systems (time-invariant, time-varying, positive, with interval-type uncertainties); (v) Linear synthesis based on invariant sets; (vi) Comparison methods for invariant sets.

2. Researches prefiguring the present framework – results for invariant sets with rectangular shapes: (i) Linear time-invariant systems; (ii) Linear systems with interval-type uncertainties; (iii) Linear synthesis; (iv) Nonlinear systems.
# 1. Introduction

The exploration of invariant sets with respect to system dynamics was initiated by (Nagumo, 1942) and developed between 1960 and 1980 by well-known mathematicians such as (Yorke, 1968), (Crandall, 1972), (Martin Jr, 1973), (Brezis, 1976), (Pavel, 1977) – see monograph [1]. In the mid-eighties, this topic was also addressed

---PAGE_BREAK---

by control engineering researchers such as (Voicu, 1984), (Bitsoris, 1988), (Molchanov and Pyatnitskii, 1986), (Blanchini, 1990) – see monograph [2]. The investigations in the area of systems theory and engineering increased during the 1990s, the significant results being discussed in the survey paper [3]. A more elaborate version of this survey, with ample extensions, including numerous examples and covering the evolution up to 2005 inclusive, yielded the monograph [2].

The current paper presents the contributions of the three authors to the development of system analysis and design techniques based on invariant sets. Chronologically speaking, these contributions initially focused on invariant sets with rectangular shapes; later on, the results referred to invariant sets with general shapes described by arbitrary Hölder $p$-norms. Unlike most of the approaches reported in the literature, which consider constant or exponentially decreasing invariant sets, our researches encompass the general case of sets with arbitrary time dependence.

To ensure a unified exposition, and for brevity as well, our presentation does not follow chronological order. Thus, Section 2 provides a picture of the present framework we have built relying on our entire research experience in the field, most of the results being fairly recent and treating invariant sets with arbitrary shapes. The previous contributions, limited to invariant sets with rectangular shapes, are discussed in Section 3, which also accommodates these earlier results as particular cases of the general construction developed in Section 2. This strategy in the organization of the paper makes Sections 2 and 3 unbalanced from the material-allocation point of view, but it ensures a global vision of the currently available tools, corroborated with the progress of our researches during the past two decades. For extended approaches (including proofs of the results, examples, comparisons with other techniques, etc.) the reader is referred to the full texts of our works.
Throughout the paper we use the following notations:

* For a vector $\mathbf{x} \in \mathbb{R}^n$, $\|\mathbf{x}\|_p$ is the Hölder vector $p$-norm defined by $\|\mathbf{x}\|_p = \left[|x_1|^p + \dots + |x_n|^p\right]^{1/p}$ for $1 \le p < \infty$, and by $\|\mathbf{x}\|_\infty = \max_{1 \le i \le n} |x_i|$ for $p = \infty$.
* For a matrix $\mathbf{M} \in \mathbb{R}^{n \times n}$, $\|\mathbf{M}\|_p$ is the matrix norm induced by the vector $p$-norm $\|\bullet\|_p$, $1 \le p \le \infty$; $\mu_{\|\bullet\|_p}(\mathbf{M}) = \lim_{\theta \downarrow 0} \frac{1}{\theta} \left( \|\mathbf{I} + \theta \mathbf{M}\|_p - 1 \right)$ is a matrix measure (also called "logarithmic norm") based on the matrix norm $\|\bullet\|_p$.

If $\mathbf{M} \in \mathbb{R}^{n \times n}$ is a symmetric matrix, $\mathbf{M} \prec 0$ ($\mathbf{M} \preceq 0$) means that matrix $\mathbf{M}$ is negative definite (semidefinite). If $\mathbf{X}, \mathbf{Y} \in \mathbb{R}^{n \times m}$, then "$\mathbf{X} \le \mathbf{Y}$", "$\mathbf{X} < \mathbf{Y}$" denote componentwise inequalities.
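For the common norms the matrix measure has closed forms, e.g. $\mu_{\|\bullet\|_\infty}(\mathbf{M}) = \max_i \bigl( m_{ii} + \sum_{j \ne i} |m_{ij}| \bigr)$, and the limit definition can be approximated numerically with a small $\theta$. A brief sketch (pure-Python, $p = \infty$ only; function names are ours):

```python
def norm_inf(M):
    # induced infinity-norm of a matrix: maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in M)

def mu_inf_numeric(M, theta=1e-8):
    # mu(M) ~ (||I + theta*M|| - 1) / theta for small theta > 0
    n = len(M)
    ipm = [[(1.0 if i == j else 0.0) + theta * M[i][j] for j in range(n)]
           for i in range(n)]
    return (norm_inf(ipm) - 1.0) / theta

def mu_inf_exact(M):
    # closed form of the measure based on the infinity-norm
    n = len(M)
    return max(M[i][i] + sum(abs(M[i][j]) for j in range(n) if j != i)
               for i in range(n))
```

Note that, unlike a norm, a matrix measure can be negative; negative measures of this kind typically underlie the invariance and stability criteria discussed below.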
## 2. Present framework – results available for invariant sets with general shapes

### 2.1. Types of dynamical systems and invariant sets considered by our researches

The present framework provides concepts and instruments for studying invariance properties relative to dynamical systems described by

---PAGE_BREAK---

$$ \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0, \qquad (1) $$

where $\mathbf{f}: \mathbb{R}^n \times \mathbb{R}_+ \rightarrow \mathbb{R}^n$ is continuously differentiable in $\mathbf{x} \in \mathbb{R}^n$ and continuous in $t \in \mathbb{R}_+$; $\mathbf{f}(\mathbf{0}, t) = \mathbf{0}$ for all $t \in \mathbb{R}_+$, meaning that the origin $\{\mathbf{0}\}$ is an *equilibrium*. Let $\mathbf{x}(t; t_0, \mathbf{x}_0)$ denote the state-space trajectory of system (1) initialized in $\mathbf{x}(t_0) = \mathbf{x}_0$.
Two types of sets are considered, each of them being described by Hölder $p$-norms, $1 \le p \le \infty$.

Sets with arbitrary time-dependence, defined by

$$ S_{p,\mathbf{H}(t)}^{c} = \{ \mathbf{x} \in \mathbb{R}^{n} \mid \|\mathbf{H}^{-1}(t)\mathbf{x}\|_{p} \le c \}, \quad t \ge 0, \; c > 0, \qquad (2) $$

where $\mathbf{H}(t)$ is a diagonal matrix whose diagonal entries are positive, continuously differentiable functions:

$$ \mathbf{H}(t) = \operatorname{diag}\{h_1(t), \dots, h_n(t)\}, \quad h_i(t) > 0, \; t \in \mathbb{R}_+, \; i = 1, \dots, n. \qquad (3) $$

Sets with exponential time-dependence, defined by

$$ S_{p, \mathbf{D}e^{rt}}^{c} = \{ \mathbf{x} \in \mathbb{R}^{n} \mid \|\mathbf{D}^{-1}\mathbf{x}\|_{p} \le c e^{rt} \}, \quad t \ge 0, \; c > 0, \qquad (4) $$

where $\mathbf{D}$ is a diagonal matrix whose diagonal entries are positive constants:

$$ \mathbf{D} = \operatorname{diag}\{d_1, \dots, d_n\}, \quad d_i > 0, \; i = 1, \dots, n, \qquad (5) $$

and $r < 0$ is a negative constant.
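To make the definitions concrete, membership of a state in the exponentially time-dependent set (4) can be tested directly from the defining inequality (a hedged sketch; function names are ours):

```python
import math

def p_norm(x, p):
    # Hölder vector p-norm; p = float('inf') gives the max-norm
    if p == float("inf"):
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

def in_exponential_set(x, d, c, r, t, p):
    # checks ||D^{-1} x||_p <= c * exp(r*t) for D = diag(d_1, ..., d_n)
    scaled = [xi / di for xi, di in zip(x, d)]
    return p_norm(scaled, p) <= c * math.exp(r * t)
```

Since $r < 0$, the bound $c e^{rt}$ shrinks over time, so a state that belongs to the set at $t = 0$ may fall outside it at a later $t$.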
In geometrical terms, the aforementioned sets possess the following characteristics (which also suggest their significance for applications):

The axes of the coordinate system in $\mathbb{R}^n$ represent symmetry axes, regardless of the Hölder $p$-norm, $1 \le p \le \infty$.

The Hölder $p$-norm defines the shape of the set at each time $t \ge 0$. For the frequently used Hölder $p$-norms with $p \in \{1, 2, \infty\}$, the shape is a hyper-diamond, a hyper-ellipse, or a hyper-rectangle (regarded as generalizations of the representations in $\mathbb{R}^2$).

For a given (but arbitrary) constant $c > 0$, the lengths of the $n$ semi-axes at a given moment $t \ge 0$ are defined by $ch_i(t) > 0$ for $S_{p,\mathbf{H}(t)}^c$ (2) and $cd_i e^{rt} > 0$ for $S_{p,\mathbf{D}e^{rt}}^c$ (4).
**Definition 1.** (Invariance of a set of form $S_{p,\mathbf{H}(t)}^c$ / $S_{p,\mathbf{D}e^{rt}}^c$ with respect to a system)

Let $1 \le p \le \infty$ and $c > 0$. The set $S_{p,\mathbf{H}(t)}^c$ / $S_{p,\mathbf{D}e^{rt}}^c$ defined by (2) / (4) is flow invariant with respect to (abbreviated as FI w.r.t.) system (1) if any trajectory $\mathbf{x}(t; t_0, \mathbf{x}_0)$ of system (1) initialized at $t_0$ in $S_{p,\mathbf{H}(t_0)}^c$ / $S_{p,\mathbf{D}e^{rt_0}}^c$ does not leave $S_{p,\mathbf{H}(t)}^c$ / $S_{p,\mathbf{D}e^{rt}}^c$ for any $t \ge t_0$, i.e.

(a) for the set $S_{p,\mathbf{H}(t)}^c$:

$$
\begin{align}
& \forall t_0 \in \mathbb{R}_+, \forall \mathbf{x}_0 \in \mathbb{R}^n, \|\mathbf{H}^{-1}(t_0)\mathbf{x}_0\|_p \le c &&\Rightarrow \tag{6} \\
& \forall t > t_0, \|\mathbf{H}^{-1}(t)\mathbf{x}(t; t_0, \mathbf{x}_0)\|_p \le c; && \nonumber
\end{align}
$$
---PAGE_BREAK---

(b) for the set $S_{p, \mathbf{D}e^{rt}}^c$:

$$
\begin{aligned}
& \forall t_0 \in \mathbb{R}_+, \forall \mathbf{x}_0 \in \mathbb{R}^n, \|\mathbf{D}^{-1}\mathbf{x}_0\|_p \le ce^{rt_0} \Longrightarrow \\
& \forall t > t_0, \|\mathbf{D}^{-1}\mathbf{x}(t; t_0, \mathbf{x}_0)\|_p \le ce^{rt}.
\end{aligned}
\tag{7}
$$
**Remark 1.** (Comments on the equilibrium trajectories considered for the dynamical systems described by (1))

As mentioned in the first paragraph of the current subsection, our work considers the state-space origin $\{\mathbf{0}\}$ as an equilibrium trajectory for the nonlinear system of form (1). Generally speaking, nonlinear systems may also exhibit equilibrium trajectories defined by sets with an infinite number of points (such as closed orbits, or limit cycles). This case has not been addressed by our research yet, and requires a more complex scenario. For instance, if $\mathcal{X}_e \subset \mathbb{R}^n$ denotes the set of all points corresponding to an equilibrium trajectory, then the time-dependent sets of form (2) are to be replaced by

$$
\tilde{S}_{p,\mathbf{H}(t)}^c = \bigcup_{\bar{\mathbf{x}} \in \mathcal{X}_e} \left\{ \mathbf{x} \in \mathbb{R}^n \mid \|\mathbf{H}^{-1}(t)(\mathbf{x}-\bar{\mathbf{x}})\|_p \le c \right\}, \quad t \ge 0, \; c > 0. \tag{2'}
$$

The development of invariance criteria for this type of sets cannot be approached by a direct generalization of the results presented below for (2) and, for the time being, it may be regarded as a problem that remains open for further investigations. $\square$
### 2.2. Set invariance versus stability

This subsection analyzes the connections between the invariant sets and the properties of stability, asymptotic stability or exponential stability of the equilibrium $\{\mathbf{0}\}$ of system (1). The local or global character of the property is taken into consideration. We show that the existence of sets FI w.r.t. system (1) yields a refinement of the stability concepts.

**Theorem 1.** [4] (*Connection with the "stability" property*)

Let $1 \le p \le \infty$. Assume that the functions $h_i(t)$, $i=1, \dots, n$, in (3) are bounded.

(i) If there exists $\rho > 0$ such that $\forall c \in (0, \rho]$ the sets $S_{p,\mathbf{H}(t)}^c$ are FI w.r.t. system (1), then the equilibrium $\{\mathbf{0}\}$ of system (1) is locally stable. If system (1) is autonomous (time-invariant), then the equilibrium $\{\mathbf{0}\}$ of system (1) is locally uniformly stable.

(ii) If $\forall c > 0$ the sets $S_{p,\mathbf{H}(t)}^c$ are FI w.r.t. system (1), then the equilibrium $\{\mathbf{0}\}$ of system (1) is globally stable. If system (1) is autonomous (time-invariant), then the equilibrium $\{\mathbf{0}\}$ of (1) is globally uniformly stable. $\square$
|
| 145 |
+
|
| 146 |
+
**Theorem 2.** [4] (*Connection with the “asymptotic stability” property*)
|
| 147 |
+
Let $1 \le p \le \infty$. Assume the functions $h_i(t)$, $i = 1, \dots, n$, in (3) satisfy the conditions
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\lim_{t \to \infty} h_i(t) = 0, \quad i = 1, \dots, n. \tag{8}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
(i) If there exists $\rho > 0$ such that $\forall c \in (0, \rho]$, the sets $S_{p,H(t)}^c$ are FI w.r.t. system (1), then equilibrium $\{0\}$ of system (1) is locally asymptotically stable. If system (1) is autonomous (time-invariant), then equilibrium $\{0\}$ of system (1) is locally uniformly asymptotically stable.
(ii) If $\forall c > 0$ the sets $S_{p,H(t)}^c$ are FI w.r.t. system (1), then equilibrium $\{0\}$ of system (1) is globally asymptotically stable. If system (1) is autonomous (time-invariant), then equilibrium $\{0\}$ of system (1) is globally uniformly asymptotically stable. $\square$

**Theorem 3.** [4] (*Connection with the “exponential stability” property*)

Let $1 \le p \le \infty$.
(i) If there exists $\rho > 0$ such that $\forall c \in (0, \rho]$, the sets $S_{p,De^{rt}}^c$ are FI w.r.t. system (1), then equilibrium $\{0\}$ of system (1) is locally exponentially stable.

(ii) If $\forall c > 0$ the sets $S_{p,De^{rt}}^c$ are FI w.r.t. system (1), then equilibrium $\{0\}$ of system (1) is globally exponentially stable. $\square$
**Remark 2.** (*Refinement of some stability concepts*)

The converse parts of Theorems 1–3 are false in general, showing that the invariance properties are stronger than the stability ones. Consequently, relying on the invariance properties, we get the following refinement of the stability concepts of equilibrium $\{0\}$ of (1):

(a) *Diagonally invariant stability relative to the p-norm* (abbreviated DIS$_p$), local or global – if there exists $\mathbf{H}(t)$ for which Theorem 1 is satisfied.

(b) *Diagonally invariant asymptotic stability relative to the p-norm* (abbreviated DIAS$_p$), local or global – if there exists $\mathbf{H}(t)$ for which Theorem 2 is satisfied.

(c) *Diagonally invariant exponential stability relative to the p-norm* (abbreviated DIES$_p$), local or global – if there exists $De^{rt}$ for which Theorem 3 is satisfied. $\square$
## 2.3. Invariance criteria for nonlinear systems

This subsection provides sufficient conditions for the invariance of the sets of form $S_{p,H(t)}^c / S_{p,De^{rt}}^c$ with respect to nonlinear, time-variant systems. The time-invariant case represents a particular form of the results.
Consider the Jacobian matrix with respect to $\mathbf{x}$ of the vector function $f$ defining system (1)

$$
\mathbf{J}(\mathbf{x}, t) = \left[ \frac{\partial f(\mathbf{x}, t)}{\partial \mathbf{x}} \right] \in \mathbb{R}^{n \times n}. \tag{9}
$$
Also consider the $n \times n$ matrix defined by

$$
\mathbf{A}(\mathbf{x}, t) = \int_{0}^{1} \mathbf{J}(s\mathbf{x}, t)\,\mathrm{d}s, \tag{10}
$$

which allows writing system (1) in the equivalent form

$$
\dot{\mathbf{x}}(t) = \mathbf{A}(\mathbf{x}(t), t) \mathbf{x}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0. \tag{1'}
$$
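As a quick numerical illustration (our own sketch, not taken from the cited papers; the toy system and the step count are arbitrary choices), the matrix $\mathbf{A}(\mathbf{x}, t)$ in (10) can be approximated by quadrature and the factorization $f(\mathbf{x}, t) = \mathbf{A}(\mathbf{x}, t)\mathbf{x}$, which underlies (1'), checked directly for a system with $f(\mathbf{0}, t) = \mathbf{0}$:

```python
import numpy as np

# Toy time-invariant system with f(0) = 0 (arbitrary choice):
#   x1' = -x1 + x2^2,  x2' = -2 x2 + x1 x2
def f(x):
    return np.array([-x[0] + x[1]**2, -2.0 * x[1] + x[0] * x[1]])

def jacobian(x):
    # Analytic Jacobian J(x) of f, see (9)
    return np.array([[-1.0, 2.0 * x[1]],
                     [x[1], -2.0 + x[0]]])

def A_of_x(x, n_steps=1000):
    # A(x) = integral_0^1 J(s x) ds, midpoint rule, see (10)
    s = (np.arange(n_steps) + 0.5) / n_steps
    return sum(jacobian(si * x) for si in s) / n_steps

x = np.array([0.3, -0.7])
print(np.allclose(A_of_x(x) @ x, f(x), atol=1e-8))   # True
```

The identity holds because $f(\mathbf{x}) - f(\mathbf{0}) = \int_0^1 \frac{\mathrm{d}}{\mathrm{d}s} f(s\mathbf{x})\,\mathrm{d}s = \int_0^1 \mathbf{J}(s\mathbf{x})\mathbf{x}\,\mathrm{d}s$.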
**Theorem 4.** [5] (*Invariance of the sets of form $S_{p,H(t)}^c$*)

Let $1 \le p \le \infty$.
(i) Let $\rho > 0$ be a positive constant and $\Omega_p \subseteq \mathbb{R}^n$ a set with the property $S_{p,H(t)}^\rho \subseteq \Omega_p$, $\forall t \in \mathbb{R}_+$. (For instance, if the functions $h_i(t)$, $i = 1, \dots, n$, in (3) are bounded, then $\Omega_p$ can be defined as $\Omega_p = \{\mathbf{x} \in \mathbb{R}^n \,|\, \|\mathbf{H}_{\sup}^{-1}\mathbf{x}\|_p \le \rho\}$ with $\mathbf{H}_{\sup} = \operatorname{diag}\{\sup_{t \in \mathbb{R}_+} h_1(t), \dots, \sup_{t \in \mathbb{R}_+} h_n(t)\}$.) The sets $S_{p,H(t)}^c$, $c \in (0, \rho]$, are FI w.r.t. system (1), if one of the following conditions is fulfilled:

$$
\mu_{\|\cdot\|_p} \left( \mathbf{H}^{-1}(t)\mathbf{A}(\mathbf{x}, t)\mathbf{H}(t) - \mathbf{H}^{-1}(t)\dot{\mathbf{H}}(t) \right) \le 0, \tag{11a}
$$

or

$$
\mu_{\|\cdot\|_p} \left( \mathbf{H}^{-1}(t)\mathbf{J}(\mathbf{x}, t)\mathbf{H}(t) - \mathbf{H}^{-1}(t)\dot{\mathbf{H}}(t) \right) \le 0, \tag{11b}
$$

for any $t \in \mathbb{R}_+$ and any $\mathbf{x} \in \Omega_p$.

(ii) The sets $S_{p,H(t)}^c$, $c > 0$, are FI w.r.t. system (1), if one of the conditions (11a) or (11b) is fulfilled for any $t \in \mathbb{R}_+$ and any $\mathbf{x} \in \mathbb{R}^n$. $\square$
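The matrix measure $\mu_{\|\cdot\|_p}$ appearing in (11a)–(11b) has well-known closed forms for $p \in \{1, 2, \infty\}$; a minimal numerical sketch (the test matrix is an arbitrary choice):

```python
import numpy as np

def mu_1(M):
    # Measure for the 1-norm: max over columns j of m_jj + sum_{i != j} |m_ij|
    n = M.shape[0]
    return max(M[j, j] + sum(abs(M[i, j]) for i in range(n) if i != j)
               for j in range(n))

def mu_2(M):
    # Measure for the 2-norm: largest eigenvalue of (M + M^T)/2
    return float(np.linalg.eigvalsh((M + M.T) / 2.0)[-1])

def mu_inf(M):
    # Measure for the inf-norm: max over rows i of m_ii + sum_{j != i} |m_ij|
    n = M.shape[0]
    return max(M[i, i] + sum(abs(M[i, j]) for j in range(n) if j != i)
               for i in range(n))

A = np.array([[-3.0, 1.0], [0.5, -2.0]])
# All three measures are negative, so (11b) holds with H(t) = I for this A
print(mu_1(A), mu_2(A), mu_inf(A))   # -1.0, about -1.6, -1.5
```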
**Theorem 5.** [5] (*Invariance of the sets of form* $S_{p,De^{rt}}^c$)

Let $1 \le p \le \infty$.
(i) Let $\rho > 0$ be a positive constant and $\Omega_p \subseteq \mathbb{R}^n$ a set with the property $S_{p,De^{rt}}^\rho \subseteq \Omega_p$, $\forall t \in \mathbb{R}_+$. (For instance, $\Omega_p$ can be defined as $\Omega_p = \{\mathbf{x} \in \mathbb{R}^n \,|\, \|\mathbf{D}^{-1}\mathbf{x}\|_p \le \rho\}$.) The sets $S_{p,De^{rt}}^c$, $c \in (0, \rho]$, are FI w.r.t. system (1), if one of the following conditions is fulfilled for $\forall t \in \mathbb{R}_+$ and $\forall \mathbf{x} \in \Omega_p$:

$$
\mu_{\|\cdot\|_p} (\mathbf{D}^{-1}\mathbf{A}(\mathbf{x}, t)\mathbf{D}) \le r, \tag{12a}
$$

or

$$
\mu_{\|\cdot\|_p} (\mathbf{D}^{-1}\mathbf{J}(\mathbf{x}, t)\mathbf{D}) \le r. \tag{12b}
$$

(ii) The sets $S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. system (1), if one of the conditions (12a) or (12b) is fulfilled for $\forall t \in \mathbb{R}_+$ and $\forall \mathbf{x} \in \mathbb{R}^n$. $\square$
## 2.4. Invariance criteria for linear systems

This subsection provides sufficient conditions for the invariance of the sets of form $S_{p,H(t)}^c / S_{p,De^{rt}}^c$ with respect to the following types of linear systems: time-variant, time-invariant, positive, interval.

### 2.4.1. Linear time-variant systems

Consider the linear time-variant system

$$
\dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0, \tag{13}
$$

where $\mathbf{A}(t)$ is an $n \times n$ matrix whose entries are continuous functions for $t \in \mathbb{R}_+$.
**Theorem 6.** [5] (*Invariance of the sets of form* $S_{p,H(t)}^c$)

Let $1 \le p \le \infty$. The sets $S_{p,H(t)}^c$, $c > 0$, are FI w.r.t. system (13), if and only if the following condition is fulfilled:

$$
\forall t \in \mathbb{R}_+, \quad \mu_{\|\cdot\|_p} \left( \mathbf{H}^{-1}(t) \mathbf{A}(t) \mathbf{H}(t) - \mathbf{H}^{-1}(t) \dot{\mathbf{H}}(t) \right) \le 0. \tag{14}
$$
**Theorem 7.** [5] (*Invariance of the sets of form* $S_{p,De^{rt}}^c$)

Let $1 \le p \le \infty$. The sets $S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. system (13), if and only if the following condition is fulfilled:

$$
\forall t \in \mathbb{R}_{+}, \quad \mu_{\|\cdot\|_p} (\mathbf{D}^{-1} \mathbf{A}(t) \mathbf{D}) \le r. \tag{15}
$$
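A numerical sanity check of Theorem 7 for $p = \infty$ (our own sketch; the system, $\mathbf{D}$, and the Euler discretization are arbitrary choices): when (15) holds, every trajectory obeys $\|\mathbf{D}^{-1}\mathbf{x}(t)\|_\infty \le e^{r(t - t_0)} \|\mathbf{D}^{-1}\mathbf{x}_0\|_\infty$, i.e., it never leaves the set $S_{\infty,De^{rt}}^c$ it starts in:

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.5, -3.0]])
D = np.diag([1.0, 2.0])
Dinv = np.linalg.inv(D)

M = Dinv @ A @ D
r = max(M[i, i] + sum(abs(M[i, j]) for j in range(2) if j != i)
        for i in range(2))            # smallest r allowed by (15) for p = inf

x = np.array([1.0, -2.0])
c = np.max(np.abs(Dinv @ x))          # c = ||D^{-1} x0||_inf at t0 = 0
dt, steps = 1e-4, 30000
ok = True
for k in range(steps):
    x = x + dt * (A @ x)              # forward Euler step of (13)
    bound = c * np.exp(r * (k + 1) * dt)
    ok = ok and (np.max(np.abs(Dinv @ x)) <= bound * (1.0 + 1e-9))
print(ok)   # True: the trajectory respects the exponential envelope
```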
### 2.4.2. Linear time-invariant systems

Consider the linear time-invariant system

$$
\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0, \tag{16}
$$

where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is a matrix with constant entries.
**Remark 3.** (*Consequence of Theorems 6 and 7*)

Let $1 \le p \le \infty$. The sets $S_{p,H(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. system (16), if and only if condition (14) / (15) is fulfilled for $\mathbf{A}(t) = \mathbf{A}$ (the constant matrix defined by (16)). $\square$
**Remark 4.** [6], [7] (*Generalization of the “diagonal stability” concept*)

Condition (15), written for the constant matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ in the form

$$
\mu_{\|\cdot\|_p} (\mathbf{D}^{-1} \mathbf{A} \mathbf{D}) < 0, \tag{17}
$$

represents the generalization to an arbitrary Hölder norm of the well-known Lyapunov inequality $\mathbf{A}^T\mathbf{P} + \mathbf{P}\mathbf{A} < 0$ with $\mathbf{P} = (\mathbf{D}^{-1})^2$ [8], which is equivalent to $\mu_{\|\cdot\|_2} (\mathbf{D}^{-1}\mathbf{A}\mathbf{D}) < 0$. In other words, inequality (17) characterizes the diagonal stability of the matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ relative to a Hölder $p$-norm. By developing this new point of view, we have shown that diagonal stability is not a concept exclusively associated with the quadratic norm ($p = 2$), as it was treated before our investigations in [7]. $\square$
Given a matrix $\mathbf{A} = (a_{ij}) \in \mathbb{R}^{n \times n}$, define the “bar” operator $(\bar{\cdot})$ that provides the matrix $\bar{\mathbf{A}} = (\bar{a}_{ij})$ built as follows:

$$
\bar{a}_{ii} = a_{ii}, \quad i = 1, \dots, n; \qquad \bar{a}_{ij} = |a_{ij}|, \quad i \neq j. \tag{18}
$$
**Theorem 8.** [7] (*Existence of the sets of form $S_{p,De^{rt}}^c$ FI w.r.t. system (16)*)

(a) Let $p = 1, \infty$. There exist sets $S_{p,De^{rt}}^c$, $c > 0$, FI w.r.t. system (16), if and only if the matrix $\bar{\mathbf{A}}$ is Hurwitz stable.

(b) Let $1 < p < \infty$. There exist sets $S_{p,De^{rt}}^c$, $c > 0$, FI w.r.t. system (16), if the matrix $\bar{\mathbf{A}}$ is Hurwitz stable. $\square$
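The “bar” operator (18) and the Hurwitz test of Theorem 8 are immediate to implement; a minimal sketch (the test matrix is an arbitrary choice):

```python
import numpy as np

def bar(A):
    # "Bar" operator (18): keep the diagonal, absolute values off the diagonal
    B = np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(B, np.diag(A))
    return B

def is_hurwitz(M):
    # All eigenvalues in the open left half-plane
    return bool(np.max(np.real(np.linalg.eigvals(M))) < 0)

A = np.array([[-3.0, -1.0], [2.0, -4.0]])
# bar(A) = [[-3, 1], [2, -4]], eigenvalues -2 and -5
print(is_hurwitz(bar(A)))   # True: Theorem 8 guarantees FI sets S_{p,De^rt}^c
```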
### 2.4.3. Linear positive systems

Consider the linear positive system defined by (16), where $\mathbf{A}$ is an essentially nonnegative matrix.

**Theorem 9.** [9] (*Existence of the sets of form $S_{p,De^{rt}}^c$ FI w.r.t. the linear positive system*)

Let $1 \le p \le \infty$. There exist sets $S_{p,De^{rt}}^c$, $c > 0$, FI w.r.t. positive system (16), if and only if the matrix $\bar{\mathbf{A}}$ is Hurwitz stable. $\square$
### 2.4.4. Linear systems with interval-type uncertainties

Consider the linear system

$$
\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0 \in \mathbb{R}^n, \quad t \ge t_0, \quad \mathbf{A} \in \mathcal{A}^I, \tag{19}
$$

where $\mathcal{A}^I$ is an interval matrix

$$
\mathcal{A}^I = \{ \mathbf{A} \in \mathbb{R}^{n \times n} \,|\, \mathbf{A}^{-} \le \mathbf{A} \le \mathbf{A}^{+} \}, \tag{20}
$$

defined by the componentwise inequalities $a_{ij}^{-} \le a_{ij} \le a_{ij}^{+}$, $i,j = 1, \dots, n$, with $a_{ij}^{-}$, $a_{ij}$, $a_{ij}^{+}$ denoting the generic elements of the matrices $\mathbf{A}^{-}$, $\mathbf{A}$, $\mathbf{A}^{+}$. Define the majorant matrix of $\mathcal{A}^I$, denoted by $\mathbf{U} = (u_{ij})_{i,j=1,\dots,n}$, built as follows:

$$
\begin{align}
u_{ii} &= \sup_{\mathbf{A} \in \mathcal{A}^I} \{a_{ii}\} = a_{ii}^{+}, & i = 1, \dots, n, \nonumber \\
u_{ij} &= \sup_{\mathbf{A} \in \mathcal{A}^I} |a_{ij}| = \max\{|a_{ij}^{-}|, |a_{ij}^{+}|\}, & i \neq j, \; i,j = 1, \dots, n. \tag{21}
\end{align}
$$
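Building the majorant matrix (21) from the interval bounds, together with the Hurwitz test used later in Theorem 12, can be sketched as follows (the interval bounds are an arbitrary choice):

```python
import numpy as np

def majorant(A_lo, A_hi):
    # U from (21): u_ii = a_ii^+; u_ij = max(|a_ij^-|, |a_ij^+|) for i != j
    U = np.maximum(np.abs(A_lo), np.abs(A_hi))
    np.fill_diagonal(U, np.diag(A_hi))
    return U

A_lo = np.array([[-5.0, -1.0], [-0.5, -4.0]])
A_hi = np.array([[-3.0,  1.0], [ 0.5, -2.0]])
U = majorant(A_lo, A_hi)        # [[-3, 1], [0.5, -2]]
hurwitz = bool(np.max(np.real(np.linalg.eigvals(U))) < 0)
print(hurwitz)   # True: FI sets of form S_{p,De^rt}^c exist for the whole family
```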
**Theorem 10.** [10] (*Invariance of the sets of form $S_{p,\mathbf{H}(t)}^c$*)

(a) Let $p = 1, \infty$. The sets $S_{p,\mathbf{H}(t)}^c$, $c > 0$, are FI w.r.t. interval system (19) if and only if the following condition is fulfilled:

$$
\forall t \in \mathbb{R}_{+}, \quad \mu_{\|\cdot\|_p} (\mathbf{H}^{-1}(t)\mathbf{U}\mathbf{H}(t) - \mathbf{H}^{-1}(t)\dot{\mathbf{H}}(t)) \le 0. \tag{22}
$$

(b) Let $1 < p < \infty$. The sets $S_{p,\mathbf{H}(t)}^c$, $c > 0$, are FI w.r.t. interval system (19) if condition (22) is fulfilled. $\square$

**Theorem 11.** [10] (*Invariance of the sets of form $S_{p,De^{rt}}^c$*)

(a) Let $p = 1, \infty$. The sets $S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. interval system (19) if and only if the following condition is fulfilled:

$$
\mu_{\|\cdot\|_p} (\mathbf{D}^{-1}\mathbf{U}\mathbf{D}) \le r. \tag{23}
$$

(b) Let $1 < p < \infty$. The sets $S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. interval system (19) if condition (23) is fulfilled. $\square$
**Theorem 12.** [10] (*Existence of the sets of form $S_{p,De^{rt}}^c$ FI w.r.t. the interval system (19)*)

(a) Let $p = 1, \infty$. There exist sets $S_{p,De^{rt}}^c$, $c > 0$, FI w.r.t. interval system (19) if and only if the matrix $\mathbf{U}$ is Hurwitz stable.

(b) Let $1 < p < \infty$. There exist sets $S_{p,De^{rt}}^c$, $c > 0$, FI w.r.t. interval system (19) if the matrix $\mathbf{U}$ is Hurwitz stable. $\square$

**Remark 5.** [10] (*Necessity for Theorems 11–12, part (b)*)

Part (b) of Theorems 11–12 also represents a necessary condition if there exists a matrix $\mathbf{A}^* \in \mathcal{A}^I$ with the property $\mu_{\|\cdot\|_p} (\mathbf{D}^{-1}\mathbf{A}^*\mathbf{D}) = \mu_{\|\cdot\|_p} (\mathbf{D}^{-1}\mathbf{U}\mathbf{D})$ (for instance, if $\mathbf{U} \in \mathcal{A}^I$). $\square$
## 2.5. Linear synthesis based on invariant sets

This subsection exploits set invariance for designing:

• state-feedback laws for linear systems, which keep the closed-loop trajectories within sets of form $S_{p,De^{rt}}^c$;

• state-variable observers for linear systems, which ensure the componentwise monitoring of the estimation error, by keeping the error trajectories within sets of form $S_{\infty,De^{rt}}^c$.

The design procedures are numerically tractable.
Consider the linear system

$$
\begin{aligned}
\dot{\mathbf{x}}(t) &= \mathbf{A}\mathbf{x}(t) + \mathbf{B}\mathbf{u}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0, \\
\mathbf{y}(t) &= \mathbf{C}\mathbf{x}(t), \\
\mathbf{A} &\in \mathbb{R}^{n \times n}, \quad \mathbf{B} \in \mathbb{R}^{n \times m}, \quad \mathbf{C} \in \mathbb{R}^{p \times n}.
\end{aligned} \tag{24}
$$
**Theorem 13.** [11] (*FI of the sets of form $S_{p,De^{rt}}^c$ w.r.t. the state-feedback closed-loop system*)

Let $1 \le p \le \infty$. There exists a state feedback

$$
\mathbf{u}(t) = \mathbf{K}\mathbf{x}(t), \quad \mathbf{K} \in \mathbb{R}^{m \times n}, \tag{25}
$$

that ensures the invariance of the sets $S_{p,De^{rt}}^c$, $c > 0$, with respect to the closed-loop system, if and only if the following condition is fulfilled:

$$
\mu_{\|\cdot\|_p} (\mathbf{D}^{-1}(\mathbf{A} - \mathbf{B}\mathbf{K})\mathbf{D}) \le r. \tag{26}
$$

$\square$
**Theorem 14.** [11] (*State-feedback design for the usual p-norms, $p \in \{1, 2, \infty\}$*)

(i) For $p = 1$, condition (26) is equivalent to the following linear inequalities:

$$
\begin{aligned}
& -\mathbf{B}\mathbf{K} - \mathbf{G} \le -\mathbf{A}, \\
& (\mathbf{B}\mathbf{K} - \mathbf{G})^{off} \le (\mathbf{A})^{off}, \\
& \mathbf{G}^T \boldsymbol{\delta} \le r\boldsymbol{\delta},
\end{aligned} \tag{27}
$$

where $\boldsymbol{\delta} = [\delta_1 \dots \delta_n]^T \in \mathbb{R}^n$, $\delta_i = 1/d_i$, $i = 1, \dots, n$, $(*)^{off}$ denotes the matrix with null diagonal entries and the off-diagonal elements taken from the matrix $*$, whereas $\mathbf{K} \in \mathbb{R}^{m \times n}$, $\mathbf{G} \in \mathbb{R}^{n \times n}$ are unknown matrices.
(ii) For $p = 2$, condition (26) is equivalent to the following linear matrix inequality:

$$
(\mathbf{A} - \mathbf{B}\mathbf{K})\mathbf{D}^2 + \mathbf{D}^2(\mathbf{A} - \mathbf{B}\mathbf{K})^T - 2r\mathbf{D}^2 \le 0, \tag{28}
$$

where $\mathbf{K} \in \mathbb{R}^{m \times n}$ is an unknown matrix.
(iii) For $p = \infty$, condition (26) is equivalent to the following linear inequalities:

$$
\begin{align}
-\mathbf{B}\mathbf{K} - \mathbf{G} &\le -\mathbf{A}, \nonumber \\
(\mathbf{B}\mathbf{K} - \mathbf{G})^{off} &\le (\mathbf{A})^{off}, \tag{29} \\
\mathbf{G}\mathbf{d} &\le r\mathbf{d}, \nonumber
\end{align}
$$

where $\mathbf{d} = [d_1 \dots d_n]^T \in \mathbb{R}^n$, $(*)^{off}$ denotes the matrix with null diagonal entries and the off-diagonal elements taken from the matrix $*$, whereas $\mathbf{K} \in \mathbb{R}^{m \times n}$, $\mathbf{G} \in \mathbb{R}^{n \times n}$ are unknown matrices. $\square$
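The equivalence behind part (ii) can be checked numerically: (28) holds precisely when $\mu_{\|\cdot\|_2}(\mathbf{D}^{-1}(\mathbf{A} - \mathbf{B}\mathbf{K})\mathbf{D}) \le r$, the two forms being related by a congruence with $\mathbf{D}$. A sketch with arbitrarily chosen matrices (a consistency check, not a design procedure):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, 2.0]])      # arbitrary gain: A - BK = [[0, 1], [-1, -3]]
D = np.diag([1.0, 2.0])
r = 0.1

M = A - B @ K
D2 = D @ D
# Left-hand side of the LMI (28); negative semidefinite <=> (28) holds
L = M @ D2 + D2 @ M.T - 2.0 * r * D2
lmi_holds = bool(np.max(np.linalg.eigvalsh(L)) <= 1e-12)

# Matrix-measure form of condition (26) for p = 2
N = np.linalg.inv(D) @ M @ D
mu2 = float(np.max(np.linalg.eigvalsh((N + N.T) / 2.0)))
mu_holds = bool(mu2 <= r + 1e-12)

print(lmi_holds == mu_holds)   # True: the two tests always agree
```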
**Remark 6.** (*Numerical tractability of Theorem 14*)

If $p = 1$ or $p = \infty$, the resolution of inequalities (27) or (29) can be approached as a linear programming problem. If $p = 2$, inequality (28) is handled as an LMI [12]. Each of the three procedures operates as a computable necessary and sufficient condition, in the sense that it either provides a state feedback (25), or guarantees that such a feedback law does not exist [11]. $\square$
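For $p = \infty$, a convenient observation (our own sketch, easy to verify from (29)) is that whenever a feasible pair $(\mathbf{K}, \mathbf{G})$ exists, $\mathbf{G}$ can be taken as $\overline{\mathbf{A} - \mathbf{B}\mathbf{K}}$, the “bar” operator (18) applied to the closed-loop matrix; the first two inequalities of (29) are then satisfied automatically and the third becomes $\overline{(\mathbf{A} - \mathbf{B}\mathbf{K})}\,\mathbf{d} \le r\mathbf{d}$. A numerical check with arbitrarily chosen data:

```python
import numpy as np

A = np.array([[-1.0, 2.0], [0.5, -3.0]])
B = np.array([[1.0], [0.0]])
K = np.array([[1.0, 2.0]])      # arbitrary gain: A - BK = [[-2, 0], [0.5, -3]]
d = np.array([1.0, 1.0])
r = -1.0

M = A - B @ K
G = np.abs(M)
np.fill_diagonal(G, np.diag(M))  # G = bar(A - BK), see (18)

off = lambda X: X - np.diag(np.diag(X))
ok1 = bool(np.all(-B @ K - G <= -A + 1e-12))          # 1st inequality of (29)
ok2 = bool(np.all(off(B @ K - G) <= off(A) + 1e-12))  # 2nd inequality of (29)
ok3 = bool(np.all(G @ d <= r * d + 1e-12))            # 3rd inequality of (29)
print(ok1 and ok2 and ok3)   # True: (K, G) is feasible for (29) with r = -1
```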
**Remark 7.** (*State-feedback design for interval systems and p = ∞*)

Consider system (24), where $\mathbf{A} \in \mathcal{A}^I$ and $\mathbf{B} \in \mathcal{B}^I$, with $\mathcal{A}^I$ and $\mathcal{B}^I$ interval matrices. Paper [13] formulates numerically tractable necessary and sufficient conditions for the existence of the state feedback (25) that ensures the invariance of the sets $S_{p,De^{rt}}^c$, $p = \infty$, $c > 0$, with respect to the closed-loop system. This approach represents a generalization of Theorem 14 (iii) to the case of interval matrices. $\square$
**Remark 8.** (*Observer design with componentwise monitored error*)

The dynamics of the estimation-error vector $\mathbf{x}_e(t)$ is described by

$$
\dot{\mathbf{x}}_e(t) = (\mathbf{A} - \mathbf{L}\mathbf{C})\mathbf{x}_e(t), \quad \mathbf{x}_e(t_0) = \mathbf{x}_{e0}, \quad t \ge t_0. \tag{30}
$$
The invariance of the rectangular sets $S_{\infty,De^{rt}}^c$, $c > 0$, which allow monitoring each component of the estimation-error vector, is ensured by the necessary and sufficient condition (similar to Theorem 13):

$$
\mu_{\|\cdot\|_\infty} (\mathbf{D}^{-1}(\mathbf{A} - \mathbf{L}\mathbf{C})\mathbf{D}) \le r. \tag{31}
$$

Condition (31) is equivalent to the following linear inequalities (similar to Theorem 14 (iii)):

$$
\begin{align}
& -\mathbf{L}\mathbf{C} - \mathbf{G} \le -\mathbf{A}, \nonumber \\
& (\mathbf{L}\mathbf{C} - \mathbf{G})^{off} \le (\mathbf{A})^{off}, \tag{32} \\
& \mathbf{G}\mathbf{d} \le r\mathbf{d}. \nonumber
\end{align}
$$

$\square$
## 2.6. Comparison methods for invariant sets

This subsection applies the comparison theory to derive results adequate for set invariance.

Consider a time-variant nonlinear system of form (1) and a linear positive system of form (16), whose matrix is denoted by $\Gamma$:

$$
\dot{\mathbf{x}}(t) = \Gamma \mathbf{x}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0, \quad t \ge t_0. \tag{16'}
$$

In the following, system (16') is used as a comparison system.
For the matrix-valued functions $\mathbf{J}(\mathbf{x}, t)$ (9) and $\mathbf{A}(\mathbf{x}, t)$ (10) associated with system (1), use the “bar” operator $(\bar{\cdot})$ defined by (18) in order to build the matrix-valued functions $\overline{\mathbf{J}}(\mathbf{x}, t)$ and $\overline{\mathbf{A}}(\mathbf{x}, t)$.

**Theorem 15.** [5] (*Sets of form $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$ FI w.r.t. nonlinear systems*)

Let $1 \le p \le \infty$.
(i) Let $\rho > 0$ be a positive constant and $\Omega_p \subseteq \mathbb{R}^n$ a set with the property $S_{p,\mathbf{H}(t)}^\rho / S_{p,De^{rt}}^\rho \subseteq \Omega_p$, $\forall t \in \mathbb{R}_+$. If one of the following two conditions:

$$
\overline{\mathbf{A}}(\mathbf{x}, t) \le \Gamma, \tag{33a}
$$

or

$$
\overline{\mathbf{J}}(\mathbf{x}, t) \le \Gamma, \tag{33b}
$$

is fulfilled for $\forall t \in \mathbb{R}_+$, $\forall \mathbf{x} \in \Omega_p$, and the sets $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. the comparison system (16'), then the sets $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c \in (0, \rho]$, are FI w.r.t. the nonlinear system (1).

(ii) If one of the conditions (33a) or (33b) is fulfilled for $\forall t \in \mathbb{R}_+$, $\forall \mathbf{x} \in \mathbb{R}^n$, and the sets $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. the comparison system (16'), then the sets $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, are FI w.r.t. the nonlinear system (1). $\square$
**Remark 9.** (*Sets of form $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$ FI w.r.t. recurrent neural networks*)

The dynamics of a recurrent neural network is described with respect to the equilibrium $\{0\}$ by

$$
\dot{\mathbf{x}}(t) = \mathbf{B}\mathbf{x}(t) + \mathbf{W}\mathbf{g}(\mathbf{x}(t)), \tag{34}
$$
where $\mathbf{x} = [x_1 \cdots x_n]^T \in \mathbb{R}^n$, $\mathbf{B}, \mathbf{W} \in \mathbb{R}^{n \times n}$ with $\mathbf{B} = \text{diag}\{b_1, \cdots, b_n\}$, $b_i < 0$, $i = 1, \dots, n$. The vector-valued function $\mathbf{g}: \mathbb{R}^n \to \mathbb{R}^n$, $\mathbf{g}(\mathbf{x}) = [g_1(\mathbf{x}) \cdots g_n(\mathbf{x})]^T$, is continuously differentiable on $\mathbb{R}^n$, satisfying the conditions $g_i(\mathbf{x}) = g_i(x_i)$, $g_i(0) = 0$ and $0 \le g'_i(s) \le L_i$, $\forall s \in \mathbb{R}$, $i = 1, \dots, n$. Let $\Pi = \mathbf{B} + \tilde{\mathbf{W}}\Lambda$, where the matrix $\tilde{\mathbf{W}} = [\tilde{w}_{ij}] \in \mathbb{R}^{n \times n}$ has the elements $\tilde{w}_{ii} = \max\{0, w_{ii}\}$, $i = 1, \dots, n$, and $\tilde{w}_{ij} = |w_{ij}|$, $i \ne j$, $i, j = 1, \dots, n$, and $\Lambda = \text{diag}\{L_1, \dots, L_n\}$ is a diagonal matrix. Theorem 15 allows studying the invariance of the sets $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, with respect to the neural network (34), by using the linear positive system

$$
\dot{\mathbf{x}}(t) = \Pi \mathbf{x}(t), \tag{35}
$$

as a comparison system.
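Building the comparison matrix $\Pi$ of Remark 9 is mechanical; a minimal sketch with arbitrarily chosen network data ($\tanh$-type nonlinearities, so the slope bounds are $L_i = 1$):

```python
import numpy as np

B = np.diag([-3.0, -4.0])                  # b_i < 0
W = np.array([[0.5, -1.0], [0.8, -0.3]])   # interconnection weights
L = np.array([1.0, 1.0])                   # slope bounds L_i of g_i

Wt = np.abs(W)                                      # |w_ij| off the diagonal
np.fill_diagonal(Wt, np.maximum(0.0, np.diag(W)))   # w~_ii = max(0, w_ii)
Pi = B + Wt @ np.diag(L)                            # Pi = B + W~ Lambda

# If Pi is Hurwitz, Theorem 15 yields invariant sets for the network (34)
hurwitz = bool(np.max(np.real(np.linalg.eigvals(Pi))) < 0)
print(hurwitz)   # True for this data
```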
## 3. Research prefiguring the present framework: results for invariant sets with rectangular shapes

The construction of the present framework relied on our previous investigations of invariant rectangular sets of type $S_{p,H(t)}^c / S_{p,De^{rt}}^c$ with $p = \infty$. These investigations are reviewed in the survey paper [15]. At that time, the rectangular sets were written in the particular form of hyper-intervals $[-ch_1(t), ch_1(t)] \times \dots \times [-ch_n(t), ch_n(t)]$ / $[-cd_1 e^{rt}, cd_1 e^{rt}] \times \dots \times [-cd_n e^{rt}, cd_n e^{rt}]$, which later on proved to be equivalent to the norm-based descriptions (2) / (4) for $p = \infty$.

The current section recalls only those results that represented key elements for the development of our approaches, permitting the extension from sets defined by the $\infty$-norm to sets described by arbitrary Hölder norms.
### 3.1. Linear time-invariant systems

Papers [16] and [17] were devoted to systems of form (16) and marked the beginning of the research on invariant sets of rectangular type. They brought the following contributions:

• Proofs of Theorems 6 and 7 for $\mathbf{A}$ a constant matrix and $p = \infty$, the inequalities (14) and (15) being obtained in the particular forms $\dot{\mathbf{h}}(t) \geq \bar{\mathbf{A}}\mathbf{h}(t)$, $\mathbf{h}(t) = [h_1(t) \cdots h_n(t)]^T$, and $r\mathbf{d} \geq \bar{\mathbf{A}}\mathbf{d}$, $\mathbf{d} = [d_1 \cdots d_n]^T$, respectively, where $\bar{\mathbf{A}}$ is defined by (18).

• Proof of Theorem 8 for $p = \infty$.

• Refinement of the stability concepts by introducing the *componentwise asymptotic stability* (abbreviated CWAS) and the *componentwise exponential asymptotic stability* (abbreviated CWEAS), which correspond to $\text{DIAS}_p$ and $\text{DIES}_p$ for $p = \infty$ in subsection 2.2.

Paper [18] completed the approach in [19] that considered rectangular sets nonsymmetrical with respect to $\{0\}$. It proved that CWAS and CWEAS testing in the nonsymmetrical case can be reduced to the Hurwitz stability analysis of the matrix $\bar{\mathbf{A}}$, exactly as in the symmetrical case.
### 3.2. Linear systems with interval-type uncertainties

Papers [20], [21] proved Theorems 10–12 for $p = \infty$. Paper [21] also treated the case of rectangular sets nonsymmetrical with respect to $\{0\}$. It proved that CWAS and CWEAS testing in the nonsymmetrical case can be reduced to the Hurwitz stability analysis of the matrix $\mathbf{U}$, exactly as in the symmetrical case.
### 3.3. Linear synthesis

Papers [22], [23] addressed the state-feedback synthesis that ensures the CWEAS property for the closed-loop system. They used inequality (26) with $p = \infty$ in the particular form $r\mathbf{d} \geq (\mathbf{A} - \mathbf{B}\mathbf{K})\mathbf{d}$ with $\mathbf{d} = [d_1 \cdots d_n]^T$, but the proposed algorithm is more restrictive than the resolution of inequalities (29).
Paper [24] approached the observer design with componentwise monitored error. It used inequality (31) with $p = \infty$ in the particular form $r\mathbf{d} \geq (\mathbf{A} - \mathbf{L}\mathbf{C})\mathbf{d}$ with $\mathbf{d} = [d_1 \cdots d_n]^T$, and formulated necessary and sufficient conditions for the analytical computation of the matrix $\mathbf{L}$.
### 3.4. Nonlinear systems

Paper [25] considered state-space representations of the form (1') and proved Theorem 15 (i), (ii), case (a), for sets of form $S_{p,De^{rt}}^c$ with $p = \infty$.

Papers [26], [27] considered recurrent neural networks of type (34) with uncertainties and provided tests for the existence of invariant sets of form $S_{p,\mathbf{H}(t)}^c / S_{p,De^{rt}}^c$, $c > 0$, with $p = \infty$. The uncertainties in [26] referred to the slopes of the functions $g_i(x_i)$, whereas the uncertainties in [27] took into account both the slopes of the functions $g_i(x_i)$ and the values of the entries of the matrices $\mathbf{B}$ and $\mathbf{W}$ (considered as interval matrices). The proposed testing strategies rely on the proof of Theorem 15 (ii), case (a), for $p = \infty$.
## 4. Concluding remarks

Relying on the exploration of invariant sets with rectangular shapes that started in the mid eighties, we foresaw that most of the investigated properties could remain valid for invariant sets with general shapes. It took some time to mathematically formalize this intuition, but we have managed to develop a nice generalization which accommodates our earlier research on rectangular sets as a particular case.

The new scenario provides analysis and synthesis tools for sets with arbitrary or exponential time-dependence, described by Hölder $p$-norms, $1 \leq p \leq \infty$. Besides its practical role in applications, the present framework has an important theoretical value, by proving the existence of a unified theory for many problems previously treated as completely independent of one another.
## References

[1] CARJA O., VRABIE I.I., *Differential equations on closed sets*, in *Handbook of Differential Equations: Ordinary Differential Equations* (A. Canada, P. Drabek and A. Fonda, Eds.), Elsevier BV/North Holland, Amsterdam (2005), vol. **2**, pp. 147–238.

[2] BLANCHINI F., MIANI S., *Set-Theoretic Methods in Control*, Birkhäuser, Boston (2008).

[3] BLANCHINI F., *Set invariance in control (survey paper)*, Automatica, vol. **35** (1999), pp. 1747–1767.

[4] MATCOVSCHI M.H., PASTRAVANU O., *Invariance properties of recurrent neural networks*, in *Intelligent Systems and Technologies – Methods and Applications* (H.N. Teodorescu, Junzo Watada, L. Jain, Eds.), Studies in Computational Intelligence Series no. 217, Springer-Verlag, Berlin Heidelberg (2009), pp. 105–119.

[5] PASTRAVANU O., MATCOVSCHI M.H., VOICU M., *Time-dependent invariant sets in system dynamics*, Proc. 2006 IEEE Conf. on Control Applications CCA 2006, München (2006), CD-ROM.

[6] PASTRAVANU O., MATCOVSCHI M.H., VOICU M., *Diagonally-invariant exponential stability*, Proc. of the 16th World Congress of the Int. Fed. of Automatic Control, Prague (2005), DVD-ROM.

[7] PASTRAVANU O., VOICU M., *Generalized matrix diagonal stability and linear dynamical systems*, Linear Algebra and its Applications, vol. **419** (2006), iss. 2–3, pp. 299–310.

[8] KASZKUREWICZ E., BHAYA A., *Matrix Diagonal Stability in Systems and Computation*, Birkhäuser, Boston (2000).

[9] PASTRAVANU O., MATCOVSCHI M.H., VOICU M., *New results in the state-space analysis of positive linear systems*, Romanian Journal of Information Science and Technology (ROMJIST), vol. **9** (2006), nr. 3, pp. 217–225.

[10] PASTRAVANU O., MATCOVSCHI M.H., VOICU M., *Majorant matrices in the qualitative analysis of interval dynamical systems*, Proc. European Control Conf. ECC'07, Kos, Greece (2007), CD-ROM.

[11] MATCOVSCHI M.H., PASTRAVANU O., *Contractive invariant sets in the dynamics of switched linear systems*, Proc. of the 8th Int. Conf. on Technical Informatics CONTI 2008, Timişoara (2008), CD-ROM.

[12] BOYD S., FERON E., EL GHAOUI L., BALAKRISHNAN V., *Linear Matrix Inequalities in System and Control Theory*, SIAM, Philadelphia (1994).

[13] PASTRAVANU O., MATCOVSCHI M.H., *Componentwise stabilization of interval systems*, Proc. of the 17th World Congress of the Int. Fed. of Automatic Control, Seoul (2008), DVD-ROM.

[14] PASTRAVANU O., MATCOVSCHI M.H., *Robust design of componentwise stabilizers and observers*, Proc. 9th Int. Symp. Automatic Control and Computer Science SACCS 2007 (V. Manta, C. Lazar, Eds.), Ed. Politehnium, Iaşi (2007), pp. 184–189.

[15] VOICU M., PASTRAVANU O., *Flow-invariance method in control – a survey of some results*, in *Advances in Automatic Control* (M. Voicu, Ed.), Kluwer Academic Publishers, Boston (2004), pp. 393–434.

[16] VOICU M., *Free response characterization via flow invariance*, Prep. of the 9th World Congress of the Int. Fed. of Automatic Control, Budapest (1984), vol. **5**, pp. 12–17.

[17] VOICU M., *Componentwise asymptotic stability of linear constant dynamical systems*, IEEE Trans. on Automatic Control, vol. **29** (1984), pp. 937–939.

[18] PASTRAVANU O., VOICU M., *On the componentwise stability of linear systems*, Int. Jrnl. Robust and Nonlinear Control, vol. **15** (2005), pp. 15–23.

[19] HMAMED A., BENZAOUIA A., *Componentwise stability of linear systems: A non-symmetrical case*, Int. Jrnl. Robust and Nonlinear Control, vol. **7** (1997), pp. 1023–1028.

[20] PASTRAVANU O., VOICU M., *Flow invariance and componentwise asymptotic stability*, Differential and Integral Equations, vol. **15** (2002), pp. 1377–1394.
|
| 578 |
+
---PAGE_BREAK---
|
| 579 |
+
|
| 580 |
+
[21] PASTRAVANU O., VOICU M., *Necessary and sufficient conditions for componentwise stability of interval matrix systems*, IEEE Trans. on Automatic Control, vol. **49** (2004), pp. 1016–1021.
|
| 581 |
+
|
| 582 |
+
[22] VOICU M., *State feedback matrices for linear constant dynamical systems with state constraints*, Prep. of the 4-th Int. Conf. Control Systems and Computer Science, Bucharest (1981), vol. **1**, pp. 110–115.
|
| 583 |
+
|
| 584 |
+
[23] VOICU M., *System matrix with prescribed off-diagonal entries obtained via state feedback*, Bul. Inst. Polit. Iași, vol. **XLIII** (**XLVII**) (1997), s. IV, pp. 5–9.
|
| 585 |
+
|
| 586 |
+
[24] VOICU M., *Observing the state with componentwise exponentially decaying error*, Systems and Control Letters, vol. **9** (1987), pp. 33–42.
|
| 587 |
+
|
| 588 |
+
[25] VOICU M., *On the application of the flow-invariance method in control theory and design*, Prep. 10-th World Congress of Int. Fed. of Automatic Control, Munchen (1987), vol. **8**, pp. 364–369.
|
| 589 |
+
|
| 590 |
+
[26] MATCOVSCHI M.H., PASTRAVANU O., *Flow-invariance and stability analysis for a class of nonlinear systems with slope conditions*, Eur. Jrnl. Control, vol. **10** (2004), pp. 352–364.
|
| 591 |
+
|
| 592 |
+
[27] PASTRAVANU O., MATCOVSCHI M.H., *Absolute componentwise stability of interval Hopfield neural networks*, IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. **35** (2005), no. 1, pp. 136–141.
|
samples/texts_merged/3395999.md
ADDED
@@ -0,0 +1,142 @@
---PAGE_BREAK---

# A New CMOS Current Controlled Quadrature Oscillator Based on a MCCII

Ashwek Ben Saied¹,², Samir Ben Salem²,³, Dorra Sellami Masmoudi¹,²

¹Computer Imaging and Electronic Systems Group (CIEL), Research Unit on Intelligent Design and Control of Complex Systems (ICOS), Sfax, Tunisia

²University of Sfax, National Engineering School of Sfax (ENIS), Sfax, Tunisia

³Development Group in Electronics and Communications (EleCom), Laboratory of Electronics and Information Technology (LETI), Sfax, Tunisia

E-mail: Achwek.bensaied@gmail.com, samir.bensalem@isecs.rnu.tn, dorra.masmoudi@enis.rnu.tn

Received March 15, 2011; revised April 21, 2011; accepted April 28, 2011

## Abstract

In this paper, we propose the design of a current controlled quadrature sinusoidal oscillator. The proposed circuit employs three optimized multi-output translinear second generation current conveyors (MCCII). The oscillation condition and the oscillation frequency are independently controllable. The frequency of the proposed quadrature oscillator can be tuned in the range 198 - 261 MHz by a simple variation of a DC current. PSpice simulation results are performed using the 0.35 µm CMOS process of AMS.
**Keywords:** Quadrature Sinusoidal Oscillator, Optimized MCCII

## 1. Introduction

The controlled quadrature sinusoidal oscillator is a basic signal-generating block frequently needed in communication systems, instrumentation and control systems. In communication it is required for quadrature mixers and single-sideband generators.

An MCCII based quadrature oscillator is a good solution to avoid the limitations of Surface Acoustic Wave devices, such as problems of integration, impedance matching, tuning, linearity, etc.

In order to obtain controllable characteristics for the proposed quadrature oscillator, a structure based on the translinear multi-output second generation current controlled conveyor seems the most attractive [1-3]. In fact, since the output resistance at port X can be controlled by means of a current source [4-6], one may exploit this in the synthesis of electronically adjustable functions [5-8].

The translinear MCCII family extends readily to submicron MOS technologies, moving towards VLSI design. Indeed, in sub-micron technologies the MOS transistor becomes able to achieve high transit frequencies [1,7,9,11]. These multi-output conveyors are employed in different controllable RF applications such as oscillators, quadrature oscillators and filters [1,2,7,11].

In this paper, we are interested in the design of an MCCII based quadrature oscillator. The paper is organized as follows: Section 2 presents the MCCII based quadrature oscillator architecture of [1] and its drawback, together with the general characteristics and CMOS implementation of the multi-output translinear second generation current conveyor; Section 3 gives the proposed controlled quadrature oscillator. Finally, the proposed structure is designed and simulated using PSPICE.
## 2. The Controlled Quadrature Oscillator

Beg et al. [1] present a single resistance controlled sinusoidal quadrature oscillator, shown in Figure 1. This architecture uses only two CMOS multi-output CCIIs along with grounded resistors and capacitors. The corresponding characteristic equation is:

$$s^2 + s \left[ \frac{1}{R_1} - \frac{1}{R_2} \right] \frac{1}{C_2} + \frac{1}{C_1 C_2 R_2 R_3} = 0 \quad (1)$$

It leads to the following oscillation condition:

$$R_1 = R_2 \quad (2)$$

and the following oscillation frequency:

---PAGE_BREAK---

Figure 1. Quadrature oscillator implementation proposed by Beg et al. [1].

$$f_0 = \frac{1}{2\pi\sqrt{C_1 C_2 R_3 R_2}} \quad (3)$$

From Equation (3), we get a variable frequency oscillator. The oscillation frequency can be adjusted independently, without modification of the oscillation condition, by varying $R_3$ [1]. However, to avoid tuning $R_3$ after integration, this external resistance can be replaced by an internal, actively controllable one corresponding to the parasitic resistance at port X of the MCCII.
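Equation (3) is easy to sanity-check numerically. The sketch below (with hypothetical component values, not the ones used in the paper) evaluates the ideal oscillation frequency:

```python
import math

def oscillation_frequency(c1, c2, r3, r2):
    """Ideal oscillation frequency of Equation (3): f0 = 1 / (2*pi*sqrt(C1*C2*R3*R2))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(c1 * c2 * r3 * r2))

# Hypothetical component values: C1 = C2 = 1 pF, R2 = R3 = 1 kOhm.
f0 = oscillation_frequency(1e-12, 1e-12, 1e3, 1e3)
print(f"{f0 / 1e6:.1f} MHz")  # 159.2 MHz
```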
## CMOS Implementation of the MCCII

The MCCII can be represented by the symbol of Figure 2. The port relations of the MCCII can be characterized by the following expressions:

$$I_Y = 0, \quad V_X = V_Y + I_X R_X, \quad I_{Zi+} = I_X \quad \text{and} \quad I_{Zi-} = -I_X$$

where $R_X$ denotes the parasitic resistance at the X input terminal of the MCCII and $i = 1, 2, 3$. The plus and minus signs of the current transfer ratio represent the positive and negative types of the MCCII outputs.

The terminal characteristics of the MCCII can be described by the following matrix equation:

$$\begin{bmatrix} I_Y \\ V_X \\ I_{Z1+} \\ I_{Z2+} \\ I_{Z3+} \\ I_{Z1-} \\ I_{Z2-} \\ I_{Z3-} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & R_X & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & -1 & 0 \\ 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} V_Y \\ I_X \\ V_Z \end{bmatrix} \quad (4)$$

The MCCII implementation is given in Figure 3. Assuming the same gain factors for both NMOS and PMOS transistors, the parasitic impedances are described by the following expressions (5)-(7):

$$R_Y = \frac{1}{I_o (\lambda_N + \lambda_P)} \quad (6)$$

$$R_{Zi} = \frac{1}{I_o (\lambda_N + \lambda_P)} \quad (i = 1, 2, 3, \text{ for both } Z_{i+} \text{ and } Z_{i-} \text{ outputs}) \quad (7)$$

We notice that the optimization process can be done in the same way for other simulation conditions [7,9,10]. **Table 1** shows the optimal device scaling that we get after applying the optimization approach.

**Figure 4** shows the simulated parasitic resistance at port X ($R_X$) in the optimized configuration. It can be tuned over more than a decade, from 427 Ω to 7.1 kΩ, by varying $I_o$ in the range [1 μA - 400 μA]. Such control is very important, since this resistance will be used to replace the resistance $R_3$ in the quadrature oscillator given in **Figure 1**. **Figure 4** depicts results obtained from both PSPICE simulations ($R_X$) and MAPLE theoretical calculus ($R_{Xthe}$). We notice a global agreement between both characteristics.
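Expression (5) implies that, treating the bracketed technology-dependent term as a constant, $R_X$ scales as $1/\sqrt{I_o}$. The sketch below fits a hypothetical lumped constant to the reported point ($R_X = 427$ Ω at $I_o = 400$ μA) and extrapolates to $I_o = 1$ μA; the ideal model predicts about 8.5 kΩ, the same order as the reported 7.1 kΩ, with the difference attributable to the neglected variation of the $(1 + \lambda V_{DS})$ terms:

```python
import math

def rx(io, a):
    """Parasitic resistance model from expression (5): R_X = A / sqrt(Io).

    A lumps the technology- and geometry-dependent terms; here it is a
    hypothetical constant, fitted to the reported R_X = 427 ohm at Io = 400 uA.
    """
    return a / math.sqrt(io)

a = 427.0 * math.sqrt(400e-6)  # fit at the upper end of the current range
print(round(rx(400e-6, a)))    # 427 ohm, by construction
print(round(rx(1e-6, a)))      # ~8540 ohm: same order as the reported 7.1 kOhm
```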
## 3. Proposed Oscillator

The basic idea of the improved structure consists in replacing the resistance $R_3$ by the parasitic resistance at port X. We then use this implementation of the MCCII, which presents a variable resistance at port X. The quadrature oscillator will, in that case, be controlled by means of the bias current $I_o$ of MCCII3.

The proposed quadrature sinusoidal oscillator is presented in **Figure 5**. The modified oscillation condition and oscillation frequency are respectively given by the following expressions:

$$R_1 = R_2 \quad (8)$$

$$f_0 = \frac{1}{2\pi\sqrt{C_1 C_2 R_{X3} R_2}} \quad (9)$$

From Equation (9), we get a variable frequency oscillator. In fact, the oscillation frequency can be adjusted independently, without modification of the oscillation condition, by varying $R_{X3}$ (that is, by varying the current $I_{o3}$ of MCCII3). The proposed quadrature oscillator is simulated for different MCCII3 bias currents. Simulation results are shown in **Figure 6**. When varying the control current between 10 μA and 400 μA, the oscillation frequency is tuned in the range [198 MHz - 261 MHz].

$$R_X = \frac{1}{\sqrt{I_o} \left[ \sqrt{2K_N \left(\frac{W}{L}\right)_{NXX}} (1 + \lambda_N V_{DS}) + \sqrt{2K_P \left(\frac{W}{L}\right)_{PXX}} (1 + \lambda_P V_{DS}) \right]} \quad (5)$$

---PAGE_BREAK---
Figure 2. MCCII block.
Figure 3. MCCII implementation using the translinear loop biased by $I_o$.

Figure 4. Parasitic resistance at port X versus the control current $I_o$ ($R_{Xthe}$, --- $R_X$).

Figure 5. The proposed quadrature oscillator implementation.

---PAGE_BREAK---

Figure 6. Oscillation frequency versus control current.

Table 1. Device scaling after the optimization process.

<table><thead><tr><th>Device Name</th><th>Aspect ratio W/L</th></tr></thead><tbody><tr><td>M1, M2</td><td>12/0.35 (µm)</td></tr><tr><td>M3, M4</td><td>36/0.35 (µm)</td></tr><tr><td>Mxx (in PMOS current mirrors)</td><td>18/0.35 (µm)</td></tr><tr><td>Mxx (in NMOS current mirrors)</td><td>6/0.35 (µm)</td></tr></tbody></table>

The circuit was simulated using $R_1 = R_2 = R_4 = 500$ Ω, $R_5 = 1$ kΩ, $R_{X3} = 450$ Ω ($I_{O3} = 100$ μA), $C_1 = C_2 = 0.2$ pF and $I_{O1} = I_{O2} = 100$ μA. The obtained oscillation frequency is 225 MHz and the obtained quadrature voltage waveforms are shown in Figure 7. Simulations were carried out using 0.35 μm CMOS process parameters.
Figure 7. The simulated quadrature output waveforms.

---PAGE_BREAK---

## 4. Conclusions

In this paper, we have proposed a new design of a variable frequency current controlled quadrature oscillator. In order to obtain high frequency performance, we use an optimized translinear multi-output CCII structure in the 0.35 µm CMOS process of AMS. Simulation results show that this quadrature oscillator provides control of the oscillation frequency, independent of the oscillation condition, in the range [198 MHz - 261 MHz] by varying the control current in the range [10 µA - 400 µA].

## 5. References

[1] P. Beg, I. A. Khan and M. T. Ahmed, "Tunable Four Phase Voltage Mode Quadrature Oscillator Using Two CMOS MOCCIIs," Multimedia, Signal Processing and Communication Technologies, Aligarh, 14-16 March 2009, pp. 155-157. doi:10.1109/MSPCT.2009.5164198

[2] S. Maheshwari, "Quadrature Oscillator Using Grounded Components with Current and Voltage Outputs," IET Circuits, Devices & Systems, Vol. 3, No. 4, 2009, pp. 153-160. doi:10.1049/iet-cds.2009.0072

[3] S. Maheshwari, "Analogue Signal Processing Applications Using a New Circuit Topology," IET Circuits, Devices & Systems, Vol. 3, No. 3, 2008, pp. 106-115. doi:10.1049/iet-cds.2008.0294

[4] A. Fabre, O. Saaid, F. Wiest and C. Boucheron, "High Frequency High-Q BiCMOS Current-Mode Bandpass Filter and Mobile Communication Application," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 33, No. 4, 1998, pp. 614-625. doi:10.1109/4.663567

[5] H. O. Elwan and A. M. Soliman, "Low-Voltage Low-Power CMOS Current Conveyors," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 44, No. 9, 1997, pp. 828-835. doi:10.1109/81.622987

[6] D. S. Masmoudi, S. Ben Salem, M. Loulou and L. Kammoun, "A Radio Frequency CMOS Current Controlled Oscillator Based on a New Low Parasitic Resistance CCII," 2004 International Conference on Electrical, Electronic and Computer Engineering, Egypt, 5-7 September 2004, pp. 563-566. doi:10.1109/ICEEC.2004.1374532

[7] S. B. Salem, M. Fakhfakh, D. S. Masmoudi, M. Loulou, P. Loumeau and N. Masmoudi, "A High Performances CMOS CCII and High Frequency Applications," Journal of Analog Integrated Circuits and Signal Processing, Vol. 49, No. 1, 2006, pp. 71-78. doi:10.1007/s10470-006-8694-4

[8] S. B. Salem, D. S. Masmoudi and M. Loulou, "A Novel CCII-Based Tunable Inductance and High Frequency Current Mode Band Pass Filter Application," Journal of Circuits, Systems, and Computers (JCSC), Vol. 15, No. 6, 2006, pp. 849-860.

[9] S. B. Salem, D. S. Masmoudi, A. B. Saied and M. Loulou, "An Optimized Low Voltage and High Frequency CCII Based Multifunction Filters," 13th IEEE International Conference on Electronics, Circuits and Systems, Nice, 10-13 December 2006, pp. 1268-1271. doi:10.1109/ICECS.2006.379693

[10] A. B. Saied, S. B. Salem, M. Fkih and D. S. Masmoudi, "A New High Frequency Second Generation Current Conveyor Based Chaos Generator," 14th IEEE International Conference on Electronics, Circuits and Systems, Marrakech, 11-14 December 2007, pp. 387-390. doi:10.1109/ICECS.2007.4511011

[11] C. Toumazou, F. J. Lidgey and D. Haigh (Eds.), "Analogue IC Design: The Current-Mode Approach," IEE Circuits and Systems Series 2, Peter Peregrinus Ltd., London, 1990.
samples/texts_merged/3611010.md
ADDED
The diff for this file is too large to render.
See raw diff
samples/texts_merged/3863943.md
ADDED
@@ -0,0 +1,420 @@
---PAGE_BREAK---

Proto Logic and Neural Sub-Symbolic Reasoning

Andreas Wichert
Department of Informatics
INESC-ID / IST - Technical University of Lisboa
Portugal
andreas.wichert@ist.utl.pt

April 11, 2012

Abstract

The sub-symbolic representation of the world often corresponds to a pattern that mirrors the world as described by the biological sense organs. Sparse binary vectors can describe sub-symbolic representations, which can be efficiently stored in associative memories. According to production system theory, a geometrically based problem-solving model can be defined as a production system operating on sub-symbols. Our goal is to form a sequence of associations leading from an initial state represented by sub-symbols to a desired state represented by sub-symbols. A simple and universal heuristic function can be defined which takes into account the relationship between a vector and the similarity of the represented object or state in the real world. The manipulation of the sub-symbols is described by a simple proto logic, which verifies whether a subset of sub-symbols is present in a set of sub-symbols.
# 1 Introduction

One form of distributed representation corresponds to a pattern that mirrors the way the biological sense organs describe the world. Sense organs sense the world by receptors. The order of the receptors defines reality as a simple Euclidean geometry, which is the basis of the distributed representation. Changes in the world correspond to changes in the distributed representation. Prediction of these changes by the nervous system is an example of a simple geometrical reasoning process. Mental imagery problem solving is an example of complex geometrical problem solving. It is described by a sequence of associations, which progressively change the mental imagery until a desired problem solution is formed. For example: do the skis fit in the boot of my car? Mental representations of images retain the depictive properties of the image itself as perceived by the eye Kosslyn [1994]. The imagery is formed without perception, through the construction of the represented object from memory.

Symbols, on the other hand, are not present in the world; they are constructs of a human mind and simplify the process of representation used in communication and problem solving. Symbols are used to denote or refer to other things in the world (according to the pioneering work of Tarski Tarski [1956]). They are defined by their occurrence in a structure and by a formal language which manipulates these structures Newell [1990], Simon [1991]. In this context, symbols do not by themselves represent any utilizable knowledge. They cannot be used to define similarity criteria between themselves. The use of symbols in algorithms

---PAGE_BREAK---

which imitate intelligent human behaviour led to the famous physical symbol system hypothesis of Newell and Simon Newell and Simon [1976]: “The necessary and sufficient condition for a physical system to exhibit intelligence is that it be a physical symbol system.”

The author does not agree with the physical symbol system hypothesis. Instead, the author states: the actual perception of the world and manipulation in the world by living organisms lead to the invention or recreation of an experience. The recreation resembles, at least in some respects, the experience of actually perceiving and manipulating objects, however in the absence of direct sensory stimulation. This kind of representation is called sub-symbolic.

Sub-symbolic representation suggests a heuristic function based on the similarity between sub-symbols. Symbols liberate people from the reality of the world, although they are embodied in geometrical problem solving through the use of additional heuristic functions. Without heuristic functions, real world problems become intractable.

In this paper the basis of the manipulation of the sub-symbols is described by a simple proto logic, which verifies whether a subset of sub-symbols is present in a set of sub-symbols, in contrast to other, more powerful logics such as predicate or temporal logics.

The paper is organized as follows: first, the representation of objects by features as used in cognitive science is reviewed. In the next step, the paper indicates how the perception-oriented representation is built on this approach. The optimal sparse sub-symbolic representation is defined. Finally, sub-symbolic problem solving, which relies on a sensorial representation of reality, is introduced.
# 2 Sub-symbols

Perception-oriented representation is an example of sub-symbolic representation. The Oksapmin tribe of Papua New Guinea, for instance, counts by associating each number with a position on the body Lancy [1983]. The sub-symbolic representation often corresponds to a pattern that mirrors the way the biological sense organs describe the world. Vectors represent patterns. A vector is only a sub-symbol if there is a relationship, through sensors or biological senses, between the vector and the represented object or state in the real world. Feature based representation is an example of sub-symbolic representation.

## 2.1 Feature Approach

Objects can be described by a set of discrete features, such as red, round and sweet McClelland and Rumelhart [1985], Tversky [1977]. The similarity between them can be defined as a function of the features they have in common Gilovich [1999], Goldstone [1999], Osherson [1995], Sun [1995]. The contrast model of Tversky Tversky [1977] is one well-known model in cognitive psychology Opwis and Plötzner [1996], Smith [1995], which describes the similarity between two objects described by their features. An object is judged to belong to a verbal category to the extent that its features are predicted by the verbal category Osherson [1987]. The similarity of a category represented by a feature set *C* and a feature set *F* is given by the following formula, which is inspired by the contrast model of Tversky Opwis and Plötzner [1996], Smith [1995], Tversky [1977]:

$$Sim(C, F) = \frac{|C \cap F|}{|C|} \in [0, 1] \quad (1)$$

---PAGE_BREAK---

$|C|$ is the number of prototypical features that define the category *C*. For example, the category *bird* is defined by the following features: flies, sings, lays eggs, nests in trees, eats insects. The category *bat* is defined by the following features: flies, gives milk, eats insects. Suppose the following features are present: flies and gives milk. Then

$$Sim(\mathbf{bird}, \text{present features}) = \frac{1}{5}$$

$$Sim(\mathbf{bat}, \text{present features}) = \frac{2}{3}$$

The present features are counted and normalized so that the values can be compared. The similarity value can be interpreted as the probability that the object belongs to the category. This is a very simple and efficient form of representing sub-symbols. The set of features can be represented by a binary vector in which the positions represent the different features. For each category a binary vector can be defined. Overlaps between stored patterns correspond to overlaps between categories.
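With feature sets, Equation (1) and the bird/bat example can be reproduced directly; a minimal sketch in Python:

```python
def sim(category, present):
    """Similarity of Equation (1): |C intersect F| / |C|."""
    return len(category & present) / len(category)

bird = {"flies", "sings", "lays eggs", "nests in trees", "eats insects"}
bat = {"flies", "gives milk", "eats insects"}
present = {"flies", "gives milk"}

print(sim(bird, present))  # 0.2, i.e. 1/5 as in the text
print(sim(bat, present))   # 0.666..., i.e. 2/3 as in the text
```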
|
| 53 |
+
|
| 54 |
+
## 2.2 Sub-symbolic representation by associative memory
|
| 55 |
+
|
| 56 |
+
Associative memory models human memory Churchland and Sejnowski [1994], Fuster [1995], Palm [1990], Squire and Kandel [1999]. The associative memory and sub-symbolic distributed representation incorporate the following abilities in a natural way Anderson [1995b], Hertz et al. [1991], Kohonen [1989], Palm [1982]:
|
| 57 |
+
|
| 58 |
+
* The ability to correct faults if false information is given.
|
| 59 |
+
|
| 60 |
+
* The ability to complete information if some parts are missing.
|
| 61 |
+
|
| 62 |
+
* The ability to interpolate information. In other words, if a sub-symbol is not currently stored the most similar stored sub-symbol is determined.
|
| 63 |
+
|
| 64 |
+
The Lernmatrix, also simply called “associative memory”, was developed by Steinbuch in 1958 as a biologically inspired model from the effort to explain the psychological phenomenon of conditioning Steinbuch [1961, 1971]. Later this model was studied under biological and mathematical aspects by Willshaw Willshaw et al. [1969] and Palm Palm [1982, 1990].
|
| 65 |
+
|
| 66 |
+
Associative memory is composed of a cluster of units. Each unit represents a simple model of a real biological neuron. The Lernmatrix was invented by Steinbuch, whose goal was to produce a network that could use a binary version of Hebbian learning to form associations between pairs of binary vectors, for example each one representing a cognitive entity. Each unit is composed of binary weights, which correspond to the synapses and dendrites in a real neuron. They are described by $w_{ij} \in \{0, 1\}$ in Figure 1. T is the threshold of the unit. The Lernmatrix is simply called *associative memory* if no confusion with other models is possible Anderson [1995a], Ballard [1997].
|
| 67 |
+
|
| 68 |
+
The patterns, which are stored in the Lernmatrix, are represented by binary vectors. The presence of a feature is indicated by a ‘one’ component of the vector, its absence through a ‘zero’ component of the vector. A pair of these vectors is associated and this process of association is called learning. The first of the two vectors is called the *question vector* and the second, the *answer vector*. After learning, the question vector is presented to the associative memory and the answer vector is determined by the retrieval rule.
|
| 69 |
+
|
| 70 |
+
**Learning** Initially, no information is stored in the associative memory. Because the information is represented in weights, all unit weights are initially set to zero.
|
| 71 |
+
---PAGE_BREAK---
|
| 72 |
+
|
| 73 |
+
Figure 1: The Lernmatrix is composed of a set of units which represent a simple model of a real biological neuron. The unit is composed of weights, which correspond to the synapses and dendrites in the real neuron. In this figure they are described by $w_{ij} \in \{0, 1\}$ where $1 \le i \le m$ and $1 \le j \le n$. T is the threshold of the unit.
|
| 74 |
+
|
| 75 |
+
In the learning phase, pairs of binary vector are associated. Let $\vec{x}$ be the question vector and $\vec{y}$ the answer vector, the learning rule is:
|
| 76 |
+
|
| 77 |
+
$$ w_{ij}^{new} = \begin{cases} 1 & \text{if } y_i \cdot x_j = 1 \\ w_{ij}^{old} & \text{otherwise.} \end{cases} \qquad (2) $$
|
| 78 |
+
|
| 79 |
+
This rule is called the binary Hebbian rule Palm [1982]. Every time a pair of binary vectors is stored, this rule is used.
|
| 80 |
+
|
| 81 |
+
**Retrieval** In the *one-step* retrieval phase of the associative memory, a fault tolerant answering mechanism recalls the appropriate answer vector for a question vector $\vec{x}$. The retrieval rule for the determination of the answer vector $\vec{y}$ is:
|
| 82 |
+
|
| 83 |
+
$$ y_i = \begin{cases} 1 & \sum_{j=1}^{n} w_{ij} x_j = T \\ 0 & \text{otherwise.} \end{cases} \qquad (3) $$
where T is the threshold of the unit. The threshold T is set to the number of “one” components in the question vector $\vec{x}$, $T := |\vec{x}|$. It is quite possible that no answer vector is determined (zero answer vector). This happens when the question vector has a subset of components that was not correlated with the answer vector.
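As a minimal sketch, the learning rule (Equation 2) and the retrieval rule (Equation 3) can be written down directly; the dimensions and the stored pair below are illustrative:

```python
import numpy as np

def learn(W, x, y):
    """Binary Hebbian rule (Equation 2): w_ij becomes 1 wherever y_i * x_j = 1."""
    W |= np.outer(y, x)
    return W

def retrieve(W, x):
    """One-step retrieval (Equation 3): unit i fires iff its dendritic sum
    reaches the threshold T = |x|, the number of ones in the question vector."""
    T = x.sum()
    return (W @ x == T).astype(np.uint8)

n = m = 8                                   # illustrative dimensions
W = np.zeros((m, n), dtype=np.uint8)        # all weights start at zero
x = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=np.uint8)  # question vector
y = np.array([0, 1, 0, 0, 1, 0, 0, 0], dtype=np.uint8)  # answer vector
learn(W, x, y)
assert (retrieve(W, x) == y).all()          # the stored pair is recalled
```

A question vector whose ones were never correlated with any answer yields the zero answer vector, as described above.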
**Storage capacity** For an estimation of the asymptotic number $L$ of vector pairs $(\vec{x}, \vec{y})$ that can be stored in an associative memory before it begins to make mistakes in the retrieval phase, it is assumed that both vectors have the same dimension $n$. It is also assumed that both vectors are composed of $k$ ones, which are equally likely to be in any coordinate of the vector. In this case it was shown Hecht-Nielsen [1989], Palm [1982], Sommer [1993] that the optimum value for $k$ is approximately
$$ k \doteq \log_2(n/4). \qquad (4) $$
For example, for a vector of dimension $n=1000000$, only $k = 18$ ones should be used to code a pattern according to Equation 4. For an optimal value of $k$ according to Equation 4, with ones equally distributed over the coordinates of the vectors, approximately $L$ vector pairs can be stored in the associative memory Hecht-Nielsen [1989], Palm [1982]. $L$ is approximately
$$ L \doteq (\ln 2)(n^2/k^2). \qquad (5) $$
This value is much greater than $n$. The estimate of $L$ is very rough because Equation 5 is only valid for very large networks. Equation 5 does not apply to networks of reasonable size; however, the capacity increase is still considerable. For realistic values please consult Table 2 in Knoblauch et al. [2010]. Small deviations from logarithmic sparseness reduce the network capacity. It is very difficult to find coding schemas that represent information by logarithmically sparse codes Knoblauch et al. [2010].
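A quick numeric check of Equations 4 and 5 for the example above (these are the asymptotic estimates, not realistic capacities):

```python
import math

n = 1_000_000                               # vector dimension from the text
k = round(math.log2(n / 4))                 # Equation 4: optimal number of ones
L = round(math.log(2) * n**2 / k**2)        # Equation 5: asymptotic capacity
assert k == 18                              # matches the value in the text
assert L > n                                # far more pairs than the dimension
```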
It should be noted that the Lernmatrix system allows high capacity and fast access when working in parallel: each unit represents a neuron that performs its calculations independently. On a conventional von Neumann architecture, compressed look-up tables are more efficient Knoblauch et al. [2010]. However, a von Neumann architecture is not biologically plausible.
## 3 Sparse Code for Sub-Symbols
Usually suboptimal sparse codes are used. An example of a suboptimal sparse code is the representation of words by context-sensitive letter units Bentz et al. [1989], Rumelhart and McClelland [1986], Wickelgren [1969, 1977]. The ideas for this robust mechanism come from psychology and biology Bentz et al. [1989], Rumelhart and McClelland [1986], Wickelgren [1969, 1977]. Each letter in a word is represented as a triple, which consists of the letter itself, its predecessor, and its successor. For example, six context-sensitive letters encode the word desert, namely: \_de, des, ese, ser, ert, rt\_. The character “\_” marks the word beginning and ending. Because the alphabet is composed of 26+1 characters, $27^3$ different context-sensitive letters exist. In the $27^3$-dimensional binary vector each position corresponds to a possible context-sensitive letter, and a word is represented by indicating the context-sensitive letters actually present.
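A small sketch of the context-sensitive letter code; the function names are illustrative:

```python
def context_sensitive_letters(word, boundary="_"):
    """Each letter becomes a triple: predecessor, letter, successor."""
    padded = boundary + word + boundary
    return [padded[i - 1:i + 2] for i in range(1, len(padded) - 1)]

ALPHABET = "_abcdefghijklmnopqrstuvwxyz"     # 26 + 1 characters

def triple_index(triple):
    """Position of a context-sensitive letter in the 27^3-dimensional vector."""
    a, b, c = (ALPHABET.index(ch) for ch in triple)
    return (a * 27 + b) * 27 + c

triples = context_sensitive_letters("desert")
assert triples == ["_de", "des", "ese", "ser", "ert", "rt_"]
assert all(0 <= triple_index(t) < 27**3 for t in triples)
```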
A set of features can be represented by a binary vector, and such a set can represent a category. A position in the corresponding vector corresponds to a feature. To be sparse, the set of features that describes a category has to be sufficiently small compared to the dimension of the vector. This is because, of all possible features, only some should define categories. This can be achieved by sparsification based on a unary sub-vector representation.
### 3.1 Sparsification based on unary sub-vectors
A binary representation of a number $h$ requires a vector of length $d = \lceil \log_2 h + 1 \rceil$. However, if we represent the number $h$ in unary, we require $h$ positions. One unary representation of $h \neq 0$ is a string of $h-1$ zeros with a one at the $h$-th position. A binary number of length $d$ is represented by a unary number of $2^d$ positions, which is exponential in the size of the input. A binary vector $\vec{x}$ of dimension $t$ is split into $f$ distinct sub-vectors of dimension $p = t/f$. The binary sub-vectors $u_i(\vec{x})$ of dimension $p$ are represented as unary vectors of dimension $2^p$:
$$ \vec{x} = \underbrace{x_1, x_2, \dots, x_p}_{u_1(\vec{x})}, \dots, \underbrace{x_{t-p+1}, \dots, x_t}_{u_f(\vec{x})} \quad (6) $$
The resulting binary vector is composed of the unary vectors and has the dimension $f \cdot 2^p$. In the following example a binary vector of dimension 6 is split into 2 distinct sub-vectors of dimension 3. The binary sub-vectors $u_i(\vec{x})$ of dimension 3 are represented as unary vectors of dimension $2^3$:
$$ \vec{x} = \underbrace{1,0,1}_{u_1(1,0,1)}, \underbrace{0,0,1}_{u_2(0,0,1)} \qquad (7) $$
$$ u_1(1,0,1) = (0,0,0,0,1,0,0,0); \quad (h=5) $$
$$ u_2(0,0,1) = (1,0,0,0,0,0,0,0); \quad (h=1) $$
$$ u(\vec{x}) = (0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0) \qquad (8) $$
Resulting in a new vector of dimension $16 = 2 \times 2^3$ with 2 ones.
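The sparsification above can be sketched as follows; the handling of an all-zero sub-vector ($h = 0$, mapped to the zero vector) is an assumption, since the text defines the unary code only for $h \neq 0$:

```python
def unary(bits):
    """Unary code of a binary sub-vector: read the bits as a number h
    (most significant bit first) and set the h-th of 2^p positions.
    The all-zero sub-vector (h = 0) maps to the zero vector (an assumption)."""
    h = int("".join(map(str, bits)), 2)
    out = [0] * 2 ** len(bits)
    if h > 0:
        out[h - 1] = 1
    return out

def sparsify(x, f):
    """Split x into f sub-vectors of dimension p = t/f and concatenate
    their unary codes, as in Equations 6-8."""
    p = len(x) // f
    return [bit for i in range(f) for bit in unary(x[i * p:(i + 1) * p])]

u = sparsify([1, 0, 1, 0, 0, 1], f=2)       # the worked example above
assert len(u) == 16 and sum(u) == 2         # dimension 2 * 2^3 with 2 ones
assert u[4] == 1 and u[8] == 1              # h = 5 and h = 1, respectively
```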
### 3.2 Sensors at different positions
Such a binary sparse vector could correspond to a set of *c* sensors at different positions. At each position one and only one sensor is activated. For *f* positions and *c* sensors, a state would be represented by a binary vector of dimension *f* × *c* with *f* ones. In the preceding example *f* = 2 and *c* = 8. An example of such unary coding is the Map Transformation Cascade model of the visual system Cardoso and Wichert [2010]. Several models of the visual system [Cardoso and Wichert, 2010, Fukushima, 1980, 1989, Riesenhuber and Poggio, 1999] were motivated by the work of Hubel and Wiesel on the visual system. The neural units have local receptive fields and are ordered in layers. The layers form a hierarchy in the sense that features at one stage are built from features at earlier stages. An image passes through layers of units with progressively more complex features. The hierarchical network gradually reduces the information from the input layer through the output layer until classification can be performed. The Map Transformation Cascade Cardoso and Wichert [2010] is a less complex description of the pattern recognition capabilities of the Neocognitron. Each layer represents a set of features; it is described by a binary vector in which the positions represent different features at different positions on the image. The input image is tiled with a squared mask *M* of size *j* × *j* in which a corresponding category of a feature is determined, see Figure 2. Each feature is determined through the use of the elements in each squared mask. Each of the corresponding *f* sub-patterns $\vec{x}_t$, with $t \in \{1, 2, ..., f\}$, is mapped into one corresponding category represented by a number *h*. The categories can be learned by a simple clustering algorithm such as K-Means Cardoso and Wichert [2010]. The number of categories is *c*. A category *h* is represented by a unary vector of dimension *c* with *c* − 1 zeros and a one at position *h*. The whole image state is represented by a binary vector of dimension *n* = *c* × *f* with *f* ones. This vector is formed by the concatenation of the unary vectors that represent the categories at the different positions.
### 3.3 Logarithmic sparsification
The ideal value for *c* in a logarithmic sparse code is related to the optimal number of ones, $k \doteq \log_2(n/4)$:
$$ k = \log_2(f \cdot c/4) $$
$$ 2^k = f \cdot c/4 $$
$$ c = \frac{4 \cdot 2^k}{f} \qquad (9) $$
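A quick check of Equation 9; note that the worked example of Section 3.1 ($f = 2$, $k = 2$, $n = 16$) hits the ideal value exactly, since $\log_2(16/4) = 2 = k$:

```python
def ideal_c(f, k):
    """Equation 9: c = 4 * 2^k / f, solved from k = log2(f * c / 4)."""
    return 4 * 2 ** k / f

# The worked example of Section 3.1 (f = 2 sub-vectors, k = 2 ones,
# n = 16) meets the ideal exactly: c = 2^3 = 8 unary positions.
assert ideal_c(2, 2) == 8.0
```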
The ideal value for *c* grows exponentially with *f* under the assumption that the number of ones is *k* = *f*. Usually the value for *c* is much lower than the ideal
Figure 2: The input image is tiled with a squared mask *M* of size *j* × *j* in which a corresponding category of a feature is determined. Each feature is determined through the use of the elements in each squared mask. In this example, a simple image is covered with *f* = 36 masks, a category would correspond to a line or an edge at a certain orientation.
value, resulting in a suboptimal sparse code. The representation of images by masks results in a suboptimal code. The optimal code is approached with the size of the masks: the larger the mask, the smaller the value of $f$. The number of pixels inside a mask grows quadratically with the length of its edge. A larger mask implies the ability to represent more distinct categories, which implies a bigger $c$. An ideal value for $c$ is possible only if $f \ll 100$.
### 3.4 Logarithmic sparsification based on cognitive entities
Often no category is present at a certain location. The absence of a category could be represented by a vector with *c* zeros ($h=0$). Suppose that the actual number of present categories is much smaller than the number of positions; in this case $k \ll f$. A pointer representation of objects in a scene leads to an even sparser representation. In this case the number of ones $k$ is related not to the number of categories at $f$ positions, but to the number of objects at $f$ possible positions. Objects and their positions in the visual field can represent a visual scene. Each object is represented by a sub-vector of the vector representing the visual scene.
### 3.4.1 Cognitive entities
It was suggested Gross and Mishkin [1977] that the brain includes two mechanisms for visual categorization Posner and Raichle [1994]: one for the representation of the object and the other for the representation of its localization Kosslyn [1994]. According to this division, the identity of a visual object can be coded apart from its location. A visual scene can be represented either by an image or by objects and their positions in the visual field. Objects are represented by patterns together with their corresponding positions in the image. Cognitive entities Anderson [1995a] represent objects and their positions in the image. Each cognitive entity represents the identity of the object, and its position is given by Cartesian coordinates (see Figures 3, 4 and 5). The advantage of such a cognitive entity representation is that the manipulation of objects is simplified and it is a basis for a binary sparse code
Figure 3: Representation of an object in a 2D world (a) by a cognitive entity (b). The identity of an object is represented in the first associative field by a binary pattern that is normalized for size and orientation. Its location corresponding to the abscissa is represented by a binary vector in the second associative field. The location corresponding to the ordinate is likewise represented by a binary vector in the third associative field of the size of the ordinate of the pictogram representing the state. A binary bar of the size and position of the object in the pictogram of the state represents the location.
that can be stored efficiently in an associative memory.
The identity of an object is represented by a binary pattern which is normalized for size and orientation. Its location on the x-axis is represented by a binary vector of the size of the abscissa of the pattern representing the object. The location on the y-axis is likewise represented by a binary vector of the size of the ordinate of the pattern representing the object. A binary bar of the size and position of the object in the pictogram of the state represents the location and size (see Figure 3) in each of those vectors. The three vectors that compose the cognitive entity are called associative fields. Each associative field is represented by a binary vector of a fixed dimension; each cognitive entity is formed by the concatenation of the associative fields.
### 3.4.2 Representation of a cognitive entity by a unary vector
A cognitive entity can alternatively be represented by a unary vector. A simple code for an object would indicate whether it is present or not: one indicates present, zero not present. Four categories of objects are represented in this example by the first associative field: cube, cube clear, pyramid and pyramid clear ($c = 4$, see Figures 6, 4 and 5).
The presence of a category is indicated by a unary vector of dimension four. There are $x \times y$ possible positions of the object. In our example there are $10 \times 10$ possible positions, see Figure 7.
A cognitive entity represents an object at a certain position. The corresponding category is represented by a unary vector of dimension four on the corresponding position. For each remaining position, a zero vector of dimension four is repeated, see Figure 8. The principle of forming a unary vector representing a cognitive entity is based on the tensor product between the vector representing the category that is present or not and the vector representing the position.
As a result, a cognitive entity is represented by a unary vector of the dimension $c \times x \times y$ or $c \times f$, see Figure 8.
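The tensor-product construction of Figure 8 can be sketched as follows; the assignment of the cube to category number 1 is an illustrative assumption:

```python
import numpy as np

def entity_vector(category, position, c=4, f=100):
    """Tensor (outer) product of a unary position vector (dimension f) and a
    unary category vector (dimension c), flattened to c * f components.
    Indices are 1-based, as in Figure 8."""
    pos = np.zeros(f, dtype=np.uint8)
    cat = np.zeros(c, dtype=np.uint8)
    pos[position - 1] = 1
    cat[category - 1] = 1
    return np.outer(pos, cat).ravel()

v = entity_vector(category=1, position=92)  # a cube at position 92
assert v.shape == (400,) and v.sum() == 1   # one one in c * f = 400 positions
assert v[(92 - 1) * 4] == 1                 # the one sits in block 92
```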
Figure 4: A state in the geometric block world. Blocks can be placed in three different positions and picked up and set down. There are two different categories of blocks: cubes and pyramids. No other block may be placed on top of a pyramid, while either type of block may be placed on top of a cube. The gripper is represented in the upper right corner. Five objects are present: three cubes and two pyramids. The "clear" positions are represented by a dot.
Figure 5: Alternative representation of the geometric block world state (see Figure 4) represented by a set of eight cognitive entities.
Figure 6: Four categories of objects are represented in our example by the first associative field: cube, cube clear, pyramid and pyramid clear.
Figure 7: There are $f = 10 \times 10$ possible positions of 4 categories of objects. A cognitive entity is represented by a unary vector of the dimension $4 \times 100$. Seven objects are represented, two cubes, three clears, one pyramid and one clear pyramid. The visual scene is represented by seven unary vectors. Each cognitive entity corresponds to a unary sub-vector.
Figure 8: A cognitive entity represents an object at a certain position. The corresponding category is represented by a unary vector of dimension four on the corresponding position, in our case a cube on the position 92. For each remaining position, a zero vector of dimension four is repeated indicating that no category is present.
### 3.4.3 Sparse scene representation based on unary sub-vectors
A scene is represented by a set of these unary vectors resulting in a sparse binary vector. Each cognitive entity corresponds to a unary sub-vector, which represents an object. A set of those sub-vectors represents a visual scene.
For an image of size $x \times y$ there are $f = x \cdot y$ possible object positions; in our example there are $f = 10 \times 10$ possible positions of 4 categories of objects. A cognitive entity is represented by a unary vector of dimension $4 \times 100$, and $m$ objects are represented by $m$ cognitive entities. In a vector representing a visual scene, the objects correspond to unary sub-vectors of fixed length. This form of representation is called the set pointer representation and it corresponds to the cognitive entity representation (see Figure 5). The set pointer representation allows the manipulation of objects by means of proto logic. Proto logic allows access to the objects of a set, as shown in Section 4. A binary vector representing a visual scene, like a state in the block world, is formed by the concatenation of the unary vectors. This representation is called a "set" because the order of the unary vectors representing the cognitive entities is not defined. The representation is logarithmically sparse; in our example, $m = 7$ objects with $c = 4$ and $f = 100$ result in a vector of dimension 2800 with only seven ones. Here $k = m$, and the maximal number of represented objects of four different categories ($c=4$) at $f = 100$ positions is constrained by $m < 10$ for a logarithmic sparse code:
$$
\begin{align*}
k &\doteq \log_2(n/4) \\
k &\le \log_2(m \cdot 4 \cdot 100/4) \\
k &\le \log_2(m) + \log_2(100)
\end{align*} $$
for $k=m$,
$$ m - \log_2(m) \le 6.64 $$
To compute a distance between two visual scenes represented by two “sets” of $m$ cognitive entities, i.e. two $c \times f \times m$ binary sparse vectors with $m$ ones, one computes a bitwise “OR” between the $m$ sub-vectors representing the $m$ cognitive entities of each set, resulting in a $c \times f$-dimensional vector with $m$ ones (compressed ordered representation). The similarity is measured by Equation 1. The similarity between visual scenes is defined as a function of the objects and locations they have in common Gilovich [1999], Goldstone [1999], Osherson [1995], Sun [1995].
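The OR-compression of a scene "set" can be sketched as below. Equation 1 itself is defined earlier in the paper and is not reproduced here; the normalized overlap used as `similarity` is a stand-in, not the author's exact formula:

```python
import numpy as np

def entity(cat, pos, c=4, f=100):
    """Unary sub-vector of a cognitive entity (1-based category and position)."""
    v = np.zeros(c * f, dtype=np.uint8)
    v[(pos - 1) * c + (cat - 1)] = 1
    return v

def compress(entities):
    """Bitwise OR over the m unary sub-vectors of a scene "set", yielding a
    c * f-dimensional vector (the compressed ordered representation)."""
    return np.bitwise_or.reduce(np.array(entities))

def similarity(a, b):
    """Stand-in for Equation 1: the fraction of ones two scenes share."""
    return (a & b).sum() / max((a | b).sum(), 1)

scene_a = compress([entity(1, 92), entity(3, 15), entity(2, 40)])
scene_b = compress([entity(1, 92), entity(3, 15), entity(4, 40)])
assert scene_a.sum() == 3                   # m = 3 ones after compression
assert similarity(scene_a, scene_b) == 0.5  # two of four distinct objects shared
```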
## 4 Sub-symbolic Production System based on Proto Logic
Human problem solving can be described by a problem-behaviour graph constructed from a protocol of the person talking aloud, mentioning considered moves and aspects of the situation. According to the resulting theory, problems are solved by searching a problem space that includes the initial situation and the desired situation Anderson [1995b], Newell [1990]. This process can be described by the production system theory. The production system, in the context of classical Artificial Intelligence and Cognitive Psychology, is one of the most successful computer models of human problem solving. The production system theory describes how to form a sequence of actions that lead to a goal, and offers a computational theory of how humans solve problems Anderson [1995b]. Production systems are composed of if-then rules that are also called productions. A rule contains several “if” patterns and one or more “then” patterns. A pattern in the context of rules is an individual predicate, which can be negated, together with arguments. A rule can establish a new assertion by the “then” part (its conclusion) whenever the “if” part (its precondition) is true. One of the best-known cognitive models based on the production system is Soar. The Soar state, operator and result model was developed to explain human problem-solving behaviour Newell [1990]. It is a hierarchical production system in which the conflict-resolution strategy is treated as another problem to be solved.
According to the production system theory, a geometrically based problem-solving model can be defined as a production system operating on vectors of fixed dimension. Instead of rules, associations are used, and vectors represent states. Instead of predicates and facts, sub-vectors and proto logic are used. The goal is to form a sequence of associations that leads from an initial state represented by a vector to a desired state represented by a vector. Each association changes some parts of the vector. In each state, several possible associations can be executed, but only one has to be chosen; otherwise, conflicts in the representation of the state would occur. To perform these operations, a vector representing a state is divided into sub-vectors. An association recognizes some sub-vectors of the vector and exchanges them for different sub-vectors. The association is composed of a precondition of $\beta$ sub-vectors in a fixed arrangement and a conclusion of $\beta$ sub-vectors. Associations are learned by the associative memory (see Figure 9). Each cognitive entity is represented by a unary vector. The precondition and the conclusion are each represented by a $4 \times 100 \times 3 = 1200$-dimensional binary vector with three ones.
Figure 9: The learning phase of an association represented by 3 sub-vectors. In our example the precondition and the conclusion are each represented by a $4 \times 100 \times 3 = 1200$-dimensional binary vector with three ones.
### 4.1 Proto Logic
Suppose a vector is divided into $\alpha$ sub-vectors with $\alpha > \beta$. An association recognizes $\beta$ different sub-vectors and exchanges them for $\beta$ different sub-vectors.
Let $\alpha = 7$ be the number of objects recognized in the visual scene. The seven visual objects at certain positions of the scene are indicated by the symbols A, B, C, D, E, F and G. The task of proto logic is to identify a precondition formed by the visual objects represented by the set $B, C, G$, with $\beta = 3$. *Proto logic operates on sets. It verifies whether a subset is present in a certain set.* The task of proto logic is trivial when working with sets. Each of the symbols $B, C, G$ is checked for presence in the set that represents the scene, i.e. it is verified whether the set representing a precondition is a subset of the set representing the scene.
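On sets, the proto logic test is indeed trivial; a sketch with the example symbols:

```python
scene = {"A", "B", "C", "D", "E", "F", "G"}   # the alpha = 7 recognized objects
precondition = {"B", "C", "G"}                # beta = 3

assert precondition <= scene                  # subset test: precondition present
assert not {"B", "C", "H"} <= scene           # an absent object invalidates it
```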
However, if the precondition (a set of objects) is stored in an associative memory, the task of proto logic is non-trivial Wichert [2011]. In an associative memory there is no direct access to the stored information; an associative memory operates on vectors of fixed dimension.
A set of objects (a precondition) is represented by a vector formed by concatenating the sub-vectors that represent the objects. For *m* sub-vectors there are *m!* possible orderings of the corresponding sub-vectors. Each sub-vector corresponds to a cognitive entity.
To verify whether a set of $\beta$ sub-vectors representing a precondition is a subset of the set of $\alpha$ sub-vectors representing a scene, there are $\frac{\alpha!}{(\alpha-\beta)!}$ orderings to examine. It is then verified whether each permutation corresponds to a valid precondition of an association. For example, if there is a total of seven elements and a sequence of three elements from this set is selected, then the first selection is one from seven elements, the next one from the remaining six, and finally one from the remaining five, resulting in $7 \times 6 \times 5 = 210$ permutations, see Figure 10. In our example, all possible three-permutation sub-vectors of seven sub-vectors are formed to test whether the precondition of an association is valid. An association is valid if the answer vector representing the conclusion is not equal to a zero vector.
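The permutation count can be checked directly; `is_valid` below is a stub standing in for the associative memory query of Equation 3:

```python
from itertools import permutations

subvectors = ["A", "B", "C", "D", "E", "F", "G"]     # alpha = 7
beta = 3

# All ordered selections of beta out of alpha: alpha! / (alpha - beta)!
candidates = list(permutations(subvectors, beta))
assert len(candidates) == 7 * 6 * 5 == 210

def is_valid(triple):
    """Stub for the associative memory query: in the real system the
    concatenated triple is the question vector, and the association is
    valid iff the answer vector (Equation 3) is non-zero."""
    return set(triple) == {"B", "C", "G"}            # one stored precondition

matches = [t for t in candidates if is_valid(t)]
assert len(matches) == 6                             # 3! orderings of {B, C, G}
```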
Figure 10: To recognize one learned association, permutations are formed. For example, if there is a total of seven elements and a sequence of three elements from this set is selected, then the first selection is one from seven elements, the next one from the remaining six, and finally one from the remaining five, resulting in $7 \times 6 \times 5 = 210$. In our example, all possible three-permutation sub-vectors of seven sub-vectors are formed to test if the precondition of an association is valid.
### 4.2 Sub-symbolic Problem Solving
Sub-symbolic problem solving forms a sequence of associations that changes the initial scene into the desired scene. The input to the problem is the initial and the desired scene. The solution is the sequence of associations. Sub-symbolic problem solving is based on the following principles:
* A scene is represented by $\alpha$ different sub-vectors. Each sub-vector corresponds to a cognitive entity.
* The association is composed of a precondition of $\beta$ sub-vectors in a fixed arrangement and a conclusion of $\beta$ sub-vectors, with $\alpha > \beta$.
* Proto logic verifies whether a subset of $\beta$ sub-vectors is present in the set of $\alpha$ different sub-vectors representing a scene.
**Algorithm for sub-symbolic problem solving**
1. A scene (starting with the initial scene) is represented by $\alpha$ different sub-vectors.
2. Valid associations are determined; their number is indicated by $\omega$.
3. $\omega$ identical copies of the set of $\alpha$ different sub-vectors representing a scene are formed.
4. For each of the $\omega$ valid associations and each copy of the set representing the scene;
(a) The subset of $\beta$ sub-vectors matching the precondition is replaced by the subset of $\beta$ sub-vectors of the conclusion of the association, resulting in a temporal answer set.
5. The $\omega$ temporal answer sets of $\alpha$ sub-vectors are mapped into $\omega$ answer vectors by performing a bitwise "OR" between the $\alpha$ sub-vectors representing the $\alpha$ cognitive entities of each temporal answer set.
6. The similarity between the $\omega$ answer vectors and the desired state is measured by Equation 1.
7. If the similarity corresponds to equality, the problem is solved and the computation is terminated.
8. Otherwise the answer vector most similar to the desired state according to Equation 1 is chosen. The corresponding temporal answer set represents the new scene, and a new cycle of computation begins. The computation is repeated in cycles until a solution is found. This search strategy corresponds to hill climbing Winston [1992].
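The algorithm above can be sketched as a hill-climbing loop; the scene representation (frozensets of symbols) and the similarity function are illustrative stand-ins for the vector machinery described in the text:

```python
def swap(pre, post):
    """An illustrative association: replace the subset `pre` by `post`,
    or return None when the precondition is not present in the scene."""
    pre, post = frozenset(pre), frozenset(post)
    return lambda scene: (scene - pre) | post if pre <= scene else None

def solve(initial, desired, associations, similarity, max_cycles=100):
    """Hill-climbing loop of the algorithm above (scenes are frozensets)."""
    scene = initial
    for _ in range(max_cycles):
        if scene == desired:                     # step 7: equality -> solved
            return scene
        successors = [a(scene) for a in associations]          # steps 2-5
        successors = [s for s in successors if s is not None]
        if not successors:
            return None                          # no valid association
        scene = max(successors, key=lambda s: similarity(s, desired))  # step 8
    return None

jaccard = lambda a, b: len(a & b) / len(a | b)   # stand-in for Equation 1
assocs = [swap("A", "B"), swap("B", "C")]
assert solve(frozenset("AX"), frozenset("CX"), assocs, jaccard) == frozenset("CX")
```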
Out of several possible associations, the one is chosen that modifies the state in such a way that it becomes more similar to the desired state according to Equation 1. The desired state corresponds to the category of Equation 1; each feature represents a possible state. The states are represented by sparse features. With the aid of this similarity, heuristic hill climbing is performed.
### 4.3 Simple and universal heuristic function
The computation can be improved by a simple and universal heuristic function, which takes into account the relationship between the vectors and the similarity of the represented states (see Figures 11 and 12). The heuristic function makes the simple assumption that the distance between states in the problem space is related to the similarity of the vectors representing the states. The similarity between the corresponding vectors can indicate the distance between the sub-symbols representing the states. Empirical experiments in popular problem-solving domains of Artificial Intelligence, like a robot in a maze, the block world or the 8-puzzle, indicate that the distance between states in the problem space is actually related to the similarity between the images representing the states Wichert [2001, 2009], Wichert et al. [2008].
The hill climbing search results from the fact that the distance between states in the problem space is related to the similarity between the sub-symbols. This heuristic is fairly simple and cannot be applied to problems where the similarity of the representation is not related to the distance in the problem space, such as the missionaries and cannibals problem. This also happens due to the fact
Figure 11: The simplest method corresponds to a random choice, and does not offer any advantage over simple symbolical representation. An example of visual planning of the tower building task of three blocks using random choice is shown. The upper left pattern represents the initial state; the bottom right pattern, the desired state.
Figure 12: The computation can be improved by a simple and universal heuristic function, which takes into account the relationship between the vectors and the similarity of the represented objects or states in the real world, as expressed by Equation 1 for binary vectors. The heuristic function makes the simple assumption that the distance between states in the problem space is related to the distance between the sub-symbols representing the visual states. An example of visual planning of the tower building task of three blocks using hill climbing based on the similarity function of Equation 1. The upper left pattern represents the initial state; the bottom right pattern, the desired state.
that we do not represent the problem space, and our system gets caught in loops and fails to deliver a solution. Of course, humans also have difficulties with problems like the missionaries and cannibals problem, in which one cannot perform the first necessary actions without undoing them at a later stage. In case the problems become too complex, the sub-symbolic problem is often transferred to a symbolic representation and solved using external memory in the real world, like paper and pencil Wichert et al. [2008].
## 5 Conclusion
Living organisms experience the world as a simple Euclidean geometrical world. The actual perception of the world and manipulation in the world by living organisms lead to the invention or recreation of an experience that, at least in some respects, resembles the experience of actually perceiving and manipulating objects in the absence of direct sensory stimulation. This kind of representation is called sub-symbolic. The manipulation of the sub-symbols is described by simple proto logic, which verifies whether a subset of sub-symbols is present in a certain set of sub-symbols. Sub-symbolic representation implies heuristic functions. The assumption that the distance between states in the problem space is related to the similarity between the sub-symbols representing the states is only valid in simple cases. However, simple cases represent the majority of problems in any real-world domain. Sense organs sense the world through receptors that are part of the sensory system and the nervous system. Optimal sparse binary vectors can describe sub-symbolic representation, which can be stored efficiently in biologically motivated associative memories.
|
| 314 |
+
|
| 315 |
+
## Acknowledgments
|
| 316 |
+
|
| 317 |
+
This paper is an extended version of the presentation during the NeSy'11 Workshop at IJCAI-11. The author would like to thank those present for the valuable discussion during the presentation and two anonymous reviewers for their valuable suggestions. This work was supported by Fundação para a Ciência e Tecnologia (FCT) (INESC-ID multiannual funding) through the PIDDAC Program funds.
|
| 318 |
+
|
| 319 |
+
## References
|
| 320 |
+
|
| 321 |
+
Anderson, J. A. (1995a). *An Introduction to Neural Networks*. The MIT Press.
|
| 322 |
+
|
| 323 |
+
Anderson, J. R. (1995b). *Cognitive Psychology and its Implications*. W. H. Freeman and Company, fourth edition.
|
| 324 |
+
|
| 325 |
+
Ballard, D. H. (1997). *An Introduction to Natural Computation*. The MIT Press.
|
| 326 |
+
|
| 327 |
+
Bentz, H. J., Hagstroem, M., and Palm, G. (1989). Information storage and effective data retrieval in sparse matrices. *Neural Networks*, 2(4):289–293.
|
| 328 |
+
|
| 329 |
+
Cardoso, A. and Wichert, A. (2010). Neocognitron and the map transformation cascade. *Neural Networks*, 23(1):74–88.
|
| 330 |
+
|
| 331 |
+
Churchland, P. S. and Sejnowski, T. J. (1994). *The Computational Brain*. The MIT Press.
|
| 332 |
+
|
| 333 |
+
Fukushima, K. (1980). Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. *Biol Cybern*, 36(4):193–202.
|
| 334 |
+
---PAGE_BREAK---
|
| 335 |
+
|
| 336 |
+
Fukushima, K. (1989). Analysis of the process of visual pattern recognition by the neocognitron. *Neural Networks*, 2:413–420.
|
| 337 |
+
|
| 338 |
+
Fuster, J. (1995). *Memory in the Cerebral Cortex*. The MIT Press.
|
| 339 |
+
|
| 340 |
+
Gilovich, T. (1999). Tversky. In *The MIT Encyclopedia of the Cognitive Sciences*, pages 849–850. The MIT Press.
|
| 341 |
+
|
| 342 |
+
Goldstone, R. (1999). Similarity. In *The MIT Encyclopedia of the Cognitive Sciences*, pages 763–765. The MIT Press.
|
| 343 |
+
|
| 344 |
+
Gross, C. G. and Mishkin, M. (1977). The neural basis of stimulus equivalence across retinal translation. In Harnad, S., Doty, R., Jaynes, J., Goldstein, L., and Krauthamer, G., editors, *Lateralization in the nervous system*. Academic Press, New York.
|
| 345 |
+
|
| 346 |
+
Hecht-Nielsen, R. (1989). *Neurocomputing*. Addison-Wesley.
|
| 347 |
+
|
| 348 |
+
Hertz, J., Krogh, A., and Palmer, R. G. (1991). *Introduction to the Theory of Neural Computation*. Addison-Wesley.
|
| 349 |
+
|
| 350 |
+
Knoblauch, A., Palm, G., and Sommer, F. (2010). Memory capacities for synaptic and structural plasticity. *Neural Computation*, 22:289–341.
|
| 351 |
+
|
| 352 |
+
Kohonen, T. (1989). *Self-Organization and Associative Memory*. Springer-Verlag, third edition.
|
| 353 |
+
|
| 354 |
+
Kosslyn, S. M. (1994). *Image and Brain, The Resolution of the Imagery Debate*. The MIT Press.
|
| 355 |
+
|
| 356 |
+
Lancy, D. (1983). *Cross-Cultural Studies in Cognition and Mathematics*. Academic Press, New York.
|
| 357 |
+
|
| 358 |
+
McClelland, J. and Rumelhart, D. (1985). Distributed memory and the representation of general and specific information. *Journal of Experimental Psychology: General*, 114:159–188.
|
| 359 |
+
|
| 360 |
+
Newell, A. (1990). *Unified Theories of Cognition*. Harvard University Press.
|
| 361 |
+
|
| 362 |
+
Newell, A. and Simon, H. (1976). Computer science as empirical inquiry: symbols and search. *Communications of the ACM*, 19(3):113–126.
|
| 363 |
+
|
| 364 |
+
Opwis, K. and Plötzner, R. (1996). *Kognitive Psychologie mit dem Computer*. Spektrum Akademischer Verlag, Heidelberg Berlin Oxford.
|
| 365 |
+
|
| 366 |
+
Osherson, D. N. (1987). New axioms for the contrast model of similarity. *Journal of Mathematical Psychology*, 31:93–103.
|
| 367 |
+
|
| 368 |
+
Osherson, D. N. (1995). Probability judgment. In Smith, E. E. and Osherson, D. N., editors, *Thinking*, volume 3, chapter two, pages 35–75. MIT Press, second edition.
|
| 369 |
+
|
| 370 |
+
Palm, G. (1982). *Neural Assemblies, an Alternative Approach to Artificial Intelligence*. Springer-Verlag.
|
| 371 |
+
|
| 372 |
+
Palm, G. (1990). Assoziatives Gedächtnis und Gehirntheorie. In *Gehirn und Kognition*, pages 164–174. Spektrum der Wissenschaft.
|
| 373 |
+
|
| 374 |
+
Posner, M. I. and Raichle, M. E. (1994). *Images of Mind*. Scientific American Library, New York.
|
| 375 |
+
---PAGE_BREAK---
|
| 376 |
+
|
| 377 |
+
Riesenhuber, M. and Poggio, T. (1999). Hierarchical models of object recognition in cortex. *Nature Neuroscience*, 2:1019–1025.
|
| 378 |
+
|
| 379 |
+
Rumelhart, D. and McClelland, J. (1986). On learning the past tenses of English verbs. In McClelland, J. and Rumelhart, D., editors, *Parallel Distributed Processing*, pages 216–271. MIT Press.
|
| 380 |
+
|
| 381 |
+
Simon, H. A. (1991). *Models of my Life*. Basic Books, New York.
|
| 382 |
+
|
| 383 |
+
Smith, E. E. (1995). Concepts and categorization. In Smith, E. E. and Osherson, D. N., editors, *Thinking*, volume 3, chapter one, pages 3–33. MIT Press, second edition.
|
| 384 |
+
|
| 385 |
+
Sommer, F. T. (1993). *Theorie neuronaler Assoziativspeicher*. PhD thesis, Heinrich-Heine-Universität Düsseldorf, Düsseldorf.
|
| 386 |
+
|
| 387 |
+
Squire, L. R. and Kandel, E. R. (1999). *Memory: From Mind to Molecules*. Scientific American Library.
|
| 388 |
+
|
| 389 |
+
Steinbuch, K. (1961). Die Lernmatrix. *Kybernetik*, 1:36–45.
|
| 390 |
+
|
| 391 |
+
Steinbuch, K. (1971). *Automat und Mensch*. Springer-Verlag, fourth edition.
|
| 392 |
+
|
| 393 |
+
Sun, R. (1995). A two-level hybrid architecture for structuring knowledge for commonsense reasoning. In Sun, R. and Bookman, L. A., editors, *Computational Architectures Integrating Neural and Symbolic Processing*, chapter 8, pages 247–282. Kluwer Academic Publishers.
|
| 397 |
+
|
| 398 |
+
Tarski, A. (1956). *Logic, Semantics, Metamathematics*. Oxford University Press, London.
|
| 400 |
+
|
| 401 |
+
Tversky, A. (1977). Features of similarity. *Psychological Review*, 84:327–352.
|
| 402 |
+
|
| 403 |
+
Wichert, A. (2001). Pictorial reasoning with cell assemblies. *Connection Science*, 13(1).
|
| 404 |
+
|
| 405 |
+
Wichert, A. (2009). Sub-symbols and icons. *Cognitive Computation*, 1(4):342–347.
|
| 406 |
+
|
| 407 |
+
Wichert, A. (2011). The role of attention in the context of associative memory. *Cognitive Computation*, 3(1).
|
| 409 |
+
|
| 410 |
+
Wichert, A., Pereira, J. D., and Carreira, P. (2008). Visual search light model for mental problem solving. *Neurocomputing*, 71(13–15):2806–2822.
|
| 412 |
+
|
| 413 |
+
Wickelgren, W. A. (1969). Context-sensitive coding, associative memory, and serial order in (speech)behavior. *Psychological Review*, 76:1–15.
|
| 414 |
+
|
| 415 |
+
Wickelgren, W. A. (1977). *Cognitive Psychology*. Prentice-Hall.
|
| 416 |
+
|
| 417 |
+
Willshaw, D., Buneman, O., and Longuet-Higgins, H. (1969). Non-holographic associative memory. *Nature*, 222:960–962.
|
| 419 |
+
|
| 420 |
+
Winston, P. H. (1992). *Artificial Intelligence*. Addison-Wesley, third edition.
|
samples/texts_merged/3975828.md
ADDED
|
@@ -0,0 +1,291 @@
|
| 1 |
+
|
| 2 |
+
---PAGE_BREAK---
|
| 3 |
+
|
| 4 |
+
# Upper Bounds on the Spanning Ratio of Constrained Theta-Graphs*
|
| 5 |
+
|
| 6 |
+
Prosenjit Bose and André van Renssen
|
| 7 |
+
|
| 8 |
+
School of Computer Science, Carleton University, Ottawa, Canada.
|
| 9 |
+
jit@scs.carleton.ca, andre@cg.scs.carleton.ca
|
| 10 |
+
|
| 11 |
+
**Abstract.** We present tight upper and lower bounds on the spanning ratio of a large family of constrained $\theta$-graphs. We show that constrained $\theta$-graphs with $4k + 2$ ($k \ge 1$ and integer) cones have a tight spanning ratio of $1 + 2\sin(\theta/2)$, where $\theta$ is $2\pi/(4k+2)$. We also present improved upper bounds on the spanning ratio of the other families of constrained $\theta$-graphs.
|
| 12 |
+
|
| 13 |
+
## 1 Introduction
|
| 14 |
+
|
| 15 |
+
A geometric graph $G$ is a graph whose vertices are points in the plane and whose edges are line segments between pairs of points. Every edge is weighted by the Euclidean distance between its endpoints. The distance between two vertices $u$ and $v$ in $G$, denoted by $d_G(u, v)$, is defined as the sum of the weights of the edges along the shortest path between $u$ and $v$ in $G$. A subgraph $H$ of $G$ is a *t-spanner* of $G$ (for $t \ge 1$) if for each pair of vertices $u$ and $v$, $d_H(u, v) \le t \cdot d_G(u, v)$. The smallest value $t$ for which $H$ is a *t-spanner* is the *spanning ratio* or *stretch factor*. The graph $G$ is referred to as the *underlying graph* of $H$. The spanning properties of various geometric graphs have been studied extensively in the literature (see [4,9] for a comprehensive overview of the topic). We look at a specific type of geometric spanner: *$\theta$-graphs*.
|
| 16 |
+
|
| 17 |
+
Introduced independently by Clarkson [6] and Keil [8], $\theta$-graphs partition the plane around each vertex into $m$ disjoint cones, each having aperture $\theta = 2\pi/m$. The $\theta_m$-graph is constructed by, for each cone of each vertex $u$, connecting $u$ to the vertex $v$ whose projection along the bisector of the cone is closest. Ruppert and Seidel [10] showed that the spanning ratio of these graphs is at most $1/(1 - 2\sin(\theta/2))$, when $\theta < \pi/3$, i.e. there are at least seven cones. Recent results include a tight spanning ratio of $1 + 2\sin(\theta/2)$ for $\theta$-graphs with $4k + 2$ cones [1], where $k \ge 1$ and integer, and improved upper bounds for the other three families of $\theta$-graphs [5].
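The cone-based construction just described can be sketched directly. The following is an illustrative implementation of the unconstrained $\theta_m$-graph only (constraints and subcones are omitted); the cone numbering and the projection-onto-bisector rule follow the description in the text, but the function names are this sketch's own.

```python
import math

def cone_index(u, v, m):
    """Index of the cone of apex u that contains v, for m cones with cone 0
    bisected by the upward vertical ray and cones numbered clockwise."""
    dx, dy = v[0] - u[0], v[1] - u[1]
    ang = math.atan2(dx, dy)              # clockwise angle from vertical
    theta = 2 * math.pi / m
    return int(((ang + theta / 2) % (2 * math.pi)) // theta)

def theta_graph(points, m):
    """Unconstrained theta_m-graph: in each cone of each vertex, connect to
    the vertex whose projection onto the cone's bisector is closest."""
    theta = 2 * math.pi / m
    edges = set()
    for u in points:
        best = {}  # cone index -> (projection length, vertex)
        for v in points:
            if v == u:
                continue
            i = cone_index(u, v, m)
            bis = i * theta               # clockwise angle of cone i's bisector
            proj = (v[0] - u[0]) * math.sin(bis) + (v[1] - u[1]) * math.cos(bis)
            if i not in best or proj < best[i][0]:
                best[i] = (proj, v)
        for _, v in best.values():
            edges.add(tuple(sorted((u, v))))
    return edges

print(theta_graph([(0, 0), (0, 1), (1, 2)], 6))
```

Note that each vertex adds at most one edge per cone, so the graph has at most $m \cdot n$ edges for $n$ points.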
|
| 18 |
+
|
| 19 |
+
Most of the research, however, has focused on constructing spanners where the underlying graph is the complete Euclidean geometric graph. We study this problem in a more general setting with the introduction of line segment constraints. Specifically, let $P$ be a set of points in the plane and let $S$ be a set
|
| 20 |
+
|
| 21 |
+
* Research supported in part by NSERC and Carleton University's President's 2010 Doctoral Fellowship.
|
| 22 |
+
---PAGE_BREAK---
|
| 23 |
+
|
| 24 |
+
of line segments between two vertices in P, called *constraints*. The set of constraints is planar, i.e. no two constraints intersect properly. Two vertices u and v can see each other if and only if either the line segment uv does not properly intersect any constraint or uv is itself a constraint. If two vertices u and v can see each other, the line segment uv is a *visibility edge*. The *visibility graph* of P with respect to a set of constraints S, denoted *Vis*(P, S), has P as vertex set and all visibility edges as edge set. In other words, it is the complete graph on P minus all edges that properly intersect one or more constraints in S.
|
| 32 |
+
|
| 33 |
+
This setting has been studied extensively within the context of motion planning amid obstacles. Clarkson [6] was one of the first to study this problem and showed how to construct a linear-sized (1+ε)-spanner of Vis(P, S). Subsequently, Das [7] showed how to construct a spanner of Vis(P, S) with constant spanning ratio and constant degree. The Constrained Delaunay Triangulation was shown to be a 2.42-spanner of Vis(P, S) [3]. Recently, it was also shown that the constrained θ₆-graph is a 2-spanner of Vis(P, S) [2]. In this paper, we generalize the recent results on unconstrained θ-graphs to the constrained setting. There are two main obstacles that differentiate this work from previous results. First, the main difficulty with the constrained setting is that induction cannot be applied directly, as the destination need not be visible from the vertex closest to the source (see Figure 5, where w is not visible from v₀, the vertex closest to u). Second, when the graph does not have 4k + 2 cones, the cones do not line up as nicely as in [2], making it more difficult to apply induction.
|
| 34 |
+
|
| 35 |
+
In this paper, we overcome these two difficulties and show that constrained θ-graphs with 4k+2 cones have a spanning ratio of at most 1 + 2 sin(θ/2), where θ is 2π/(4k + 2). Since the lower bounds of the unconstrained θ-graphs carry over to the constrained setting, this shows that this spanning ratio is tight. We also show that constrained θ-graphs with 4k + 4 cones have a spanning ratio of at most 1 + 2 sin(θ/2) / (cos(θ/2) − sin(θ/2)), where θ is 2π/(4k + 4). Finally, we show that constrained θ-graphs with 4k+3 or 4k+5 cones have a spanning ratio of at most cos(θ/4) / (cos(θ/2) − sin(3θ/4)), where θ is 2π/(4k+3) or 2π/(4k+5).
|
| 43 |
+
|
| 44 |
+
## 2 Preliminaries
|
| 45 |
+
|
| 46 |
+
We define a cone C to be the region in the plane between two rays originating from a vertex referred to as the apex of the cone. When constructing a (constrained) $\theta_{(4k+x)}$-graph, for each vertex $u$ consider the rays originating from $u$ with the angle between consecutive rays being $\theta = 2\pi/(4k+x)$, where $k \ge 1$ and integer and $x \in \{2, 3, 4, 5\}$. Each pair of consecutive rays defines a cone. The cones are oriented such that the bisector of some cone coincides with the vertical halfline through $u$ that lies above $u$. Let this cone be $C_0$ of $u$ and number the cones in clockwise order around $u$. The cones around the other vertices have the same orientation as the ones around $u$. We write $C_i^u$ to indicate the $i$-th cone of a vertex $u$. For ease of exposition, we only consider point sets in general position: no two points lie on a line parallel to one of the rays that define the cones, no two points lie on a line perpendicular to the bisector of a cone, and no three points are collinear.
|
| 47 |
+
---PAGE_BREAK---
|
| 48 |
+
|
| 49 |
+
Let vertex $u$ be an endpoint of a constraint $c$ and let the other endpoint $v$ lie in cone $C_i^u$. The lines through all such constraints $c$ split $C_i^u$ into several subcones. We use $C_{i,j}^u$ to denote the $j$-th subcone of $C_i^u$. When a constraint $c = (u, v)$ splits a cone of $u$ into two subcones, we define $v$ to lie in both of these subcones. We consider a cone that is not split to be a single subcone.
|
| 50 |
+
|
| 51 |
+
We now introduce the constrained $\theta_{(4k+x)}$-graph: for each subcone $C_{i,j}$ of each vertex $u$, add an edge from $u$ to the closest vertex in that subcone that can see $u$, where distance is measured along the bisector of the original cone (not the subcone). More formally, we add an edge between two vertices $u$ and $v$ if $v$ can see $u$, $v \in C_{i,j}^u$, and for all points $w \in C_{i,j}^u$ that can see $u$, $|uv'| \le |uw'|$, where $v'$ and $w'$ denote the projection of $v$ and $w$ on the bisector of $C_i^u$ and $|xy|$ denotes the length of the line segment between two points $x$ and $y$. Note that our assumption of general position implies that each vertex adds at most one edge for each of its subcones.
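The visibility test underlying the constrained graph reduces to checking proper segment intersections. The following is a minimal sketch of that check using the standard orientation (cross-product) predicate; it assumes integer or exact coordinates, and the function names are this sketch's own.

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc (> 0 for a left turn)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_intersect(p1, p2, q1, q2):
    """True iff the segments p1p2 and q1q2 cross in their interiors."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    if 0 in (d1, d2, d3, d4):
        return False  # touching or collinear: not a proper intersection
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def can_see(u, v, constraints):
    """u and v see each other iff uv is itself a constraint or uv
    properly intersects no constraint."""
    if (u, v) in constraints or (v, u) in constraints:
        return True
    return not any(properly_intersect(u, v, a, b) for a, b in constraints)

print(can_see((0, 0), (2, 2), [((0, 2), (2, 0))]))  # False: the constraint blocks uv
```

With this predicate, the constrained edge rule amounts to restricting the per-subcone minimization to the vertices for which `can_see` holds.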
|
| 52 |
+
|
| 53 |
+
Given a vertex $w$ in the cone $C_i$ of vertex $u$, we define the canonical triangle $T_{uw}$ to be the triangle defined by the borders of $C_i^u$ and the line through $w$ perpendicular to the bisector of $C_i^u$. Note that subcones do not define canonical triangles. We use $m$ to denote the midpoint of the side of $T_{uw}$ opposing $u$ and $\alpha$ to denote the unsigned angle between $uw$ and $um$ (see Figure 1). Note that for any pair of vertices $u$ and $w$, there exist two canonical triangles: $T_{uw}$ and $T_{wu}$. We say that a region is empty if it does not contain any vertex of $P$.
|
| 54 |
+
|
| 55 |
+
Fig. 1. The canonical triangle $T_{uw}$
|
| 56 |
+
|
| 57 |
+
## 3 Some Useful Lemmas
|
| 58 |
+
|
| 59 |
+
In this section, we list a number of lemmas that are used when bounding the spanning ratio of the various graphs. Note that these lemmas are not new, as they are already used in [2,5], though some are expanded to work for all four families of constrained $\theta$-graphs. We start with a nice property of visibility graphs from [2].
|
| 60 |
+
|
| 61 |
+
**Lemma 1.** Let $u, v$, and $w$ be three arbitrary points in the plane such that $uw$ and $vw$ are visibility edges and $w$ is not the endpoint of a constraint intersecting the interior of triangle uvw. Then there exists a convex chain of visibility edges from $u$ to $v$ in triangle uvw, such that the polygon defined by $uw, vw$ and the convex chain is empty and does not contain any constraints.
|
| 62 |
+
|
| 63 |
+
Fig. 2. The convex chain between vertices $u$ and $v$, where thick lines are visibility edges
|
| 64 |
+
---PAGE_BREAK---
|
| 65 |
+
|
| 66 |
+
Next, we use two lemmas from [5] to bound the length of certain line segments. Note that Lemma 2 is extended such that it also holds for the constrained $\theta_{(4k+2)}$-graph. We use $\angle xyz$ to denote the smaller angle between line segments $xy$ and $yz$.
|
| 67 |
+
|
| 68 |
+
**Lemma 2.** Let $u, v$ and $w$ be three vertices in the $\theta_{(4k+x)}$-graph, $x \in \{2, 3, 4, 5\}$, such that $w \in C_0^u$ and $v \in T_{uw}$, to the left of $uw$. Let $a$ be the intersection of the side of $T_{uw}$ opposite $u$ and the left boundary of $C_0^v$. Let $C_i^v$ denote the cone of $v$ that contains $w$ and let $c$ and $d$ be the upper and lower corner of $T_{vw}$. If $1 \le i \le k-1$, or $i=k$ and $|cw| \le |dw|$, then $\max\{|vc| + |cw|, |vd| + |dw|\} \le |va| + |aw|$ and $\max\{|cw|, |dw|\} \le |aw|$.
|
| 69 |
+
|
| 70 |
+
Fig. 3. The situation where we apply Lemma 2
|
| 71 |
+
|
| 72 |
+
Fig. 4. The situation where we apply Lemma 3
|
| 73 |
+
|
| 74 |
+
**Lemma 3.** Let $u, v$ and $w$ be three vertices in the $\theta_{(4k+x)}$-graph, $x \in \{2, 3, 4, 5\}$, such that $w \in C_0^u$, $v \in T_{uw}$ to the left of $uw$, and $w \notin C_0^v$. Let $a$ be the intersection of the side of $T_{uw}$ opposite $u$ and the line through $v$ parallel to the left boundary of $T_{uw}$. Let $y$ and $z$ be the corners of $T_{vw}$ opposite to $v$. Let $\beta = \angle awv$ and let $\gamma$ be the unsigned angle between $vw$ and the bisector of $T_{vw}$. Let $c$ be a positive constant. If $c \ge \frac{\cos\gamma - \sin\beta}{\cos(\frac{\theta}{2}-\beta) - \sin(\frac{\theta}{2}+\gamma)}$, then $|vp| + c \cdot |pw| \le |va| + c \cdot |aw|$, where $p$ is $y$ if $|yw| \ge |zw|$ and $z$ if $|yw| < |zw|$.
|
| 75 |
+
|
| 76 |
+
## 4 Constrained $\theta_{(4k+2)}$-Graph
|
| 77 |
+
|
| 78 |
+
In this section we prove that the constrained $\theta_{(4k+2)}$-graph has spanning ratio at most $1+2\cdot\sin(\theta/2)$. Since this is also a lower bound [1], this proves that this spanning ratio is tight.
|
| 79 |
+
---PAGE_BREAK---
|
| 80 |
+
|
| 81 |
+
**Theorem 1.** Let $u$ and $w$ be two vertices in the plane such that $u$ can see $w$. Let $m$ be the midpoint of the side of $T_{uw}$ opposing $u$ and let $\alpha$ be the unsigned angle between $uw$ and $um$. There exists a path connecting $u$ and $w$ in the constrained $\theta_{(4k+2)}$-graph of length at most
|
| 82 |
+
|
| 83 |
+
$$ \left( \left( \frac{1 + \sin\left(\frac{\theta}{2}\right)}{\cos\left(\frac{\theta}{2}\right)} \right) \cdot \cos\alpha + \sin\alpha \right) \cdot |uw|. $$
|
| 84 |
+
|
| 85 |
+
*Proof.* We assume without loss of generality that $w \in C_0^u$. We prove the theorem by induction on the area of $T_{uw}$. Formally, we perform induction on the rank, when ordered by area, of the triangles $T_{xy}$ for all pairs of vertices $x$ and $y$ that can see each other. Let $a$ and $b$ be the upper left and right corner of $T_{uw}$, and let $A$ and $B$ be the triangles $uaw$ and $ubw$ (see Figure 5).
|
| 86 |
+
|
| 87 |
+
Our inductive hypothesis is the following, where $\delta(u, w)$ denotes the length of the shortest path from $u$ to $w$ in the constrained $\theta_{(4k+2)}$-graph:
|
| 88 |
+
|
| 89 |
+
- If $A$ is empty, then $\delta(u, w) \le |ub| + |bw|$.
|
| 90 |
+
|
| 91 |
+
- If $B$ is empty, then $\delta(u, w) \le |ua| + |aw|$.
|
| 92 |
+
|
| 93 |
+
- If neither $A$ nor $B$ is empty, then $\delta(u, w) \le \max\{|ua| + |aw|, |ub| + |bw|\}$.
|
| 94 |
+
|
| 95 |
+
We first show that this induction hypothesis implies the theorem: $|um| = |uw| \cdot \cos\alpha$, $|mw| = |uw| \cdot \sin\alpha$, $|am| = |bm| = |uw| \cdot \cos\alpha \cdot \tan(\theta/2)$, and $|ua| = |ub| = |uw| \cdot \cos\alpha / \cos(\theta/2)$. Thus the induction hypothesis gives that $\delta(u, w)$ is at most $|uw| \cdot \left( \frac{1 + \sin(\theta/2)}{\cos(\theta/2)} \cdot \cos\alpha + \sin\alpha \right)$.
|
| 96 |
+
|
| 97 |
+
**Base case:** $T_{uw}$ has rank 1. Since the triangle is a smallest triangle, $w$ is the closest vertex to $u$ in that cone. Hence the edge $(u, w)$ is part of the constrained $\theta_{(4k+2)}$-graph, and $\delta(u, w) = |uw|$. From the triangle inequality, we have $|uw| \le \min\{|ua| + |aw|, |ub| + |bw|\}$, so the induction hypothesis holds.
|
| 98 |
+
|
| 99 |
+
**Induction step:** We assume that the induction hypothesis holds for all pairs of vertices that can see each other and have a canonical triangle whose area is smaller than the area of $T_{uw}$.
|
| 100 |
+
|
| 101 |
+
If $(u, w)$ is an edge in the constrained $\theta_{(4k+2)}$-graph, the induction hypothesis follows by the same argument as in the base case. If there is no edge between $u$ and $w$, let $v_0$ be the vertex closest to $u$ in the sub-cone of $u$ that contains $w$, and let $a_0$ and $b_0$ be the upper left and right corner of $T_{uv_0}$ (see Figure 5). By definition, $\delta(u, w) \le |uv_0| + \delta(v_0, w)$, and by the triangle inequality, $|uv_0| \le \min\{|ua_0| + |a_0v_0|, |ub_0| + |b_0v_0|\}$. We assume without loss of generality that $v_0$ lies to the left of $uw$, which means that $A$ is not empty.
|
| 102 |
+
|
| 103 |
+
Since $uw$ and $uv_0$ are visibility edges, by applying Lemma 1 to triangle $v_0uw$, a convex chain $v_0, ..., v_l = w$ of visibility edges
|
| 104 |
+
|
| 105 |
+
Fig. 5. A convex chain from $v_0$ to $w$
|
| 106 |
+
---PAGE_BREAK---
|
| 107 |
+
|
| 108 |
+
connecting $v_0$ and $w$ exists (see Figure 5). Note that, since $v_0$ is the closest visible vertex to $u$, every vertex along the convex chain lies above the horizontal line through $v_0$.
|
| 109 |
+
|
| 110 |
+
We now look at two consecutive vertices $v_{j-1}$ and $v_j$ along the convex chain. There are four types of configurations (see Figure 6): (i) $v_j \in C_k^{v_{j-1}}$, (ii) $v_j \in C_i^{v_{j-1}}$ where $1 \le i < k$, (iii) $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, (iv) $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the left of $v_{j-1}$. By convexity, the direction of $\overrightarrow{v_j v_{j+1}}$ is rotating counterclockwise for increasing $j$. Thus, these configurations occur in the order Type (i), Type (ii), Type (iii), Type (iv) along the convex chain from $v_0$ to $w$. We bound $\delta(v_{j-1}, v_j)$ as follows:
|
| 111 |
+
|
| 112 |
+
**Type (i):** If $v_j \in C_k^{v_{j-1}}$, let $a_j$ and $b_j$ be the upper and lower left corner of $T_{v_j v_{j-1}}$ and let $B_j = v_{j-1} b_j v_j$. Note that since $v_j \in C_k^{v_{j-1}}$, $a_j$ is also the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$. Triangle $B_j$ lies between the convex chain and $uw$, so it must be empty. Since $v_j$ can see $v_{j-1}$ and $T_{v_j v_{j-1}}$ has smaller area than $T_{uw}$, the induction hypothesis gives that $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1} a_j| + |a_j v_j|$.
|
| 113 |
+
|
| 114 |
+
Fig. 6. The four types of configurations
|
| 115 |
+
|
| 116 |
+
**Type (ii):** If $v_j \in C_i^{v_{j-1}}$ where $1 \le i < k$, let $c$ and $d$ be the upper and lower right corner of $T_{v_{j-1} v_j}$. Let $a_j$ be the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$. Since $v_j$ can see $v_{j-1}$ and $T_{v_{j-1} v_j}$ has smaller area than $T_{uw}$, the induction hypothesis gives that $\delta(v_{j-1}, v_j)$ is at most $\max\{|v_{j-1} c| + |cv_j|, |v_{j-1} d| + |dv_j|\}$. Since $v_j \in C_i^{v_{j-1}}$ where $1 \le i < k$, we can apply Lemma 2 (where $v$, $w$, and $a$ from Lemma 2 are $v_{j-1}$, $v_j$, and $a_j$), which gives us that $\max\{|v_{j-1} c| + |cv_j|, |v_{j-1} d| + |dv_j|\} \le |v_{j-1} a_j| + |a_j v_j|$.
|
| 117 |
+
|
| 118 |
+
**Type (iii):** If $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, let $a_j$ and $b_j$ be the left and right corner of $T_{v_{j-1} v_j}$ and let $A_j = v_{j-1} a_j v_j$ and $B_j = v_{j-1} b_j v_j$. Since $v_j$ can see $v_{j-1}$ and $T_{v_{j-1} v_j}$ has smaller area than $T_{uw}$, we can apply the induction hypothesis. Regardless of whether $A_j$ and $B_j$ are empty or not, $\delta(v_{j-1}, v_j)$ is at most $\max\{|v_{j-1} a_j| + |a_j v_j|, |v_{j-1} b_j| + |b_j v_j|\}$. Since $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, we know that $|v_{j-1} a_j| + |a_j v_j| \ge |v_{j-1} b_j| + |b_j v_j|$, so $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1} a_j| + |a_j v_j|$.
|
| 119 |
+
|
| 120 |
+
**Type (iv):** If $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the left of $v_{j-1}$, let $a_j$ and $b_j$ be the left and right corner of $T_{v_{j-1} v_j}$ and let $A_j = v_{j-1} a_j v_j$ and $B_j = v_{j-1} b_j v_j$. Since $v_j$ can see $v_{j-1}$ and $T_{v_{j-1} v_j}$ has smaller area than $T_{uw}$, we can apply the
|
| 121 |
+
---PAGE_BREAK---
|
| 122 |
+
|
| 123 |
+
Fig. 7. Visualization of the paths (thick lines) in the inequalities of case (c)
|
| 124 |
+
|
| 125 |
+
induction hypothesis. Thus, if $B_j$ is empty, $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1}a_j| + |a_jv_j|$ and if $B_j$ is not empty, $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1}b_j| + |b_jv_j|$.
|
| 126 |
+
|
| 127 |
+
To complete the proof, we consider three cases: (a) $\angle awu \le \pi/2$, (b) $\angle awu > \pi/2$ and B is empty, (c) $\angle awu > \pi/2$ and B is not empty.
|
| 128 |
+
|
| 129 |
+
**Case (a):** If $\angle awu \le \pi/2$, the convex chain cannot contain any Type (iv) configurations: for Type (iv) configurations to occur, $v_j$ needs to lie to the left of $v_{j-1}$. However, by construction, $v_j$ lies on or to the right of the line through $v_{j-1}$ and w. Hence, since $\angle awv_{j-1} < \angle awu \le \pi/2$, $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$. We can now bound $\delta(u, w)$ by using these bounds: $\delta(u, w) \le |uv_0| + \sum_{j=1}^l \delta(v_{j-1}, v_j) \le |ua_0| + |a_0v_0| + \sum_{j=1}^l (|v_{j-1}a_j| + |a_jv_j|) = |ua| + |aw|$.
|
| 130 |
+
|
| 131 |
+
**Case (b):** If $\angle awu > \pi/2$ and B is empty, the convex chain can contain Type (iv) configurations. However, since B is empty and the area between the convex chain and *uw* is empty (by Lemma 1), all $B_j$ are also empty. Using the computed bounds on the lengths of the paths between the points along the convex chain, we can bound $\delta(u, w)$ as in the previous case.
|
| 132 |
+
|
| 133 |
+
**Case (c):** If $\angle awu > \pi/2$ and $B$ is not empty, the convex chain can contain Type (iv) configurations and since $B$ is not empty, the triangles $B_j$ need not be empty. Recall that $v_0$ lies in $A$, hence neither $A$ nor $B$ is empty. Therefore, it suffices to prove that $\delta(u, w) \le \max\{|ua| + |aw|, |ub| + |bw|\} = |ub| + |bw|$. Let $T_{v_{j'}v_{j'+1}}$ be the first Type (iv) configuration along the convex chain (if it has any), let $a'$ and $b'$ be the upper left and right corners of $T_{uv_{j'}}$, and let $b''$ be the upper right corner of $T_{v_{j'}w}$. We now have that $\delta(u, w) \le |uv_0| + \sum_{j=1}^l \delta(v_{j-1}, v_j) \le |ua'| + |a'v_{j'}| + |v_{j'}b''| + |b''w| \le |ub| + |bw|$ (see Figure 7). $\square$
|
| 134 |
+
|
| 135 |
+
Since $\left(\frac{(1+\sin(\theta/2))}{\cos(\theta/2)}\right) \cdot \cos\alpha + \sin\alpha$ is increasing for $\alpha \in [0, \theta/2]$, for $\theta \le \pi/3$, it is maximized when $\alpha = \theta/2$, and we obtain the following corollary:
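This maximization step can be checked numerically: sampling the bound of Theorem 1 over $\alpha \in [0, \theta/2]$ confirms that its maximum, attained at $\alpha = \theta/2$, simplifies to $1 + 2\sin(\theta/2)$. This is only a numerical sanity check of the stated identity, not part of the proof.

```python
import math

def bound(alpha, theta):
    """Right-hand side of Theorem 1, divided by |uw|."""
    return (1 + math.sin(theta / 2)) / math.cos(theta / 2) * math.cos(alpha) + math.sin(alpha)

for k in (1, 2, 3):
    theta = 2 * math.pi / (4 * k + 2)
    # sample alpha over [0, theta/2]
    worst = max(bound(i * theta / 400, theta) for i in range(201))
    target = 1 + 2 * math.sin(theta / 2)
    assert abs(worst - target) < 1e-9
    print(f"m = {4*k+2}: spanning ratio <= {target:.4f}")
```

At $\alpha = \theta/2$ the first term becomes $1 + \sin(\theta/2)$ and the second $\sin(\theta/2)$, which is exactly the corollary's ratio.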
|
| 136 |
+
|
| 137 |
+
**Corollary 1.** The constrained $\theta_{(4k+2)}$-graph is a $(1+2\cdot\sin(\frac{\theta}{2}))$-spanner of Vis($P, S$).
|
| 138 |
+
---PAGE_BREAK---
|
| 139 |
+
|
| 140 |
+
## 5 Generic Framework for the Spanning Proof
|
| 141 |
+
|
| 142 |
+
Next, we modify the spanning proof from the previous section and provide a generic framework for the spanning proof for the other three families of θ-graphs. After providing this framework, we fill in the blanks for the individual families.
|
| 145 |
+
|
| 146 |
+
**Theorem 2.** Let $u$ and $w$ be two vertices in the plane such that $u$ can see $w$. Let $m$ be the midpoint of the side of $T_{uw}$ opposing $u$ and let $\alpha$ be the unsigned angle between $uw$ and $um$. There exists a path connecting $u$ and $w$ in the constrained $\theta_{(4k+x)}$-graph of length at most
|
| 147 |
+
|
| 148 |
+
$$ \left( \frac{\cos \alpha}{\cos\left(\frac{\theta}{2}\right)} + \left( \cos \alpha \cdot \tan\left(\frac{\theta}{2}\right) + \sin \alpha \right) \cdot \mathbf{c} \right) |uw|, $$
|
| 149 |
+
|
| 150 |
+
where $c \ge 1$ is a constant that depends on $x \in \{3, 4, 5\}$. For the constrained $\theta_{(4k+4)}$-graph, $c$ equals $1/(\cos(\theta/2) - \sin(\theta/2))$ and for the constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph, $c$ equals $\cos(\theta/4)/(\cos(\theta/2) - \sin(3\theta/4))$.
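The constants for the three families can be evaluated numerically to confirm the requirement $c \ge 1$. This is a small sanity check with a helper name of this sketch's own choosing; it is not valid for graphs with $4k+2$ cones, which are handled by Theorem 1.

```python
import math

def c_constant(m):
    """Constant c of Theorem 2 for a theta-graph with m cones
    (m of the form 4k+3, 4k+4, or 4k+5; not valid for 4k+2)."""
    theta = 2 * math.pi / m
    if m % 4 == 0:  # 4k + 4 cones
        return 1 / (math.cos(theta / 2) - math.sin(theta / 2))
    # 4k + 3 and 4k + 5 cones
    return math.cos(theta / 4) / (math.cos(theta / 2) - math.sin(3 * theta / 4))

for m in (7, 8, 9, 11, 12, 13):
    c = c_constant(m)
    assert c >= 1  # Theorem 2 requires c >= 1
    print(f"m = {m}: c = {c:.4f}")
```

As $m$ grows, $\theta$ shrinks and $c$ approaches 1, so the bound of Theorem 2 approaches the unconstrained behavior.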
|
| 151 |
+
|
| 152 |
+
*Proof.* We prove the theorem by induction on the area of $T_{uw}$. Formally, we perform induction on the rank, when ordered by area, of the triangles $T_{xy}$ for all pairs of vertices $x$ and $y$ that can see each other. We assume without loss of generality that $w \in C_0^u$. Let $a$ and $b$ be the upper left and right corner of $T_{uw}$ (see Figure 5).
|
| 153 |
+
|
| 154 |
+
Our inductive hypothesis is the following, where $\delta(u, w)$ denotes the length of the shortest path from $u$ to $w$ in the constrained $\theta_{(4k+x)}$-graph: $\delta(u, w) \le \max\{|ua| + |aw| \cdot c, |ub| + |bw| \cdot c\}$.
|
| 155 |
+
|
| 156 |
+
We first show that this induction hypothesis implies the theorem. Basic trigonometry gives us the following equalities: $|um| = |uw| \cdot \cos \alpha$, $|mw| = |uw| \cdot \sin \alpha$, $|am| = |bm| = |uw| \cdot \cos \alpha \cdot \tan(\theta/2)$, and $|ua| = |ub| = |uw| \cdot \cos \alpha / \cos(\theta/2)$. Thus the induction hypothesis gives that $\delta(u, w)$ is at most $|uw| \cdot (\cos \alpha / \cos(\theta/2) + (\cos \alpha \cdot \tan(\theta/2) + \sin \alpha) \cdot c)$.
**Base case:** $T_{uw}$ has rank 1. Since the triangle is a smallest triangle, $w$ is the closest vertex to $u$ in that cone. Hence the edge $(u, w)$ is part of the constrained $\theta_{(4k+x)}$-graph, and $\delta(u, w) = |uw|$. From the triangle inequality and the fact that $c \ge 1$, we have $|uw| \le \min\{|ua| + |aw| \cdot c, |ub| + |bw| \cdot c\}$, so the induction hypothesis holds.
**Induction step:** We assume that the induction hypothesis holds for all pairs of vertices that can see each other and have a canonical triangle whose area is smaller than the area of $T_{uw}$.
If $(u, w)$ is an edge in the constrained $\theta_{(4k+x)}$-graph, the induction hypothesis follows by the same argument as in the base case. If there is no edge between $u$ and $w$, let $v_0$ be the vertex closest to $u$ in the subcone of $u$ that contains $w$, and let $a_0$ and $b_0$ be the upper left and right corner of $T_{uv_0}$ (see Figure 5). By definition, $\delta(u, w) \le |uv_0| + \delta(v_0, w)$, and by the triangle inequality, $|uv_0| \le \min\{|ua_0| + |a_0v_0|, |ub_0| + |b_0v_0|\}$. We assume without loss of generality that $v_0$ lies to the left of $uw$.
Since *uw* and *uv*₀ are visibility edges, by applying Lemma 1 to triangle *v*₀*uw*, a convex chain *v*₀, ..., *v*ₙ = *w* of visibility edges connecting *v*₀ and *w* exists (see Figure 5). Note that, since *v*₀ is the closest visible vertex to *u*, every vertex along the convex chain lies above the horizontal line through *v*₀.
We now look at two consecutive vertices $v_{j-1}$ and $v_j$ along the convex chain. When $v_j \notin C_0^{v_{j-1}}$, let $c$ and $d$ be the upper and lower right corner of $T_{v_{j-1}v_j}$. We distinguish four types of configurations: (i) $v_j \in C_i^{v_{j-1}}$ where $i > k$, or $i = k$ and $|cw| > |dw|$, (ii) $v_j \in C_i^{v_{j-1}}$ where $1 \le i \le k - 1$, or $i = k$ and $|cw| \le |dw|$, (iii) $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, (iv) $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the left of $v_{j-1}$. By convexity, the direction of $\overrightarrow{v_j v_{j+1}}$ is rotating counterclockwise for increasing $j$. Thus, these configurations occur in the order Type (i), Type (ii), Type (iii), Type (iv) along the convex chain from $v_0$ to $w$. We bound $\delta(v_{j-1}, v_j)$ as follows:
**Type (i):** $v_j \in C_i^{v_{j-1}}$ where $i > k$, or $i = k$ and $|cw| > |dw|$. Since $v_j$ can see $v_{j-1}$ and $T_{v_j v_{j-1}}$ has smaller area than $T_{uw}$, the induction hypothesis gives that $\delta(v_{j-1}, v_j)$ is at most $\max\{|v_{j-1}c| + |cv_j| \cdot c, |v_{j-1}d| + |dv_j| \cdot c\}$.
Let $a_j$ be the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$. We aim to show that $\max\{|v_{j-1}c| + |cv_j| \cdot c, |v_{j-1}d| + |dv_j| \cdot c\} \le |v_{j-1}a_j| + |a_jv_j| \cdot c$. We use Lemma 3 to do this. However, since the precise application of this lemma depends on the family of $\theta$-graphs and determines the value of $c$, this case is discussed in the spanning proofs of the three families.
**Type (ii):** $v_j \in C_i^{v_{j-1}}$ where $1 \le i \le k - 1$, or $i = k$ and $|cw| \le |dw|$. Since $v_j$ can see $v_{j-1}$ and $T_{v_j v_{j-1}}$ has smaller area than $T_{uw}$, the induction hypothesis gives that $\delta(v_{j-1}, v_j)$ is at most $\max\{|v_{j-1}c| + |cv_j| \cdot c, |v_{j-1}d| + |dv_j| \cdot c\}$.
Let $a_j$ be the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$. Since $v_j \in C_i^{v_{j-1}}$ where $1 \le i \le k - 1$, or $i = k$ and $|cw| \le |dw|$, we can apply Lemma 2 in this case (where $v, w$, and $a$ from Lemma 2 are $v_{j-1}, v_j$, and $a_j$) and we get that $\max\{|v_{j-1}c| + |cv_j|, |v_{j-1}d| + |dv_j|\} \le |v_{j-1}a_j| + |a_jv_j|$ and $\max\{|cv_j|, |dv_j|\} \le |a_jv_j|$. Since $c \ge 1$, this implies that $\max\{|v_{j-1}c| + |cv_j| \cdot c, |v_{j-1}d| + |dv_j| \cdot c\} \le |v_{j-1}a_j| + |a_jv_j| \cdot c$.
**Type (iii):** If $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, let $a_j$ and $b_j$ be the left and right corner of $T_{v_{j-1}v_j}$. Since $v_j$ can see $v_{j-1}$ and $T_{v_{j-1}v_j}$ has smaller area than $T_{uw}$, we can apply the induction hypothesis. Thus, since $v_j$ lies to the right of or has the same x-coordinate as $v_{j-1}$, $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1}a_j| + |a_jv_j| \cdot c$.
**Type (iv):** If $v_j \in C_0^{v_{j-1}}$ and $v_j$ lies to the left of $v_{j-1}$, let $a_j$ and $b_j$ be the left and right corner of $T_{v_{j-1}v_j}$. Since $v_j$ can see $v_{j-1}$ and $T_{v_{j-1}v_j}$ has smaller area than $T_{uw}$, we can apply the induction hypothesis. Thus, since $v_j$ lies to the left of $v_{j-1}$, $\delta(v_{j-1}, v_j)$ is at most $|v_{j-1}b_j| + |b_jv_j| \cdot c$.
To complete the proof, we consider two cases: (a) $\angle awu \le \frac{\pi}{2}$, (b) $\angle awu > \frac{\pi}{2}$.
**Case (a):** We need to prove that $\delta(u, w) \le \max\{|ua| + |aw| \cdot c, |ub| + |bw| \cdot c\} = |ua| + |aw| \cdot c$. We first show that the convex chain cannot contain any Type (iv) configurations: for a Type (iv) configuration to occur, $v_j$ needs to lie to the left of $v_{j-1}$. However, by construction, $v_j$ lies on or to the right of the line through $v_{j-1}$ and $w$. Hence, since $\angle awv_{j-1} < \angle awu \le \pi/2$, $v_j$ lies to the right of $v_{j-1}$. We can now bound $\delta(u, w)$ using these bounds: $\delta(u, w) \le |uv_0| + \sum_{j=1}^l \delta(v_{j-1}, v_j) \le |ua_0| + |a_0v_0| + \sum_{j=1}^l (|v_{j-1}a_j| + |a_jv_j| \cdot c) \le |ua| + |aw| \cdot c$.
**Case (b):** If $\angle awu > \pi/2$, the convex chain can contain Type (iv) configurations. We need to prove that $\delta(u, w) \le \max\{|ua|+|aw| \cdot c, |ub|+|bw| \cdot c\} = |ub|+|bw| \cdot c$. Let $T_{v_{j'}v_{j'+1}}$ be the first Type (iv) configuration along the convex chain (if it has any), let $a'$ and $b'$ be the upper left and right corner of $T_{uv_{j'}}$, and let $b''$ be the upper right corner of $T_{v_{j'}w}$. We now have that $\delta(u, w) \le |uv_0| + \sum_{j=1}^l \delta(v_{j-1}, v_j) \le |ua'| + |a'v_{j'}| \cdot c + |v_{j'}b''| + |b''w| \cdot c \le |ub| + |bw| \cdot c$ (see Figure 7). $\square$
# 6 The Constrained $\theta_{(4k+4)}$-Graph
In this section we complete the proof of Theorem 2 for the constrained $\theta_{(4k+4)}$-graph.
**Theorem 3.** Let $u$ and $w$ be two vertices in the plane such that $u$ can see $w$. Let $m$ be the midpoint of the side of $T_{uw}$ opposite $u$ and let $\alpha$ be the unsigned angle between $uw$ and $um$. There exists a path connecting $u$ and $w$ in the constrained $\theta_{(4k+4)}$-graph of length at most
$$ \left( \frac{\cos \alpha}{\cos\left(\frac{\theta}{2}\right)} + \frac{\cos \alpha \tan\left(\frac{\theta}{2}\right) + \sin \alpha}{\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{\theta}{2}\right)} \right) \cdot |uw|. $$
*Proof.* We apply Theorem 2 using $\mathbf{c} = 1/(\cos(\theta/2) - \sin(\theta/2))$. The assumptions made in Theorem 2 still apply. It remains to show that for the Type (i) configurations, we have that $\max\{|v_{j-1}c| + |cv_j| \cdot \mathbf{c}, |v_{j-1}d| + |dv_j| \cdot \mathbf{c}\} \le |v_{j-1}a_j| + |a_jv_j| \cdot \mathbf{c}$, where $c$ and $d$ are the upper and lower right corner of $T_{v_{j-1}v_j}$ and $a_j$ is the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$.
We distinguish two cases: (a) $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, (b) $v_j \in C_{k+1}^{v_{j-1}}$. Let $\beta$ be $\angle a_j v_j v_{j-1}$ and let $\gamma$ be the angle between $v_j v_{j-1}$ and the bisector of $T_{v_{j-1}v_j}$.
Case (a): When $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le |v_{j-1}c| + |cv_j| \cdot c$. We note that $\gamma = \theta - \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(\theta - \beta) - \sin\beta)/(\cos(\theta/2 - \beta) - \sin(3\theta/2 - \beta))$. As this function is decreasing in $\beta$ for $\theta/2 \le \beta \le \theta$, it is maximized when $\beta$ equals $\theta/2$. Hence $c$ needs to be at least $(\cos(\theta/2) - \sin(\theta/2))/(1 - \sin\theta)$, which can be rewritten to $1/(\cos(\theta/2) - \sin(\theta/2))$.
Case (b): When $v_j \in C_{k+1}^{v_{j-1}}$, $v_j$ lies above the bisector of $T_{v_{j-1}v_j}$ and the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le |v_{j-1}d| + |dv_j| \cdot c$. We note that $\gamma = \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos\beta - \sin\beta)/(\cos(\theta/2 - \beta) - \sin(\theta/2 + \beta))$. As this function is decreasing in $\beta$ for $0 \le \beta \le \theta/2$, it is maximized when $\beta$ equals 0. Hence $c$ needs to be at least $1/(\cos(\theta/2) - \sin(\theta/2))$. $\square$
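The two lower bounds on $c$ derived in Cases (a) and (b) can be checked numerically. The snippet below is an illustrative verification for $k = 1$ (so $\theta = \pi/4$), not part of the proof; the helper names are ad hoc.

```python
import math

# Ad hoc check that c = 1/(cos(t/2) - sin(t/2)) dominates the lower bounds
# from Cases (a) and (b) over their beta ranges, for theta = 2*pi/8.
theta = 2 * math.pi / 8
c = 1 / (math.cos(theta / 2) - math.sin(theta / 2))

def case_a(beta):   # requirement from Case (a), theta/2 <= beta <= theta
    return ((math.cos(theta - beta) - math.sin(beta))
            / (math.cos(theta / 2 - beta) - math.sin(3 * theta / 2 - beta)))

def case_b(beta):   # requirement from Case (b), 0 <= beta <= theta/2
    return ((math.cos(beta) - math.sin(beta))
            / (math.cos(theta / 2 - beta) - math.sin(theta / 2 + beta)))

assert all(case_a(theta / 2 + i / 100 * theta / 2) <= c + 1e-9 for i in range(101))
assert all(case_b(i / 100 * theta / 2) <= c + 1e-9 for i in range(101))
```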
Since $\cos \alpha / \cos(\theta/2) + (\cos \alpha \cdot \tan(\theta/2) + \sin \alpha)/(\cos(\theta/2) - \sin(\theta/2))$ is increasing in $\alpha$ on $[0, \theta/2]$ for $\theta \le \pi/4$, it is maximized when $\alpha = \theta/2$, and we obtain the following corollary:
**Corollary 2.** The constrained $\theta_{(4k+4)}$-graph is a $\left(1 + \frac{2 \cdot \sin\left(\frac{\theta}{2}\right)}{\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{\theta}{2}\right)}\right)$-spanner of Vis($P, S$).
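As an illustrative numeric check (not from the paper; the helper `bound` and the choice $k = 1$ are ad hoc), one can confirm that the Theorem 3 bound is increasing in $\alpha$ and that its value at $\alpha = \theta/2$ matches the corollary:

```python
import math

# Ad hoc check: the Theorem 3 bound, maximized at alpha = theta/2,
# equals the Corollary 2 spanning ratio (here k = 1, so theta = 2*pi/8).
theta = 2 * math.pi / 8

def bound(alpha):
    c = 1.0 / (math.cos(theta / 2) - math.sin(theta / 2))
    return (math.cos(alpha) / math.cos(theta / 2)
            + (math.cos(alpha) * math.tan(theta / 2) + math.sin(alpha)) * c)

corollary = 1 + 2 * math.sin(theta / 2) / (math.cos(theta / 2) - math.sin(theta / 2))
assert abs(bound(theta / 2) - corollary) < 1e-12
samples = [bound(i / 100 * theta / 2) for i in range(101)]
assert all(x <= y + 1e-12 for x, y in zip(samples, samples[1:]))  # increasing in alpha
```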
# 7 The Constrained $\theta_{(4k+3)}$-Graph and $\theta_{(4k+5)}$-Graph
In this section we complete the proof of Theorem 2 for the constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph.
**Theorem 4.** Let $u$ and $w$ be two vertices in the plane such that $u$ can see $w$. Let $m$ be the midpoint of the side of $T_{uw}$ opposite $u$ and let $\alpha$ be the unsigned angle between $uw$ and $um$. There exists a path connecting $u$ and $w$ in the constrained $\theta_{(4k+3)}$-graph of length at most
$$ \left( \frac{\cos \alpha}{\cos \left(\frac{\theta}{2}\right)} + \frac{\left(\cos \alpha \cdot \tan \left(\frac{\theta}{2}\right) + \sin \alpha\right) \cdot \cos \left(\frac{\theta}{4}\right)}{\cos \left(\frac{\theta}{2}\right) - \sin \left(\frac{3\theta}{4}\right)} \right) \cdot |uw|. $$
*Proof.* We apply Theorem 2 using $\mathbf{c} = \cos(\theta/4)/(\cos(\theta/2) - \sin(3\theta/4))$. The assumptions made in Theorem 2 still apply. It remains to show that for the Type (i) configurations, we have that $\max\{|v_{j-1}c| + |cv_j| \cdot \mathbf{c}, |v_{j-1}d| + |dv_j| \cdot \mathbf{c}\} \le |v_{j-1}a_j| + |a_jv_j| \cdot \mathbf{c}$, where $c$ and $d$ are the upper and lower right corner of $T_{v_{j-1}v_j}$ and $a_j$ is the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$.
We distinguish two cases: (a) $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, (b) $v_j \in C_{k+1}^{v_{j-1}}$. Let $\beta$ be $\angle a_j v_j v_{j-1}$ and let $\gamma$ be the angle between $v_j v_{j-1}$ and the bisector of $T_{v_{j-1}v_j}$.
Case (a): When $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le |v_{j-1}c| + |cv_j| \cdot c$. We note that $\gamma = 3\theta/4 - \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(3\theta/4 - \beta) - \sin\beta)/(\cos(\theta/2 - \beta) - \sin(5\theta/4 - \beta))$. As this function is decreasing in $\beta$ for $\theta/4 \le \beta \le 3\theta/4$, it is maximized when $\beta$ equals $\theta/4$. Hence $c$ needs to be at least $(\cos(\theta/2) - \sin(\theta/4))/(\cos(\theta/4) - \sin\theta)$, which is equal to $\cos(\theta/4)/(\cos(\theta/2) - \sin(3\theta/4))$.
Case (b): When $v_j \in C_{k+1}^{v_{j-1}}$, $v_j$ lies above the bisector of $T_{v_{j-1}v_j}$ and the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le |v_{j-1}d| + |dv_j| \cdot c$. We note that $\gamma = \theta/4 + \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(\theta/4 + \beta) - \sin\beta)/(\cos(\theta/2 - \beta) - \sin(3\theta/4 + \beta))$, which is equal to $\cos(\theta/4)/(\cos(\theta/2) - \sin(3\theta/4))$. $\square$
**Theorem 5.** Let $u$ and $w$ be two vertices in the plane such that $u$ can see $w$. Let $m$ be the midpoint of the side of $T_{uw}$ opposite $u$ and let $\alpha$ be the unsigned angle between $uw$ and $um$. There exists a path connecting $u$ and $w$ in the constrained $\theta_{(4k+5)}$-graph of length at most
$$ \left( \frac{\cos \alpha}{\cos \left(\frac{\theta}{2}\right)} + \frac{\left(\cos \alpha \cdot \tan \left(\frac{\theta}{2}\right) + \sin \alpha\right) \cdot \cos \left(\frac{\theta}{4}\right)}{\cos \left(\frac{\theta}{2}\right) - \sin \left(\frac{3\theta}{4}\right)} \right) \cdot |uw|. $$
*Proof.* We apply Theorem 2 using $\mathbf{c} = \cos(\theta/4)/(\cos(\theta/2) - \sin(3\theta/4))$. The assumptions made in Theorem 2 still apply. It remains to show that for the Type (i) configurations, we have that $\max\{|v_{j-1}c| + |cv_j| \cdot \mathbf{c}, |v_{j-1}d| + |dv_j| \cdot \mathbf{c}\} \le |v_{j-1}a_j| + |a_jv_j| \cdot \mathbf{c}$, where $c$ and $d$ are the upper and lower right corner of $T_{v_{j-1}v_j}$ and $a_j$ is the intersection of the left boundary of $C_0^{v_{j-1}}$ and the horizontal line through $v_j$.
We distinguish two cases: (a) $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, (b) $v_j \in C_{k+1}^{v_{j-1}}$. Let $\beta$ be $\angle a_j v_j v_{j-1}$ and let $\gamma$ be the angle between $v_j v_{j-1}$ and the bisector of $T_{v_{j-1}v_j}$.
Case (a): When $v_j \in C_k^{v_{j-1}}$ and $|cw| > |dw|$, the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le |v_{j-1}c| + |cv_j| \cdot c$. We note that $\gamma = 5\theta/4 - \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(5\theta/4 - \beta) - \sin \beta) / (\cos(\theta/2-\beta) - \sin(5\theta/4-\beta))$. As this function is decreasing in $\beta$ for $3\theta/4 \le \beta \le 5\theta/4$, it is maximized when $\beta$ equals $3\theta/4$. Hence $c$ needs to be at least $(\cos(\theta/2) - \sin(3\theta/4)) / (\cos(\theta/4) - \sin \theta)$, which is less than $\cos(\theta/4) / (\cos(\theta/2) - \sin(3\theta/4))$.
Case (b): When $v_j \in C_{k+1}^{v_{j-1}}$, the induction hypothesis for $T_{v_{j-1}v_j}$ gives $\delta(v_{j-1}, v_j) \le \max\{|v_{j-1}c| + |cv_j| \cdot c, |v_{j-1}d| + |dv_j| \cdot c\}$. If $\delta(v_{j-1}, v_j) \le |v_{j-1}c| + |cv_j| \cdot c$, we note that $\gamma = \theta/4 - \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(\theta/4 - \beta) - \sin \beta) / (\cos(\theta/2 - \beta) - \sin(3\theta/4 - \beta))$. As this function is decreasing in $\beta$ for $0 \le \beta \le \theta/4$, it is maximized when $\beta$ equals 0. Hence $c$ needs to be at least $\cos(\theta/4) / (\cos(\theta/2) - \sin(3\theta/4))$.
If $\delta(v_{j-1}, v_j) \le |v_{j-1}d| + |dv_j| \cdot c$, we note that $\gamma = \theta/4 + \beta$. Hence Lemma 3 gives that the inequality holds when $c \ge (\cos(\beta - \theta/4) - \sin \beta) / (\cos(\theta/2 - \beta) - \sin(\theta/4 + \beta))$, which is equal to $\cos(\theta/4) / (\cos(\theta/2) - \sin(3\theta/4))$. $\square$
When looking at two vertices *u* and *w* in the constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph, we notice that when the angle between *uw* and the bisector of $T_{uw}$ is $\alpha$, the angle between *wu* and the bisector of $T_{wu}$ is $\theta/2 - \alpha$. Hence the worst case spanning ratio becomes the minimum of the spanning ratio when looking at $T_{uw}$ and the spanning ratio when looking at $T_{wu}$.
**Theorem 6.** The constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph are
$$
\frac{\cos\left(\frac{\theta}{4}\right)}{\cos\left(\frac{\theta}{2}\right)-\sin\left(\frac{3\theta}{4}\right)}\text{-spanners of } Vis(P, S).
$$
*Proof.* The spanning ratio of the constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph is at most:
$$
\min \left\{
\frac{\cos \alpha}{\cos\left(\frac{\theta}{2}\right)} + \frac{\left(\cos \alpha \cdot \tan\left(\frac{\theta}{2}\right) + \sin \alpha\right) \cdot \cos\left(\frac{\theta}{4}\right)}{\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{3\theta}{4}\right)},\
\frac{\cos\left(\frac{\theta}{2} - \alpha\right)}{\cos\left(\frac{\theta}{2}\right)} + \frac{\left(\cos\left(\frac{\theta}{2} - \alpha\right) \cdot \tan\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{2} - \alpha\right)\right) \cdot \cos\left(\frac{\theta}{4}\right)}{\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{3\theta}{4}\right)}
\right\}
$$
Since $\cos \alpha / \cos(\theta/2) + (\cos \alpha \cdot \tan(\theta/2) + \sin \alpha) \cdot c$ is increasing for $\alpha \in [0, \theta/2]$, for $\theta \le 2\pi/7$, the minimum of these two functions is maximized when the two functions are equal, i.e. when $\alpha = \theta/4$. Thus the constrained $\theta_{(4k+3)}$-graph and $\theta_{(4k+5)}$-graph have spanning ratio at most:
$$ \frac{\cos\left(\frac{\theta}{4}\right)}{\cos\left(\frac{\theta}{2}\right)} + \frac{\left(\cos\left(\frac{\theta}{4}\right) \cdot \tan\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{4}\right)\right) \cdot \cos\left(\frac{\theta}{4}\right)}{\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{3\theta}{4}\right)} = \frac{\cos\left(\frac{\theta}{4}\right) \cdot \cos\left(\frac{\theta}{2}\right)}{\cos\left(\frac{\theta}{2}\right) \cdot \left(\cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{3\theta}{4}\right)\right)} \quad \square $$
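A numeric spot check (illustrative only; the sampling grid and the choice $\theta = 2\pi/7$, i.e. $k = 1$ in the $4k+3$ family, are ad hoc) confirms that the minimum of the two bounds is maximized at $\alpha = \theta/4$, where it equals the Theorem 6 ratio:

```python
import math

# Ad hoc check of Theorem 6 for theta = 2*pi/7: the minimum of the two
# Theorem 4/5 bounds (at alpha and theta/2 - alpha) peaks at alpha = theta/4.
theta = 2 * math.pi / 7
c = math.cos(theta / 4) / (math.cos(theta / 2) - math.sin(3 * theta / 4))

def bound(alpha):
    return (math.cos(alpha) / math.cos(theta / 2)
            + (math.cos(alpha) * math.tan(theta / 2) + math.sin(alpha)) * c)

ratios = [min(bound(a), bound(theta / 2 - a))
          for a in (i / 200 * theta / 2 for i in range(201))]
worst = max(ratios)
assert abs(worst - c) < 1e-6           # the worst case equals the claimed ratio
assert abs(bound(theta / 4) - c) < 1e-9
```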
## References
1. P. Bose, J.-L. De Carufel, P. Morin, A. van Renssen, and S. Verdonschot. Optimal bounds on theta-graphs: More is not always better. In *Proceedings of the 24th Canadian Conference on Computational Geometry (CCCG 2012)*, pages 305–310, 2012.
2. P. Bose, R. Fagerberg, A. van Renssen, and S. Verdonschot. On plane constrained bounded-degree spanners. In *Proceedings of the 10th Latin American Symposium on Theoretical Informatics (LATIN 2012)*, volume 7256 of Lecture Notes in Computer Science, pages 85–96, 2012.
3. P. Bose and J. M. Keil. On the stretch factor of the constrained Delaunay triangulation. In *Proceedings of the 3rd International Symposium on Voronoi Diagrams in Science and Engineering (ISVD 2006)*, pages 25–31, 2006.
4. P. Bose and M. Smid. On plane geometric spanners: A survey and open problems. *Computational Geometry: Theory and Applications (CGTA)*, accepted, 2011.
5. P. Bose, A. van Renssen, and S. Verdonschot. On the spanning ratio of theta-graphs. In *Proceedings of the 13th Workshop on Algorithms and Data Structures (WADS 2013)*, volume 8037 of Lecture Notes in Computer Science, pages 182–194, 2013.
6. K. Clarkson. Approximation algorithms for shortest path motion planning. In *Proceedings of the 19th Annual ACM Symposium on Theory of Computing (STOC 1987)*, pages 56–65, 1987.
7. G. Das. The visibility graph contains a bounded-degree spanner. In *Proceedings of the 9th Canadian Conference on Computational Geometry (CCCG 1997)*, pages 70–75, 1997.
8. J. Keil. Approximating the complete Euclidean graph. In *Proceedings of the 1st Scandinavian Workshop on Algorithm Theory (SWAT 1988)*, pages 208–213, 1988.
9. G. Narasimhan and M. Smid. Geometric Spanner Networks. Cambridge University Press, 2007.
10. J. Ruppert and R. Seidel. Approximating the $d$-dimensional complete Euclidean graph. In *Proceedings of the 3rd Canadian Conference on Computational Geometry (CCCG 1991)*, pages 207–210, 1991.
samples/texts_merged/4150074.md
# On The Norms of Another Form of r-Circulant Matrices with The Hyper-Fibonacci and Lucas Numbers
MUSTAFA BAHŞI¹,*, SÜLEYMAN SOLAK²
¹Department of Mathematics and Science Education, Faculty of Education, Aksaray University, 68100, Aksaray, Turkey.
²Department of Mathematics and Science Education, Faculty of Education, N.E. University, 42090, Konya, Turkey.
Received: 05-04-2020 • Accepted: 05-07-2020
**ABSTRACT.** In this paper, we compute the spectral norms of *r*- circulant matrices with the hyper-Fibonacci and hyper-Lucas numbers of the forms $F_r = \text{Circ} - r(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$, $L_r = \text{Circ} - r(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ and their Hadamard and Kronecker products. For this, we firstly compute the spectral and Euclidean norms of circulant matrices of the forms $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L = \text{Circ}(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$. Moreover, we give some examples related to special cases of our results.
**2010 AMS Classification:** 15A60, 15B05, 15B36, 11B39
**Keywords:** Circulant matrix, *r*-circulant matrix, Hyper-Fibonacci numbers, Hyper-Lucas numbers, Euclidean norm, Spectral norm.
## 1. INTRODUCTION
Circulant and *r*-circulant matrices are closely related to signal processing, coding theory and many other areas [1, 10, 11]. An $n \times n$ *r*-circulant matrix $C_r$ is of the form
$$ C_r = \begin{bmatrix} c_0 & c_1 & c_2 & \cdots & c_{n-2} & c_{n-1} \\ rc_{n-1} & c_0 & c_1 & \cdots & c_{n-3} & c_{n-2} \\ rc_{n-2} & rc_{n-1} & c_0 & \cdots & c_{n-4} & c_{n-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ rc_1 & rc_2 & rc_3 & \cdots & rc_{n-1} & c_0 \end{bmatrix}. $$
When we take $r = 1$, the matrix $C_1 = C$ is called a circulant matrix. For brevity, we denote the matrices $C_r$ and $C_1$ as $C_r = \text{Circ} - r(c_0, c_1, \dots, c_{n-1})$ and $C = \text{Circ}(c_0, c_1, \dots, c_{n-1})$, respectively. Circulant matrices are normal; their inverses (when they exist), conjugate transposes, sums and products are also circulant [8]. The eigenvalues of $C$ are
$$ \lambda_m = \sum_{k=0}^{n-1} c_k w^{-mk} $$
where $w = e^{\frac{2\pi i}{n}}$ and $i = \sqrt{-1}$ [8, 14].
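The definition and the eigenvalue formula can be exercised directly. The snippet below is an illustrative construction (the helper `r_circulant` is an ad hoc name, not notation from the paper): it builds $C_r$ row by row and, for $r = 1$, checks the formula against numerically computed eigenvalues.

```python
import numpy as np

# Ad hoc helper: build the n x n r-circulant matrix C_r from c_0, ..., c_{n-1}.
def r_circulant(c, r):
    n = len(c)
    return np.array([[c[j - i] if j >= i else r * c[n - i + j]
                      for j in range(n)] for i in range(n)], dtype=complex)

c = np.array([1.0, 2.0, 3.0, 4.0])
n = len(c)
C = r_circulant(c, 1)                      # r = 1: an ordinary circulant matrix
w = np.exp(2j * np.pi / n)
# lambda_m = sum_{k=0}^{n-1} c_k * w^(-m k)
formula = [sum(c[k] * w ** (-m * k) for k in range(n)) for m in range(n)]
eig = np.linalg.eigvals(C)
for lam in formula:                        # every formula value is an eigenvalue of C
    assert np.min(np.abs(eig - lam)) < 1e-9
```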
*Corresponding Author
Email addresses: mhvbahsi@yahoo.com (M. Bahşi), ssolak42@yahoo.com (S. Solak)
This paper is the extended form of the talk entitled "On the norms of another form of *r*- circulant matrices with the hyper-Fibonacci and Lucas numbers" presented in the "International Conference on Mathematics and Mathematics Education" (ICMME-2016) at Firat University, Turkey, 12-14 May 2016.
Circulant and *r*-circulant matrices have been an active research area in recent decades. Especially, the norms of circulant matrices with special elements such as Fibonacci or Fibonacci-like numbers have been investigated extensively [2, 3, 5, 6, 15–19, 21–25]. Shen and Cen [21] derived upper and lower bounds for the spectral norms of *r*-circulant matrices in the forms $A = C_r(F_0, F_1, \dots, F_{n-1})$ and $B = C_r(L_0, L_1, \dots, L_{n-1})$. Tuğlu and Kızılateş [18] studied norms of circulant and *r*-circulant matrices involving harmonic Fibonacci and hyperharmonic Fibonacci numbers. Türkmen and Gökbaş [24] found some bound estimations for the spectral norm of *r*-circulant matrices with Pell and Pell-Lucas numbers. In [5], the authors computed spectral norms of circulant matrices in the forms $F = \text{Circ}(F_0^{(k)}, F_1^{(k)}, \dots, F_{n-1}^{(k)})$, $L = \text{Circ}(L_0^{(k)}, L_1^{(k)}, \dots, L_{n-1}^{(k)})$ and *r*-circulant matrices in the forms $F_r = \text{Circ} - r(F_0^{(k)}, F_1^{(k)}, \dots, F_{n-1}^{(k)})$, $L_r = \text{Circ} - r(L_0^{(k)}, L_1^{(k)}, \dots, L_{n-1}^{(k)})$, where $F_n^{(k)}$ and $L_n^{(k)}$ denote the hyper-Fibonacci and hyper-Lucas numbers, respectively.
In this research, we establish some bounds for the spectral norms of *r*-circulant matrices with the hyper-Fibonacci and hyper-Lucas numbers of the forms $F_r = \text{Circ} - r(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$, $L_r = \text{Circ} - r(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ and their Hadamard and Kronecker products. For this, we firstly compute the spectral and Euclidean norms of circulant matrices of the forms $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L = \text{Circ}(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$. We use some relations concerning the spectral norm, Euclidean norm, row norm, column norm. Moreover, we give some examples related to special cases of our results.
## 2. PRELIMINARIES
The Fibonacci numbers are defined by the recurrence relation $F_{n+1} = F_n + F_{n-1}$ ($n \ge 1$), $F_0 = 0$ and $F_1 = 1$. Similarly, the Lucas numbers are defined by $L_{n+1} = L_n + L_{n-1}$ ($n \ge 1$), $L_0 = 2$ and $L_1 = 1$. Fibonacci and Lucas numbers have many generalizations [7, 9, 20]. In [9], Dil and Mező introduced two concepts: hyper-Fibonacci numbers and hyper-Lucas numbers. These are defined as
$$F_n^{(k)} = \sum_{s=0}^{n} F_s^{(k-1)}, \text{ with } F_n^{(0)} = F_n, F_0^{(k)} = 0 \text{ and } F_1^{(k)} = 1$$
and
$$L_n^{(k)} = \sum_{s=0}^{n} L_s^{(k-1)}, \text{ with } L_n^{(0)} = L_n, L_0^{(k)} = 2, L_1^{(k)} = 2k + 1.$$
The hyper-Fibonacci and the hyper-Lucas numbers have the recurrence relations $F_n^{(k)} = F_{n-1}^{(k)} + F_n^{(k-1)}$ and $L_n^{(k)} = L_{n-1}^{(k)} + L_n^{(k-1)}$, respectively. Also, $F_n^{(k)}$ and $L_n^{(k)}$ have the following more explicit forms when $k=1, 2, 3$ or $n=2, 3$.
$$
\begin{aligned}
& F_n^{(1)} = F_{n+2} - 1, \quad F_n^{(2)} = F_{n+4} - n - 3 \quad \text{and} \quad F_n^{(3)} = F_{n+6} - \frac{n^2 + 7n + 16}{2}, \\
& L_n^{(1)} = L_{n+2} - 1, \quad L_n^{(2)} = L_{n+4} - n - 5 \quad \text{and} \quad L_n^{(3)} = L_{n+6} - \frac{n^2 + 11n + 32}{2}.
\end{aligned}
$$
$$F_2^{(n)} = n + 1, \quad F_3^{(n)} = \frac{n^2 + 3n + 4}{2} \quad \text{and} \quad L_2^{(n)} = n^2 + 2n + 3. \qquad (2.1)$$
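These closed forms are easy to confirm against the defining sums. The snippet below is an illustrative check (function names `F` and `L` are ad hoc): it generates hyper-Fibonacci and hyper-Lucas numbers directly from the definition and compares.

```python
from functools import lru_cache

# Ad hoc generators for hyper-Fibonacci F(n, k) and hyper-Lucas L(n, k).
@lru_cache(maxsize=None)
def F(n, k=0):
    if k == 0:                       # F(n, 0) = F_n, the Fibonacci numbers
        return n if n < 2 else F(n - 1) + F(n - 2)
    return sum(F(s, k - 1) for s in range(n + 1))

@lru_cache(maxsize=None)
def L(n, k=0):                       # L(n, 0) = L_n, the Lucas numbers
    if k == 0:
        return 2 if n == 0 else (1 if n == 1 else L(n - 1) + L(n - 2))
    return sum(L(s, k - 1) for s in range(n + 1))

for n in range(2, 10):
    assert F(n, 1) == F(n + 2) - 1
    assert F(n, 2) == F(n + 4) - n - 3
    assert L(n, 1) == L(n + 2) - 1
    assert L(n, 2) == L(n + 4) - n - 5
for k in range(0, 8):                # the formulas in (2.1)
    assert F(2, k) == k + 1 and F(3, k) == (k * k + 3 * k + 4) // 2
    assert L(2, k) == k * k + 2 * k + 3
```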
In [4], the authors defined hyper-Horadam numbers and studied some of their properties. They also gave the following formulas for sums of hyper-Fibonacci and hyper-Lucas numbers
$$\sum_{s=0}^{r} F_{n}^{(s)} = F_{n+1}^{(r)} - F_{n-1} \qquad (2.2)$$
|
| 67 |
+
|
| 68 |
+
and
|
| 69 |
+
|
| 70 |
+
$$\sum_{s=0}^{s} L_{n}^{(s)} = L_{n+1}^{(r)} - L_{n-1}. $$
|
| 71 |
+
|
| 72 |
+
For more information related to hyper - Fibonacci numbers see [4, 7, 9].
|
| 73 |
+
|
| 74 |
+
Now we give some definitions and lemmas related to our study.
|
| 75 |
+
---PAGE_BREAK---
|
| 76 |
+
|
| 77 |
+
**Definition 2.1.** Let $A = (a_{ij})$ be any $m \times n$ matrix. The Euclidean norm of $A$ is

$$
\|A\|_E = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}.
$$

**Definition 2.2.** Let $A = (a_{ij})$ be any $m \times n$ matrix. The spectral norm of $A$ is

$$
\|A\|_2 = \sqrt{\max_i \lambda_i (A^H A)},
$$

where $\lambda_i(A^H A)$ are the eigenvalues of $A^H A$ and $A^H$ is the conjugate transpose of $A$.

There are two well-known relations between the Euclidean norm and the spectral norm:

$$
\frac{1}{\sqrt{n}} \|A\|_E \le \|A\|_2 \le \|A\|_E, \quad (2.3)
$$

$$
\|A\|_2 \le \|A\|_E \le \sqrt{n} \|A\|_2 . \qquad (2.4)
$$
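The relations (2.3) and (2.4) are easy to probe numerically. The following sketch (ours, assuming Python with NumPy) checks them on a random real matrix, where `'fro'` gives the Euclidean (Frobenius) norm and `2` gives the spectral norm.

```python
# Numeric check of the norm relations (2.3)-(2.4) on a random matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))

frob = np.linalg.norm(A, 'fro')   # Euclidean norm ||A||_E
spec = np.linalg.norm(A, 2)       # spectral norm ||A||_2

assert frob / np.sqrt(n) <= spec + 1e-12   # left side of (2.3)
assert spec <= frob + 1e-12                # right side of (2.3), left of (2.4)
assert frob <= np.sqrt(n) * spec + 1e-12   # right side of (2.4)
```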
**Definition 2.3 ([13]).** Let $A = (a_{ij})$ and $B = (b_{ij})$ be $m \times n$ matrices. Then their Hadamard product $A \circ B$ is the entrywise product

$$
A \circ B = [a_{ij} b_{ij}].
$$

**Definition 2.4 ([13]).** Let $A = (a_{ij})$ and $B = (b_{ij})$ be $m \times n$ and $p \times r$ matrices, respectively. Then their Kronecker product $A \otimes B$ is the block matrix

$$
A \otimes B = [a_{ij} B].
$$

**Lemma 2.5 ([13]).** Let $A$ and $B$ be two $m \times n$ matrices. Then we have

$$
\|A \circ B\|_2 \le \|A\|_2 \|B\|_2 .
$$

**Lemma 2.6 ([13]).** Let $A$ and $B$ be two $m \times n$ matrices. Then we have

$$
\|A \circ B\|_2 \le r_1(A) c_1(B)
$$

where $r_1(A) = \max_{1 \le i \le m} \sqrt{\sum_{j=1}^{n} |a_{ij}|^2}$ and $c_1(B) = \max_{1 \le j \le n} \sqrt{\sum_{i=1}^{m} |b_{ij}|^2}$.

**Lemma 2.7 ([13]).** Let $A$ and $B$ be two $m \times n$ matrices. Then we have

$$
\|A \otimes B\|_2 = \|A\|_2 \|B\|_2.
$$

**Lemma 2.8 ([12]).** Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$. Then $A$ is a normal matrix if and only if the eigenvalues of $A^H A$ are $|\lambda_1|^2, |\lambda_2|^2, \dots, |\lambda_n|^2$.
## 3. MAIN RESULTS

**Theorem 3.1.** The spectral norm of the matrix $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \ldots, F_k^{(n-1)})$ is

$$
\|F\|_2 = F_{k+1}^{(n-1)} - F_{k-1}.
$$

*Proof.* Since the circulant matrix $F$ is normal, its spectral norm is equal to its spectral radius. Furthermore, since $F$ is irreducible and its entries are nonnegative, the spectral radius (hence the spectral norm) of $F$ equals its Perron root. Selecting the $n$-dimensional column vector $v = (1, 1, \ldots, 1)^T$, we get

$$
Fv = \left( \sum_{s=0}^{n-1} F_k^{(s)} \right) v.
$$

Thus $\sum_{s=0}^{n-1} F_k^{(s)}$ is an eigenvalue of $F$ associated with $v$, and it is the Perron root of $F$. Hence, by (2.2) we have

$$
\|F\|_2 = \sum_{s=0}^{n-1} F_k^{(s)} = F_{k+1}^{(n-1)} - F_{k-1}.
$$

This completes the proof. $\square$
**Example 3.2.** Theorem 3.1 and the equations in (2.1) yield

$$
\|F\|_2 = \begin{cases} 0, & \text{if } k=0, \\ n, & \text{if } k=1, \\ \frac{n^2+n}{2}, & \text{if } k=2. \end{cases}
$$
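Theorem 3.1 can be checked numerically by building the circulant matrix explicitly. The sketch below is our own (not from the paper); the helpers `fib` and `hyper_fib` are hypothetical names, and the circulant is built directly from its first row.

```python
# Sanity check of Theorem 3.1: the spectral norm of
# Circ(F_k^{(0)}, ..., F_k^{(n-1)}) equals F_{k+1}^{(n-1)} - F_{k-1}.
import numpy as np

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def hyper_fib(m, k):
    # F_m^{(k)} = sum_{s=0}^{m} F_s^{(k-1)}, with F_m^{(0)} = F_m
    if k == 0:
        return fib(m)
    return sum(hyper_fib(s, k - 1) for s in range(m + 1))

n, k = 7, 3
row = [hyper_fib(k, s) for s in range(n)]   # (F_k^{(0)}, ..., F_k^{(n-1)})
# Circ(row): entry (i, j) equals row[(j - i) mod n]
F = np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)], dtype=float)

spec = np.linalg.norm(F, 2)
expected = hyper_fib(k + 1, n - 1) - fib(k - 1)   # F_{k+1}^{(n-1)} - F_{k-1}
assert abs(spec - expected) < 1e-8
```

For $n = 7$, $k = 3$ both sides evaluate to $91$, in agreement with the theorem.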
**Corollary 3.3.** The Euclidean norm of the matrix $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ satisfies

$$
F_{k+1}^{(n-1)} - F_{k-1} \le \|F\|_E \le \sqrt{n}(F_{k+1}^{(n-1)} - F_{k-1}).
$$

*Proof.* The proof follows immediately from Theorem 3.1 and the relation between the spectral norm and the Euclidean norm in (2.4). $\square$

**Corollary 3.4.** We have

$$
\frac{1}{\sqrt{n}}(F_{k+1}^{(n-1)} - F_{k-1}) \leq \sqrt{\sum_{s=0}^{n-1} (F_k^{(s)})^2} \leq F_{k+1}^{(n-1)} - F_{k-1}. \quad (3.1)
$$

*Proof.* This follows from the definition of the Euclidean norm and Corollary 3.3. $\square$
**Theorem 3.5.** *The spectral norm of the matrix* $L = \text{Circ}(L_k^{(0)}, L_k^{(1)}, \ldots, L_k^{(n-1)})$ *is*

$$
\|L\|_2 = L_{k+1}^{(n-1)} - L_{k-1}.
$$

*Proof.* This theorem can be proved by a method similar to that of the proof of Theorem 3.1. $\square$

**Example 3.6.** Theorem 3.5 and the equations in (2.1) yield

$$
\|L\|_2 = \begin{cases} 2n, & \text{if } k=0, \\ n^2, & \text{if } k=1. \end{cases}
$$

(For $k=1$, $\|L\|_2 = L_2^{(n-1)} - L_0 = \left((n-1)^2 + 2(n-1) + 3\right) - 2 = n^2$.)

**Corollary 3.7.** We have

$$
L_{k+1}^{(n-1)} - L_{k-1} \le \|L\|_E \le \sqrt{n}(L_{k+1}^{(n-1)} - L_{k-1}).
$$

*Proof.* Theorem 3.5 and the relation between the spectral norm and the Euclidean norm in (2.4) immediately yield the desired result. $\square$

**Corollary 3.8.** The sum of squares of hyper-Lucas numbers satisfies

$$
\frac{1}{\sqrt{n}}(L_{k+1}^{(n-1)} - L_{k-1}) \leq \sqrt{\sum_{s=0}^{n-1} (L_k^{(s)})^2} \leq L_{k+1}^{(n-1)} - L_{k-1}. \quad (3.2)
$$

*Proof.* From the definition of the Euclidean norm and Corollary 3.7, the desired result is obtained. $\square$
**Corollary 3.9.** The spectral norm of the Hadamard product of $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L = \text{Circ}(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ satisfies

$$
\|F \circ L\|_2 \le (F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}).
$$

*Proof.* Since $\|F \circ L\|_2 \le \|F\|_2 \|L\|_2$ by Lemma 2.5, the desired result follows. $\square$

**Corollary 3.10.** The spectral norm of the Kronecker product of $F = \text{Circ}(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L = \text{Circ}(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ satisfies

$$
\|F \otimes L\|_2 = (F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}).
$$

*Proof.* Since $\|F \otimes L\|_2 = \|F\|_2 \|L\|_2$ by Lemma 2.7, we get the desired result. $\square$
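The Hadamard inequality (Lemma 2.5) and the Kronecker identity (Lemma 2.7) behind Corollaries 3.9 and 3.10 can be checked on small concrete circulants. This sketch is our own, using $k = 2$, where by (2.1) $F_2^{(s)} = s + 1$ and $L_2^{(s)} = s^2 + 2s + 3$; the helper `circ` is a hypothetical name.

```python
# Numeric check of Lemmas 2.5 and 2.7 on 4x4 circulants with k = 2.
import numpy as np

def circ(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)]
                     for i in range(n)], dtype=float)

F = circ([1, 2, 3, 4])      # F_2^{(s)} = s + 1,        s = 0..3
L = circ([3, 6, 11, 18])    # L_2^{(s)} = s^2 + 2s + 3, s = 0..3

nF = np.linalg.norm(F, 2)   # Perron root = 1 + 2 + 3 + 4  = 10
nL = np.linalg.norm(L, 2)   # Perron root = 3 + 6 + 11 + 18 = 38

assert abs(nF - 10) < 1e-9 and abs(nL - 38) < 1e-9
assert np.linalg.norm(F * L, 2) <= nF * nL + 1e-9              # Lemma 2.5
assert abs(np.linalg.norm(np.kron(F, L), 2) - nF * nL) < 1e-6  # Lemma 2.7
```

Note that `F * L` in NumPy is the entrywise (Hadamard) product, matching Definition 2.3.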
**Theorem 3.11.** Let $F_r = \text{Circ}_r(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ be an $r$-circulant matrix.

i) If $|r| \ge 1$, then

$$
\frac{1}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}) \le \|F_r\|_2 \le |r| (F_{k+1}^{(n-1)} - F_{k-1})^2 .
$$

ii) If $|r| < 1$, then

$$
\frac{|r|}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}) \le \|F_r\|_2 \le \sqrt{n} (F_{k+1}^{(n-1)} - F_{k-1}).
$$
*Proof.* Since the matrix $F_r$ is of the form

$$
F_r = \begin{bmatrix}
F_k^{(0)} & F_k^{(1)} & F_k^{(2)} & \cdots & F_k^{(n-2)} & F_k^{(n-1)} \\
rF_k^{(n-1)} & F_k^{(0)} & F_k^{(1)} & \cdots & F_k^{(n-3)} & F_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
rF_k^{(2)} & rF_k^{(3)} & rF_k^{(4)} & \cdots & F_k^{(0)} & F_k^{(1)} \\
rF_k^{(1)} & rF_k^{(2)} & rF_k^{(3)} & \cdots & rF_k^{(n-1)} & F_k^{(0)}
\end{bmatrix}
$$

and from the definition of Euclidean norm, we have

$$
\|F_r\|_E = \sqrt{\sum_{s=0}^{n-1} (n-s) (F_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (F_k^{(s)})^2}.
$$

i) Since $|r| \ge 1$, (3.1) yields

$$
\|F_r\|_E \geq \sqrt{\sum_{s=0}^{n-1} (n-s) (F_k^{(s)})^2 + \sum_{s=0}^{n-1} s (F_k^{(s)})^2} = \sqrt{n \sum_{s=0}^{n-1} (F_k^{(s)})^2} \geq F_{k+1}^{(n-1)} - F_{k-1}.
$$

From (2.4)

$$
\|F_r\|_2 \geq \frac{1}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}).
$$
Write $F_r = B \circ C$, where

$$
B = \begin{bmatrix}
1 & 1 & 1 & \dots & 1 & 1 \\
rF_k^{(n-1)} & 1 & 1 & \dots & 1 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
rF_k^{(2)} & rF_k^{(3)} & rF_k^{(4)} & \dots & 1 & 1 \\
rF_k^{(1)} & rF_k^{(2)} & rF_k^{(3)} & \dots & rF_k^{(n-1)} & 1
\end{bmatrix}
$$

and

$$
C = \begin{bmatrix}
F_k^{(0)} & F_k^{(1)} & F_k^{(2)} & \dots & F_k^{(n-2)} & F_k^{(n-1)} \\
1 & F_k^{(0)} & F_k^{(1)} & \dots & F_k^{(n-3)} & F_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 1 & 1 & \dots & F_k^{(0)} & F_k^{(1)} \\
1 & 1 & 1 & \dots & 1 & F_k^{(0)}
\end{bmatrix}.
$$

Then

$$
\begin{align*}
r_1(B) &= \max_{1 \le i \le n} \sqrt{\sum_{j=1}^{n} |b_{ij}|^2} = \sqrt{\sum_{j=1}^{n} |b_{nj}|^2} = \sqrt{1 + \sum_{s=1}^{n-1} |r|^2 (F_k^{(s)})^2} \\
&\le |r| \sqrt{\sum_{s=0}^{n-1} (F_k^{(s)})^2}
\end{align*}
$$

and

$$
c_1(C) = \max_{1 \le j \le n} \sqrt{\sum_{i=1}^{n} |c_{ij}|^2} = \sqrt{\sum_{i=1}^{n} |c_{in}|^2} = \sqrt{\sum_{s=0}^{n-1} (F_k^{(s)})^2}.
$$

Now (3.1) and Lemma 2.6 yield

$$
\|F_r\|_2 \le r_1(B) c_1(C) \le |r|(F_{k+1}^{(n-1)} - F_{k-1})^2.
$$

Thus,

$$
\frac{1}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}) \le \|F_r\|_2 \le |r| (F_{k+1}^{(n-1)} - F_{k-1})^2 .
$$
ii) Since $|r| < 1$, (3.1) yields

$$
\begin{align*}
\|F_r\|_E &= \sqrt{\sum_{s=0}^{n-1} (n-s) (F_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (F_k^{(s)})^2} \\
&\geq \sqrt{\sum_{s=0}^{n-1} (n-s) |r|^2 (F_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (F_k^{(s)})^2} \\
&= \sqrt{n |r|^2 \sum_{s=0}^{n-1} (F_k^{(s)})^2} \geq |r| (F_{k+1}^{(n-1)} - F_{k-1}).
\end{align*}
$$

From (2.4)

$$
\|F_r\|_2 \geq \frac{|r|}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}).
$$
Now write $F_r = D \circ E$, where

$$
D = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 & 1 \\
r & 1 & 1 & \cdots & 1 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
r & r & r & \cdots & 1 & 1 \\
r & r & r & \cdots & r & 1
\end{bmatrix}
$$

and

$$
E = \begin{bmatrix}
F_k^{(0)} & F_k^{(1)} & F_k^{(2)} & \dots & F_k^{(n-2)} & F_k^{(n-1)} \\
F_k^{(n-1)} & F_k^{(0)} & F_k^{(1)} & \dots & F_k^{(n-3)} & F_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
F_k^{(2)} & F_k^{(3)} & F_k^{(4)} & \dots & F_k^{(0)} & F_k^{(1)} \\
F_k^{(1)} & F_k^{(2)} & F_k^{(3)} & \dots & F_k^{(n-1)} & F_k^{(0)}
\end{bmatrix}.
$$

Then we compute $r_1(D)$ and $c_1(E)$ as

$$
r_1(D) = \max_{1 \le i \le n} \sqrt{\sum_{j=1}^{n} |d_{ij}|^2} = \sqrt{n}
$$

and

$$
c_1(E) = \max_{1 \le j \le n} \sqrt{\sum_{i=1}^{n} |e_{ij}|^2} = \sqrt{\sum_{s=0}^{n-1} (F_k^{(s)})^2}.
$$

Hence, from (3.1) and Lemma 2.6, we have

$$
\|F_r\|_2 \le r_1(D) c_1(E) \le \sqrt{n}(F_{k+1}^{(n-1)} - F_{k-1}).
$$

Thus,

$$
\frac{|r|}{\sqrt{n}} (F_{k+1}^{(n-1)} - F_{k-1}) \le \|F_r\|_2 \le \sqrt{n} (F_{k+1}^{(n-1)} - F_{k-1}).
$$

This completes the proof. $\square$
**Example 3.12.** By using Theorem 3.11 and the equations in (2.1), if $|r| \ge 1$, we have

$$
\|F_r\|_2 = 0, \text{ if } k = 0,
$$

$$
\begin{gather*}
\sqrt{n} \le \|F_r\|_2 \le |r|n^2, \text{ if } k=1, \\
\frac{1}{\sqrt{n}} \left(\frac{n^2+n}{2}\right) \le \|F_r\|_2 \le |r|\left(\frac{n^2+n}{2}\right)^2, \text{ if } k=2,
\end{gather*}
$$

and if $|r| < 1$, we have

$$
\|F_r\|_2 = 0, \text{ if } k=0,
$$

$$
|r| \sqrt{n} \le \|F_r\|_2 \le n \sqrt{n}, \text{ if } k=1,
$$

$$
\frac{|r|}{\sqrt{n}} \left( \frac{n^2 + n}{2} \right) \le \|F_r\|_2 \le \sqrt{n} \left( \frac{n^2 + n}{2} \right), \text{ if } k=2.
$$
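The two-sided bounds of Theorem 3.11 can also be probed numerically. The sketch below is our own (the helper `circ_r` is a hypothetical name): it builds the $r$-circulant matrix shown in the proof, with entries below the diagonal multiplied by $r$, and checks case i) for $k = 2$, where $F_2^{(s)} = s + 1$.

```python
# Numeric check of the bounds in Theorem 3.11 i) for an r-circulant
# matrix of hyper-Fibonacci numbers with k = 2 and |r| >= 1.
import numpy as np

def circ_r(row, r):
    # Circ_r(row): entry (i, j) is row[(j - i) mod n], times r when j < i.
    n = len(row)
    return np.array([[row[(j - i) % n] * (r if j < i else 1.0)
                      for j in range(n)] for i in range(n)])

n, r = 5, 2.0
row = [s + 1 for s in range(n)]      # F_2^{(s)} = s + 1
S = float(sum(row))                  # = F_3^{(n-1)} - F_1 = 15 for n = 5
spec = np.linalg.norm(circ_r(row, r), 2)

assert S / np.sqrt(n) <= spec + 1e-9     # lower bound of Theorem 3.11 i)
assert spec <= abs(r) * S ** 2 + 1e-9    # upper bound of Theorem 3.11 i)
```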
**Theorem 3.13.** Let $L_r = \text{Circ}_r(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ be an $r$-circulant matrix.

i) If $|r| \ge 1$, then

$$
\frac{1}{\sqrt{n}} (L_{k+1}^{(n-1)} - L_{k-1}) \le \|L_r\|_2 \le |r| (L_{k+1}^{(n-1)} - L_{k-1})^2 .
$$

ii) If $|r| < 1$, then

$$
\frac{|r|}{\sqrt{n}} (L_{k+1}^{(n-1)} - L_{k-1}) \le \|L_r\|_2 \le \sqrt{n} (L_{k+1}^{(n-1)} - L_{k-1}).
$$
*Proof.* Since the matrix $L_r$ is of the form

$$
L_r =
\begin{bmatrix}
L_k^{(0)} & L_k^{(1)} & L_k^{(2)} & \cdots & L_k^{(n-2)} & L_k^{(n-1)} \\
r L_k^{(n-1)} & L_k^{(0)} & L_k^{(1)} & \cdots & L_k^{(n-3)} & L_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
r L_k^{(2)} & r L_k^{(3)} & r L_k^{(4)} & \cdots & L_k^{(0)} & L_k^{(1)} \\
r L_k^{(1)} & r L_k^{(2)} & r L_k^{(3)} & \cdots & r L_k^{(n-1)} & L_k^{(0)}
\end{bmatrix}
$$

and from the definition of Euclidean norm, we have

$$
\|L_r\|_E = \sqrt{\sum_{s=0}^{n-1} (n-s) (L_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (L_k^{(s)})^2}.
$$

i) Since $|r| \ge 1$, (3.2) yields

$$
\|L_r\|_E \geq \sqrt{\sum_{s=0}^{n-1} (n-s)(L_k^{(s)})^2 + \sum_{s=0}^{n-1} s(L_k^{(s)})^2} = \sqrt{n \sum_{s=0}^{n-1} (L_k^{(s)})^2} \geq L_{k+1}^{(n-1)} - L_{k-1}.
$$

From (2.3)

$$
\|L_r\|_2 \geq \frac{1}{\sqrt{n}} (L_{k+1}^{(n-1)} - L_{k-1}).
$$
Now write $L_r = B \circ C$, where

$$
B = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 & 1 \\
r L_k^{(n-1)} & 1 & 1 & \cdots & 1 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
r L_k^{(2)} & r L_k^{(3)} & r L_k^{(4)} & \cdots & 1 & 1 \\
r L_k^{(1)} & r L_k^{(2)} & r L_k^{(3)} & \cdots & r L_k^{(n-1)} & 1
\end{bmatrix}
$$

and

$$
C = \begin{bmatrix}
L_k^{(0)} & L_k^{(1)} & L_k^{(2)} & \cdots & L_k^{(n-2)} & L_k^{(n-1)} \\
1 & L_k^{(0)} & L_k^{(1)} & \cdots & L_k^{(n-3)} & L_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
1 & 1 & 1 & \cdots & L_k^{(0)} & L_k^{(1)} \\
1 & 1 & 1 & \cdots & 1 & L_k^{(0)}
\end{bmatrix}.
$$

Then we have

$$
\begin{align*}
r_1 (B) &= \max_{1 \le i \le n} \sqrt{\sum_{j=1}^{n} |b_{ij}|^2} = \sqrt{\sum_{j=1}^{n} |b_{nj}|^2} = \sqrt{1 + \sum_{s=1}^{n-1} |r|^2 (L_k^{(s)})^2} \\
&\le |r| \sqrt{\sum_{s=0}^{n-1} (L_k^{(s)})^2}
\end{align*}
$$

and

$$
c_1(C) = \max_{1 \le j \le n} \sqrt{\sum_{i=1}^{n} |c_{ij}|^2} = \sqrt{\sum_{i=1}^{n} |c_{in}|^2} = \sqrt{\sum_{s=0}^{n-1} (L_k^{(s)})^2}.
$$

Hence, from (3.2) and Lemma 2.6, we have

$$
\|L_r\|_2 \le r_1(B)c_1(C) \le |r|(L_{k+1}^{(n-1)} - L_{k-1})^2.
$$

Thus, we write

$$
\frac{1}{\sqrt{n}} (L_{k+1}^{(n-1)} - L_{k-1}) \le \|L_r\|_2 \le |r| (L_{k+1}^{(n-1)} - L_{k-1})^2 .
$$
ii) Since $|r| < 1$, (3.2) yields

$$
\begin{align*}
\|L_r\|_E &= \sqrt{\sum_{s=0}^{n-1} (n-s) (L_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (L_k^{(s)})^2} \\
&\geq \sqrt{\sum_{s=0}^{n-1} (n-s) |r|^2 (L_k^{(s)})^2 + \sum_{s=0}^{n-1} s |r|^2 (L_k^{(s)})^2} \\
&= \sqrt{n|r|^2 \sum_{s=0}^{n-1} (L_k^{(s)})^2} \geq |r|(L_{k+1}^{(n-1)} - L_{k-1}) .
\end{align*}
$$

From (2.3)

$$
\|L_r\|_2 \geq \frac{|r|}{\sqrt{n}} (L_{k+1}^{(n-1)} - L_{k-1}).
$$

Now, let the matrices $D$ and $E$ be

$$
D = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 & 1 \\
r & 1 & 1 & \cdots & 1 & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
r & r & r & \cdots & 1 & 1 \\
r & r & r & \cdots & r & 1
\end{bmatrix}
$$

and

$$
E = \begin{bmatrix}
L_k^{(0)} & L_k^{(1)} & L_k^{(2)} & \dots & L_k^{(n-2)} & L_k^{(n-1)} \\
L_k^{(n-1)} & L_k^{(0)} & L_k^{(1)} & \dots & L_k^{(n-3)} & L_k^{(n-2)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
L_k^{(2)} & L_k^{(3)} & L_k^{(4)} & \dots & L_k^{(0)} & L_k^{(1)} \\
L_k^{(1)} & L_k^{(2)} & L_k^{(3)} & \dots & L_k^{(n-1)} & L_k^{(0)}
\end{bmatrix}.
$$
That is, $L_r = D \circ E$. Then we obtain

$$r_1(D) = \max_{1 \le i \le n} \sqrt{\sum_{j=1}^{n} |d_{ij}|^2} = \sqrt{n}$$

and

$$c_1(E) = \max_{1 \le j \le n} \sqrt{\sum_{i=1}^{n} |e_{ij}|^2} = \sqrt{\sum_{s=0}^{n-1} (L_k^{(s)})^2}.$$

Now (3.2) and Lemma 2.6 yield

$$\|L_r\|_2 \le r_1(D) c_1(E) \le \sqrt{n}(L_{k+1}^{(n-1)} - L_{k-1}).$$

Thus,

$$\frac{|r|}{\sqrt{n}}(L_{k+1}^{(n-1)} - L_{k-1}) \le \|L_r\|_2 \le \sqrt{n}(L_{k+1}^{(n-1)} - L_{k-1}).$$

This completes the proof. $\square$
**Example 3.14.** By using Theorem 3.13 and the equations in (2.1), if $|r| \ge 1$, we have

$$2\sqrt{n} \le \|L_r\|_2 \le 4n^2|r|, \text{ if } k=0,$$

$$n\sqrt{n} \le \|L_r\|_2 \le |r|n^4, \text{ if } k=1,$$

and if $|r| < 1$, we have

$$2\sqrt{n}|r| \le \|L_r\|_2 \le 2n\sqrt{n}, \text{ if } k=0,$$

$$|r|\, n\sqrt{n} \le \|L_r\|_2 \le n^2\sqrt{n}, \text{ if } k=1.$$
**Corollary 3.15.** The spectral norm of the Hadamard product of $F_r = \text{Circ}_r(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L_r = \text{Circ}_r(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ satisfies:

i) If $|r| \ge 1$, then

$$\|F_r \circ L_r\|_2 \le |r|^2 (F_{k+1}^{(n-1)} - F_{k-1})^2 (L_{k+1}^{(n-1)} - L_{k-1})^2.$$

ii) If $|r| < 1$, then

$$\|F_r \circ L_r\|_2 \le n (F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}).$$

*Proof.* The proof is immediate from Theorems 3.11 and 3.13 and the inequality $\|F_r \circ L_r\|_2 \le \|F_r\|_2 \|L_r\|_2$. $\square$

**Corollary 3.16.** The spectral norm of the Kronecker product of $F_r = \text{Circ}_r(F_k^{(0)}, F_k^{(1)}, \dots, F_k^{(n-1)})$ and $L_r = \text{Circ}_r(L_k^{(0)}, L_k^{(1)}, \dots, L_k^{(n-1)})$ satisfies:

i) If $|r| \ge 1$, then

$$\frac{1}{n}(F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}) \le \|F_r \otimes L_r\|_2 \le |r|^2 (F_{k+1}^{(n-1)} - F_{k-1})^2 (L_{k+1}^{(n-1)} - L_{k-1})^2.$$

ii) If $|r| < 1$, then

$$\frac{|r|^2}{n}(F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}) \le \|F_r \otimes L_r\|_2 \le n(F_{k+1}^{(n-1)} - F_{k-1})(L_{k+1}^{(n-1)} - L_{k-1}).$$

*Proof.* The proof is immediate from Theorems 3.11 and 3.13 and the identity $\|F_r \otimes L_r\|_2 = \|F_r\|_2 \|L_r\|_2$. $\square$
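Corollary 3.16 combines the $r$-circulant bounds with the Kronecker identity of Lemma 2.7, and a concrete instance can be checked directly. The sketch below is our own (the helper `circ_r` is a hypothetical name), for case i) with $k = 2$, where $F_2^{(s)} = s + 1$ and $L_2^{(s)} = s^2 + 2s + 3$.

```python
# Numeric check of Corollary 3.16 i): for |r| >= 1 the Kronecker product
# of the two r-circulants obeys the stated two-sided bound (k = 2).
import numpy as np

def circ_r(row, r):
    # Circ_r(row): entry (i, j) is row[(j - i) mod n], times r when j < i.
    n = len(row)
    return np.array([[row[(j - i) % n] * (r if j < i else 1.0)
                      for j in range(n)] for i in range(n)])

n, r = 4, 1.5
Frow = [s + 1 for s in range(n)]               # F_2^{(s)} = s + 1
Lrow = [s * s + 2 * s + 3 for s in range(n)]   # L_2^{(s)} = s^2 + 2s + 3
SF, SL = float(sum(Frow)), float(sum(Lrow))    # the exact circulant norms

spec = np.linalg.norm(np.kron(circ_r(Frow, r), circ_r(Lrow, r)), 2)
assert SF * SL / n <= spec + 1e-9              # lower bound of 3.16 i)
assert spec <= r * r * SF ** 2 * SL ** 2 + 1e-9   # upper bound of 3.16 i)
```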
## 4. CONCLUSION

In this study, we present some bounds for the spectral norms of a different form of $r$-circulant matrices with the hyper-Fibonacci and hyper-Lucas numbers, using relations among the spectral, Euclidean, row, and column norms. The significance of our results is that the bounds are expressed directly in terms of hyper-Fibonacci and hyper-Lucas numbers.

## CONFLICTS OF INTEREST

The authors declare that there are no conflicts of interest regarding the publication of this article.

## REFERENCES
[1] Bae, J., *Circulant matrix factorization based on Schur algorithm for designing optical multimirror filters*, Japanese Journal of Applied Physics **45**(6A)(2006), 5163-5168.

[2] Bahşi, M., *On the norms of r-circulant matrices with the hyperharmonic numbers*, Journal of Mathematical Inequalities **10**(2)(2016), 445-458.

[3] Bahşi, M., *On the norms of circulant matrices with the generalized Fibonacci and Lucas numbers*, TWMS J. Pure Appl. Math. **6**(1)(2015), 84-92.

[4] Bahşi, M., Mező, I., Solak, S., *A symmetric algorithm for hyper-Fibonacci and hyper-Lucas numbers*, Annales Mathematicae et Informaticae **43**(2014), 19-27.

[5] Bahşi, M., Solak, S., *On the norms of r-circulant matrices with the hyper-Fibonacci and Lucas numbers*, Journal of Mathematical Inequalities **8**(4)(2014), 693-705.

[6] Bahşi, M., Solak, S., *On the circulant matrices with arithmetic sequence*, Int. J. Contemp. Math. Sciences **5**(25)(2010), 1213-1222.

[7] Cao, N-N., Zhao, F-Z., *Some properties of hyperfibonacci and hyperlucas numbers*, Journal of Integer Sequences **13**(2010), Article 10.0.8.

[8] Davis, P.J., *Circulant Matrices*, Wiley, New York, Chichester, Brisbane, 1979.

[9] Dil, A., Mező, I., *A symmetric algorithm for hyperharmonic and Fibonacci numbers*, Appl. Math. Comp. **206**(2008), 942-951.

[10] Fischer, B., Modersitzki, J., *Fast inversion of matrices arising in image processing*, Numer. Algorithms **22**(1999), 1-11.

[11] Georgiou, S.D., Kravvaritis, C., *New good quasi-cyclic codes over GF(3)*, Int. J. Algebra **1**(1)(2007), 11-24.

[12] Horn, R.A., Johnson, C.R., *Matrix Analysis*, Cambridge University Press, Cambridge, 1985.

[13] Horn, R.A., Johnson, C.R., *Topics in Matrix Analysis*, Cambridge University Press, Cambridge, 1991.

[14] Karner, H., Schneid, J., Ueberhuber, C.W., *Spectral decomposition of real circulant matrices*, Linear Algebra and Its Appl. **367**(2003), 301-311.

[15] Kızılateş, C., Tuğlu, N., *On the bounds for the spectral norms of geometric circulant matrices*, Journal of Inequalities and Applications (2016), 2016:312.

[16] Kocer, E.G., *Circulant, negacyclic and semicirculant matrices with the modified Pell, Jacobsthal and Jacobsthal-Lucas numbers*, Hacettepe Journal of Mathematics and Statistics **36**(2)(2007), 133-142.

[17] Kocer, E.G., Mansour, T., Tuğlu, N., *Norms of circulant and semicirculant matrices with Horadam's numbers*, Ars Combinatoria **85**(2007), 353-359.

[18] Tuğlu, N., Kızılateş, C., *On the norms of circulant and r-circulant matrices with the hyperharmonic Fibonacci numbers*, Journal of Inequalities and Applications (2015), 2015:253.

[19] Tuğlu, N., Kızılateş, C., *On the norms of some special matrices with the harmonic Fibonacci numbers*, Gazi University Journal of Science **28**(3)(2015), 497-501.

[20] Öcal, A.A., Tuğlu, N., Altınışık, E., *On the representation of k-generalized Fibonacci and Lucas numbers*, Appl. Math. Comp. **170**(2005), 584-596.

[21] Shen, S., Cen, J., *On the bounds for the norms of r-circulant matrices with the Fibonacci and Lucas numbers*, Appl. Math. Comp. **216**(2010), 2891-2897.

[22] Solak, S., *On the norms of circulant matrices with the Fibonacci and Lucas numbers*, Appl. Math. Comp. **160**(2005), 125-132.

[23] Solak, S., *Erratum to "On the norms of circulant matrices with the Fibonacci and Lucas numbers" [Appl. Math. Comp. 160 (2005) 125-132]*, Appl. Math. Comp. **190**(2007), 1855-1856.

[24] Türkmen, R., Gökbaş, H., *On the spectral norm of r-circulant matrices with the Pell and Pell-Lucas numbers*, Journal of Inequalities and Applications (2016), 2016:65.

[25] Yazlik, Y., Taskara, N., *On the norms of an r-circulant matrix with the generalized k-Horadam numbers*, Journal of Inequalities and Applications (2013), 2013:394.
samples/texts_merged/4385907.md ADDED

The diff for this file is too large to render. See raw diff

samples/texts_merged/4515563.md ADDED

@@ -0,0 +1,444 @@
Reaching the quantum Hall regime with rotating Rydberg-dressed atoms

Michele Burrello,¹,* Igor Lesanovsky,²,³ and Andrea Trombettoni⁴,⁵,⁶

¹Niels Bohr International Academy and Center for Quantum Devices, University of Copenhagen, Lyngbyvej 2, 2100 Copenhagen, Denmark

²Institut für Theoretische Physik, Universität Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany

³School of Physics and Astronomy and Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, The University of Nottingham, Nottingham, NG7 2RD, United Kingdom

⁴Department of Physics, University of Trieste, Strada Costiera 11, I-34151 Trieste, Italy

⁵CNR-IOM DEMOCRITOS Simulation Center, via Bonomea 265, I-34136 Trieste, Italy

⁶SISSA and INFN, Sezione di Trieste, via Bonomea 265, I-34136 Trieste, Italy
Despite the striking progress in the field of quantum gases, one of their much anticipated applications – the simulation of quantum Hall states – remains elusive: all experimental approaches so far have failed in reaching a sufficiently small ratio between atom and vortex densities. In this paper we consider rotating Rydberg-dressed atoms in magnetic traps: these gases offer strong and tunable non-local repulsive interactions and very low densities; hence they provide an exceptional platform to reach the quantum Hall regime. Based on the Lindemann criterion and the analysis of the interplay of the length scales of the system, we show that there exists an optimal value of the dressing parameters that minimizes the ratio between the filling factor of the system and its critical value to enter the Hall regime, thus making it possible to reach this strongly-correlated phase for more than 1000 atoms under realistic conditions.
|
| 25 |
+
|
| 26 |
+
*Introduction.*– In the last decades, ultracold atoms have allowed for the study and quantum simulation of a plethora of quantum many-body effects [1]. Despite these impressive successes, one of the most anticipated applications has so far resisted implementation: reaching the quantum Hall (QH) regime.

Since the realization of Bose–Einstein condensates in the mid 90s [2], the nucleation of quantized vortices in rotating ultracold atoms [3–5] naturally suggested the possibility of creating QH states by rotating strongly interacting gases. The dynamics of atomic clouds in the rotating frame can indeed be described in terms of Coriolis/Lorentz forces, which define in turn the appearance of a synthetic magnetic field *B* for neutral atoms [6, 7].

Reaching the QH regime, however, requires strong magnetic fields: it is necessary to achieve angular velocities extremely close to the critical value set by the trapping potentials – so close that, for practical purposes, this possibility was experimentally ruled out.

Alternative approaches based on optically induced gauge potentials have been proposed and tested [8–11], but, also in this case, the simulated magnetic fields were not strong enough to access the QH regime.

In all these experiments, the interactions among the atoms were effectively contact interactions. In recent years, however, atoms with long-range interactions have been the focus of intensive investigations, in the cases of both dipolar gases [12–15] and Rydberg-dressed atoms with strong van der Waals interactions [16–18]. Intuitively, such strong long-range repulsions favor the formation of gases with lower densities, thus making it easier to achieve the low filling factors required for QH states.

In this work, we consider ultracold bosonic gases subject to long-range repulsive interactions and synthetic gauge fields. We will show that moderate van der Waals interactions help in reaching the ratio between atomic and vortex densities required for the onset of the QH regime. We will focus on Rydberg-dressed atoms, which allow us to tune the effective value of the interactions, and we will mostly address the case of synthetic fields obtained by rotation, since for a realistic number of atoms and other parameters this technique provides better results than optically generated magnetic fields.

Our main result is that the long-range interaction between Rydberg-dressed atoms facilitates reaching lower filling factors in comparison with ground-state atoms subject to the same artificial magnetic field. In particular, we show that the melting transition of the superfluid vortex lattice is favored by this interaction, and we hypothesize that this signals the onset of the QH regime, shown to appear for small filling factors in recent works [19, 20].

# I. THE MAIN IDEA

When a Bose–Einstein condensate is rotating and its angular velocity, and thus the artificial magnetic field, is progressively increased, vortices enter the superfluid and arrange themselves in denser and denser triangular lattices. For strong magnetic fields and confinement in the direction of the rotation axis, the condensate enters the so-called lowest Landau level (LLL) regime, in which the vortex size scales with the magnetic length $l_B = \sqrt{\hbar/B}$ [21]. The further transition from the LLL to the QH regime corresponds to the melting of the vortex lattice into a strongly-correlated phase [6, 7, 22], driven by the quantum fluctuations of the vortices. The critical value of $B$ for this transition can be estimated from the Lindemann criterion: the lattice melts when the ratio $l_L/l_B$ between the quantum fluctuation $l_L$ of the positions of the vortex cores and the intervortex distance (proportional to $l_B$) reaches a critical value $\alpha_L$. The main parameter characterizing these systems is the filling factor

$$\nu = n/n_v = 2\pi\hbar n/B, \quad (1)$$

given by the ratio between the atom and the vortex areal densities $n$ and $n_v$, respectively. $\nu$ is proportional to $(l_B/l_L)^2$ [7, 22], such that the gas enters the QH regime for $\nu$ smaller than a critical value $\nu_c$. For weakly interacting bosons, the most conservative estimates give a critical filling factor $\nu_{c,0} \lesssim 6$ [5, 7, 23–25]. Here the subscript 0 specifies that this is the critical value for weak contact interactions. We will show below that such a critical value does not hold for interactions beyond a certain threshold.

*For correspondence: michele.burrello@nbi.ku.dk

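Eq. (1) can be made concrete with a short numerical sketch. The values below are illustrative (the $^{87}$Rb mass and a rotation frequency of the order of those used later in the paper, $\gamma = 0.98$ of a $2\pi \cdot 100$ Hz trap); the 2D density of $80$ µm$^{-2}$ is an assumed round number chosen to land near the experimentally quoted $\nu \sim 300$:

```python
import math

# Physical constants (SI)
h = 6.62607015e-34           # Planck constant, J s
hbar = h / (2 * math.pi)
m_Rb = 1.443e-25             # mass of 87Rb, kg

# Illustrative rotation: Omega_rot = 2*pi*98 Hz (gamma = 0.98 of a 100 Hz trap)
Omega_rot = 2 * math.pi * 98.0

B = 2 * m_Rb * Omega_rot             # synthetic field, B = 2 m Omega_rot
l_B = math.sqrt(hbar / B)            # magnetic length
n_v = B / (2 * math.pi * hbar)       # vortex areal density, so nu = n / n_v

print(f"l_B = {l_B*1e6:.2f} um, n_v = {n_v*1e-12:.3f} um^-2")
# An (assumed) 2D density n ~ 80 um^-2 then gives nu = n/n_v ~ 300,
# the order of magnitude reached in the rotating-trap experiments [3, 4].
print(f"nu ~ {80e12 / n_v:.0f}")
```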
The limit $\nu = \nu_{c,0}$ is extremely difficult to reach for rotating gases: the centrifugal limit of the in-plane trapping poses a severe bound on the maximal angular velocity, and thus on the maximal field $B$; on the experimental side, the smallest filling factors achieved [3, 4] are about $\nu \sim 300$. Much smaller filling factors ($\nu \approx 1$) were instead reached in rotating optical microtraps, but only for a very small number of atoms, $N \approx 5$ [26].

As is evident from Eq. (1), besides increasing $B$, there is another strategy to lower $\nu$: namely, to reduce the two-dimensional (2D) atom density $n$. To this purpose, we consider Rydberg-dressed atoms: we will first estimate the behavior of the superfluid density as a function of the interactions and then discuss its implications for the phase diagram of these gases. We will show that, for moderate long-range interactions, the range of parameters for which the QH regime exists is enhanced, whereas, when the interactions exceed a certain threshold, it is suppressed. Hence there exists an optimal value of the Rydberg dressing that minimizes the magnetic field needed to enter the QH regime.

## II. THE PHYSICAL SYSTEM

A rotating atomic cloud has dynamics equivalent to those of charged particles in a magnetic field, as an effect of the Coriolis force [5–7]. We consider a Bose–Einstein condensate subject to a harmonic potential with trapping frequency $\Omega_{\text{tr}}$, and rotating with frequency $\Omega_{\text{rot}}$. As discussed in App. A, in the rotating frame the single-particle Hamiltonian is

$$H_{\text{rot}} = \frac{(-i\hbar\nabla + \vec{A})^2}{2m} + \frac{m(\Omega_{\text{tr}}^2 - \Omega_{\text{rot}}^2)r^2}{2}. \quad (2)$$

Here we have introduced the vector potential $\vec{A} = m\Omega_{\text{rot}}(y, -x, 0)$ and $m$ is the atomic mass. The resulting artificial magnetic field lies along $\hat{z}$ with intensity $B \equiv 2m\Omega_{\text{rot}}$, and the effective in-plane trapping potential is reduced by the centrifugal force. In particular, we define the ratio $\gamma = \Omega_{\text{rot}}/\Omega_{\text{tr}} < 1$, such that the effective trapping frequency is $\sqrt{1-\gamma^2}\,\Omega_{\text{tr}}$.

The Rydberg dressing amounts to a weak coupling between a ground state $|g\rangle$ of the chosen atoms and a Rydberg state $|e\rangle$. This coupling is obtained through laser beams propagating in the $\hat{z}$ direction, such that the related Rabi frequency $\Omega = |\Omega|e^{i\phi_{\Omega}}$ does not depend on the position in the $xy$ plane. We consider an effective detuning $2\delta \gg \Omega$ for this coupling, such that, for a single atom, the lowest energy state becomes

$$|\tilde{g}\rangle = -e^{i\phi_{\Omega}} \sin(\theta/2) |e\rangle + \cos(\theta/2) |g\rangle, \quad (3)$$

where $\tan\theta = |\Omega|/\delta$ (see App. A). A generic Rydberg interaction $H_{\text{int}} = V(\vec{r}_1 - \vec{r}_2)|ee\rangle\langle ee|$ results in a typical long-range interaction that decays like $\sin^4(\theta/2)V$ for large separations and is characterized by a plateau $2\delta \sin^4(\theta/2)$ at short distances:

$$V_{\text{Rydberg}}(|\vec{r}_1 - \vec{r}_2|) \approx \frac{C_6 \sin^4(\theta/2)}{a^6 + |\vec{r}_1 - \vec{r}_2|^6}, \quad (4)$$

where we introduced the Rydberg radius $a \approx (C_6/2\delta)^{1/6}$ and the van der Waals coefficient $C_6$. For the typical Rydberg state $43S_{1/2}$ of $^{87}$Rb, the interaction is given by $C_6 \approx h \cdot 2.4$ GHz µm$^6$ [27], with $a \approx 2.0$ µm for a mixing angle $\theta = 0.05$.

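The quoted value of $a$ and the dressed-interaction scales follow directly from Eq. (4). A minimal numerical check, using the quoted $C_6$ and the detuning $\delta = 20$ MHz adopted later in Fig. 2 (working throughout in units of $h$, i.e. frequencies):

```python
import math

# Parameters for the 43S_1/2 state of 87Rb quoted in the text
C6 = 2.4e9        # van der Waals coefficient, in units of h * Hz * um^6
delta = 20e6      # detuning delta, Hz; the soft-core scale is 2*delta
theta = 0.05      # mixing angle

a = (C6 / (2 * delta)) ** (1 / 6)       # Rydberg radius, um
V6 = math.sin(theta / 2) ** 4 * C6      # effective coupling sin^4(theta/2) C6
plateau = V6 / a**6                     # short-distance plateau = 2*delta*sin^4(theta/2)

print(f"a = {a:.2f} um")                        # ~2.0 um, as quoted
print(f"V6/a^4 = {V6 / a**4:.0f} h*Hz*um^2")    # ~61 h*Hz*um^2, used in Sec. III
print(f"plateau = {plateau:.1f} h*Hz")
```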
## III. DENSITY OF A 2D RYDBERG-DRESSED GAS

The effect of the strong interactions of Rydberg-dressed atoms on their density can be estimated with a variational Gross–Pitaevskii calculation. In particular, we consider the isotropic interaction in Eq. (4) and we focus on gases in the lowest Landau level (LLL) regime with strong confinement in the third direction, such that a 2D approximation holds. We combine a description of the van der Waals interactions in the spirit of [28] with the variational ansatz introduced in [29] for the superfluid vortex lattice, defined by the many-body wavefunction $\psi_s(\vec{r}) = p(\vec{r})e^{-r^2/(2s^2)}\sqrt{N/\pi s^2}$, where $r = |\vec{r}|$. This wavefunction is characterized by a periodic modulation $p(\vec{r})$, which defines the triangular vortex lattice, and a slowly-varying Gaussian envelope of width $s$. The average density of the system is approximately set only by its long-distance behavior, thus by the Gaussian envelope. It is therefore a function of the variational parameter $s$, which is proportional to the average distance of the atoms from the trap center. By averaging over the modulation (see App. A), the mean-field energy results in

$$E \approx \frac{\hbar^2 N}{2ms^2} + \frac{Nm\Omega_{\text{tr}}^2(1-\gamma^2)s^2}{2} + \frac{N^2}{s^2} \left[ \frac{bg}{4\pi} + \int_0^\infty r\, dr\, e^{-r^2/(2s^2)} \frac{V_6}{a^6+r^6} \right]. \quad (5)$$

Here $g$ is the 2D contact interaction parameter and $V_6 = \sin^4(\theta/2)C_6$ is the effective van der Waals coupling constant. The numerical factor $b \approx 1.1596$ effectively increases the contact interaction in the LLL approximation due to the inhomogeneity introduced by the triangular vortex lattice [29] (see App. A).

The potential energy and the long-range interaction are estimated by separating their rapidly and slowly oscillating contributions with a procedure analogous to the so-called averaged vortex approximation [30] for $s \gg l_B$: within each unit cell of the lattice, the value of the Gaussian factor, harmonic potential and van der Waals interaction is considered approximately constant, such that the modulation averages to 1 and does not affect the final result.

By expanding the integral in Eq. (5) in series of $s^{-1}$ for $s \gg a$ and minimizing the energy, we find

$$s = \sqrt[4]{\frac{Nm g' + 2\pi\hbar^2}{2\pi m^2 \Omega_{\text{tr}}^2 (1 - \gamma^2)}}, \quad (6)$$

$$g' = bg + \frac{4\pi^2 V_6}{3\sqrt{3}a^4} \approx 1.16g + 7.6 \frac{V_6}{a^4}. \quad (7)$$

With the introduction of the Rydberg dressing, the contact interaction $g$ must be effectively replaced by the considerably stronger interaction amplitude $g'$, derived from the large-$s$ expansion of the integral in Eq. (5). In the following, we use Eq. (6) to estimate the gas density. We adopt in particular $n = N/(4\pi s^2)$: $n$ scales like $1/\sqrt{g'}$ when the kinetic energy is negligible.

For a gas of $^{87}$Rb, the typical strength of the contact interaction is $g \approx h \cdot 23$ Hz µm² (for a trapping frequency along $\hat{z}$ given by $\Omega_z = 2\pi$ kHz). For the Rydberg state $43S_{1/2}$ at mixing angle $\theta = 0.05$, $V_6/a^4 \approx h \cdot 61$ Hz µm². The ratio between the Rydberg and contact interactions is thus $(7.6V_6/a^4)/bg \approx 18$, which implies that $n$ and $\nu$ decrease by a factor $f_\nu \sim 4.5$.

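The numbers entering Eq. (7) can be verified directly. The sketch below checks the exact coefficient $4\pi^2/(3\sqrt{3}) \approx 7.6$, the quoted interaction ratio and density-reduction factor, and (with a simple Riemann sum, so the tolerance is coarse) the large-$s$ limit of the integral in Eq. (5), $\int_0^\infty r\,dr/(a^6+r^6) = \pi/(3\sqrt{3}a^4)$:

```python
import math

# Exact coefficient of V6/a^4 in Eq. (7)
coeff = 4 * math.pi**2 / (3 * math.sqrt(3))
print(f"4*pi^2/(3*sqrt(3)) = {coeff:.2f}")        # ~7.60

# 87Rb values quoted in the text, in units of h * Hz * um^2
b, g, V6_over_a4 = 1.1596, 23.0, 61.0
ratio = coeff * V6_over_a4 / (b * g)              # Rydberg term vs contact term in g'
f_nu = math.sqrt(1 + ratio)                       # density reduction, sqrt(g'/(b g))
print(f"ratio ~ {ratio:.0f}, f_nu ~ {f_nu:.1f}")  # of order 18 and 4.5, as quoted

# Riemann-sum check of the large-s limit of the integral in Eq. (5), a = 1
a, dr = 1.0, 1e-3
quad = sum(r / (a**6 + r**6) * dr for r in (i * dr for i in range(1, 200_000)))
exact = math.pi / (3 * math.sqrt(3) * a**4)
print(abs(quad - exact) < 1e-2)
```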
FIG. 1. Qualitative phase diagram of the Rydberg dressed gas as a function of the magnetic length $l_B$ and the interaction parameter $g'$. For $l_B \to \infty$ (thus $B \to 0$), the system displays a superfluid (SF), supersolid (SS) or crystal (C) state. For interactions $g' < g_o$, by decreasing $l_B$, the system presents first a cross-over into the lowest Landau level triangular vortex lattice (LLL Tr. VL) (at $l_B \sim \xi/\alpha_\xi$), then a transition to the QH regime for $l_B \sim l_L/\alpha_L$. For $g' > g_o$, instead, the LLL vortex lattice phase disappears and the QH regime is reached for lower values of the filling factor. The thin red lines are lines at constant filling factor.

## IV. LOW-DENSITY RYDBERG-DRESSED GASES

To understand the effect of the long-range interaction on the onset of the QH regime, let us analyze the main changes in the phase diagram of the rotating condensate (Fig. 1). For weak or no Rydberg dressing, the phase diagram can be intuitively understood from the comparison of three distinct length scales [6, 7, 24]: the magnetic length $l_B$, the superfluid healing length $\xi = \hbar/\sqrt{2mng'} \propto (g')^{-1/4}$ and the Lindemann length [7, 22] $l_L \approx \sqrt{1/\pi n} \propto (g')^{1/4}$. By increasing $B$, the system evolves from the pure superfluid phase with $l_L/\alpha_L < \xi/\alpha_\xi < l_B$, to the vortex lattice phase in the LLL regime with $l_L/\alpha_L < l_B < \xi/\alpha_\xi$, to the QH phase where $l_B < l_L/\alpha_L < \xi/\alpha_\xi$ (left side of Fig. 1). Here $\alpha_\xi \approx 0.3$ [3, 31, 32] is the ratio $\xi/l_B$ at the crossover to the LLL regime [33], whereas $\alpha_L \approx 0.4$ is the Lindemann parameter [7, 22], corrected by the geometrical factor for triangular lattices. In particular, the relation $l_L/\alpha_L = l_B$ provides the estimate $\nu_{c,0} \approx 14$ which, however, must be corrected to account for collective modes of the vortices [23], resulting in $\nu_{c,0} \approx 8$ [7]. Even lower values, $\nu_{c,0} \lesssim 6$, are suggested by numerical works [24, 25]; therefore, we introduce an effective Lindemann factor $\alpha'_L = \sqrt{2/\nu_{c,0}}$.

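With the stated definitions ($l_L = \sqrt{1/\pi n}$, $l_B = \sqrt{\hbar/B}$ and $\nu = 2\pi\hbar n/B$), the melting condition $l_L/\alpha_L = l_B$ translates into $\nu_c = 2/\alpha_L^2$, which is the relation that the effective Lindemann factor $\alpha'_L = \sqrt{2/\nu_{c,0}}$ inverts. A two-line check (note that the rounded $\alpha_L = 0.4$ gives $\nu_c \approx 12.5$, of the order of the quoted $\approx 14$):

```python
import math

def nu_c_from_lindemann(alpha_L):
    # l_L/alpha_L = l_B with l_L = sqrt(1/(pi n)) and l_B = sqrt(hbar/B),
    # combined with nu = 2 pi hbar n / B, gives nu_c = 2 / alpha_L**2.
    return 2 / alpha_L**2

print(nu_c_from_lindemann(0.4))   # ~12.5, of the order of the quoted ~14
print(math.sqrt(2 / 8))           # effective alpha'_L = 0.5 for nu_c0 = 8
```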
The Rydberg interaction modifies this scenario because it decreases the ratio $\xi/l_L$ by the factor $f_\nu$. Hence, for sufficiently large mixing angles, the LLL vortex lattice phase is suppressed and additional supersolid phases may appear [34]. Therefore, for $\xi/\alpha_\xi < l_L/\alpha'_L$ the usual Lindemann criterion cannot be applied for the onset of the QH phase. A new estimate of $\nu_c$, though, can be obtained by imposing $l_B < \xi/\alpha_\xi$, which results in

$$\nu_c = \frac{\pi\hbar^2}{mg'\alpha_\xi^2}, \quad \text{for } \xi/\alpha_\xi < l_L/\alpha'_L. \qquad (8)$$

Globally, the ratio $\nu/\nu_c$ can be minimized by interactions such that $\xi/\alpha_\xi \approx l_L/\alpha'_L$, hence for an optimal value $g_o$ of the parameter $g'$ given by:

$$g_o \equiv \frac{\pi\hbar^2}{m\nu_{c,0}\alpha_\xi^2}. \qquad (9)$$

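Eq. (9) can be evaluated directly. With $\alpha_\xi = 0.3$ and the $^{87}$Rb mass, the sketch below reproduces the quoted window of $g_o$ up to the rounding of $\alpha_\xi$:

```python
import math

h = 6.62607015e-34     # Planck constant, J s
hbar = h / (2 * math.pi)
m = 1.443e-25          # 87Rb mass, kg
alpha_xi = 0.3

def g_o(nu_c0):
    """Optimal interaction amplitude, Eq. (9), in units of h * Hz * um^2."""
    g_si = math.pi * hbar**2 / (m * nu_c0 * alpha_xi**2)   # J m^2
    return g_si / h * 1e12                                 # convert to h * Hz um^2

# ~290 to ~680 h*Hz*um^2 for nu_c0 between 14 and 6,
# consistent with the quoted h * 280-640 Hz um^2 window
print(f"g_o: {g_o(14):.0f} to {g_o(6):.0f} h*Hz*um^2")
```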
FIG. 2. Filling factor as a function of the mixing angle $\theta$ for $\delta = 20$ MHz and $N = 15000$. The solid lines correspond to rotating gases with $\Omega_{tr} = 2\pi \cdot 100$ Hz and different values of $\gamma$. The green dot-dashed line is an estimate of $\nu$ for optically induced magnetic fields based on [45], obtained with counter-propagating Gaussian beams (waist $w = 50$ µm, wavelength $\lambda = 790$ nm) for $\Omega_{tr} = 2\pi \cdot 20$ Hz. The red dashed line represents the critical filling factor assuming $\nu_{c,0} = 8$; its cusp determines the optimal interaction point at $\theta \approx 0.05$.

$g_o$ ranges in $h \cdot 280$–$640$ Hz µm² for $\nu_{c,0} = 14$–$6$. For the van der Waals interactions of the $^{87}$Rb state $43S$, the condition $\xi/\alpha_\xi \approx l_L/\alpha'_L$ is met for mixing angles $\theta \approx 0.04$–$0.06$ (see Fig. 2). For $g' = g_o$ and $B$ such that $\nu = \nu_{c,0}$, the system lies at a critical point that separates four different phases (see Fig. 1): the triangular vortex lattice in the LLL regime, appearing for $g' < g_o$ at constant $B$; the QH regime, obtained for $g' = g_o$ by increasing $B$; the superfluid vortex lattice for smaller values of $B$; and a strongly interacting phase for $g' > g_o$.

The interaction amplitude $g_o$ lies at the edge of the regime of validity of the mean-field energy estimate: the effective scattering length is $a_s \approx 0.83$ µm for $\theta = 0.05$ (see App. B), to be compared with the average interatomic distance of about 0.7 µm for $\gamma = 0.98$ and $\Omega_{tr} = 2\pi \cdot 100$ Hz. In proximity to $g_o$, the gas reaches a regime that can no longer be considered ultradilute, and the breakdown of the mean-field approximation indeed signals that the superfluid is approaching an unstable point.

We also observe that, for the parameters adopted in Fig. 2, in proximity to the critical point at $\theta = 0.05$ the ratio of the first two Haldane pseudopotentials [19, 35] is $V_2/V_0 \approx 0.22$, close to the critical value 0.20 identified in Ref. [36] for the transition between a triangular and a square vortex lattice [37]. This suggests the onset of several phases with broken translational invariance for $g' > g_o$ and is consistent with the behavior of the system at both strong and weak magnetic fields.

Concerning strong magnetic fields and interactions ($g' > g_o$), the healing length decreases and, with it, the extent of the QH phase. For intermediate filling factors, this gives rise to several inhomogeneous phases, including stripe and bubble states [36]. In the extreme regime with low filling factors ($\nu \lesssim 1$), it is known that a competition between QH states and Wigner crystals appears [19, 20].

Concerning weak magnetic fields (upper part of Fig. 1), the system is no longer in the LLL regime and different mean-field ansätze are required. The strong Rydberg interactions cause a spontaneous breaking of translational symmetry, corresponding to supersolid and crystal phases [34, 38–40]. Mean-field analyses [38, 40, 41] estimate the onset of the crystal phase based on the dispersion of the superfluid roton. The roton gap closes, signaling an instability towards a crystalline phase, when the interaction energy density $u = 3\sqrt{3}mn a^2 g'/h^2$ reaches a critical value $u_c \gtrsim 40$, corresponding to $g' > g_r \approx 1600 \cdot 8\pi h^4 / (27m^3 a^4 N \Omega_{tr}^2) \approx h \cdot 2.5$ kHz µm² (for 15000 particles with $\Omega_{tr} = 2\pi \cdot 100$ Hz). Additionally, in our regime of interest ($a^2n > 1$), a supersolid phase is expected for intermediate interactions with $30 \lesssim u \lesssim 40$ [40] (see also the recent experiments [12–15] in elongated dipolar clouds). By increasing $B$, the critical interaction behaves non-monotonically [34, 42] and it is hard to extrapolate the behavior of the system for $g' > g_o$: a probable scenario is that several phases with broken translational symmetry alternate for large interactions.

Our estimates for the melting point of the LLL vortex lattice rely on the Gross–Pitaevskii approximation of the gas density. We emphasize that such mean-field estimates of the density and energy of superfluid systems in proximity to the breakdown of superfluidity have been successfully used in many different cases. An example is given by superfluid–Mott phase transitions, where the Gross–Pitaevskii equation provides reasonable results for the energy and Hamiltonian parameters even in the presence of strong quantum fluctuations [43]. In the case of long-range interactions, these mean-field analyses give a fair estimate of the superfluid breakdown and even a reasonable estimate of the energy beyond the transition point from superfluid to supersolid [41]. Therefore we expect Eq. (6) to capture the main physical features of the system also in proximity to the melting of the LLL vortex lattice.

## V. THE OPTIMAL FILLING FACTOR

By comparing the density of atoms obtained from Eq. (6) with the artificial magnetic field, the filling factor (Eq. (1)) reads

$$ \nu = \frac{N}{4\gamma} \sqrt{\frac{2\pi(1-\gamma^2)\hbar^2}{Nm g' + 2\pi\hbar^2}}. \quad (10) $$

From Eqs. (8) and (10) we derive its optimal value

$$ \nu_o = \frac{N}{2\gamma} \sqrt{\frac{(1-\gamma^2)\alpha_\xi^2}{N\alpha_L'^2 + 4\alpha_\xi^2}} = \frac{N}{2\gamma} \sqrt{\frac{(1-\gamma^2)\alpha_\xi^2}{2N/\nu_{c,0} + 4\alpha_\xi^2}}. \quad (11) $$

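Eq. (11) is straightforward to evaluate. For instance, with $\alpha_\xi = 0.3$, $\nu_{c,0} = 8$, $\gamma = 0.95$ and $N = 5000$ one obtains $\nu_o \approx 7 < \nu_{c,0}$, consistent with the atom numbers discussed next:

```python
import math

def nu_o(N, gamma, nu_c0, alpha_xi=0.3):
    """Optimal filling factor, Eq. (11)."""
    return (N / (2 * gamma)) * math.sqrt(
        (1 - gamma**2) * alpha_xi**2 / (2 * N / nu_c0 + 4 * alpha_xi**2)
    )

# At gamma = 0.95 and nu_c0 = 8, a cloud of N = 5000 still sits below
# the critical filling, so the QH regime is reachable.
print(f"nu_o = {nu_o(5000, 0.95, 8):.2f}")
```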
FIG. 3. Filling factor at the optimal interaction point (Eq. (11)) as a function of the atom number $N$, assuming $\nu_{c,0} = 8$ [panel (a)] and $\nu_{c,0} = 2$ [panel (b)]. The critical filling is depicted by dashed lines and the system reaches the QH regime for $\nu_o < \nu_{c,0}$. For $\nu_{c,0} = 8$, the Rydberg-dressed gas reaches the QH regime over a large range of values of $N$ for rotation frequencies with $\gamma > 0.95$. For $\nu_{c,0} = 2$, the QH regime is reached for systems with $N \lesssim 2000$ at $\gamma = 0.99$. In (b), the area to the right of the dotted black line corresponds to systems with at least 200 magnetic fluxes.

In Fig. 3(a) we display $\nu_o$ obtained from $\nu_{c,0} = 8$ [5, 7, 23]. Under this assumption, the QH regime is reached at $\gamma = 0.95$ for clouds of $N \lesssim 5000$ atoms. Based on the previous interaction parameters, the optimal mixing angle corresponds to $\theta \approx 0.05$. Numerical studies [24, 25] suggest that the critical filling factor can be smaller; it is therefore useful to consider also more restrictive values of $\nu_{c,0}$: in Fig. 3(b) we present the values of $\nu_o$ for $\nu_{c,0} = 2$. Since we aim at obtaining the QH regime for a mesoscopic gas, we included a constraint on the number $N_V$ of magnetic fluxes in the system: the dotted black line corresponds to the limit of 200 fluxes. From the plot we see that the QH regime is within reach for a gas of 500 atoms in a trap with $\gamma = 0.97$ when the dressing is chosen close to the optimal point, at $\theta \approx 0.07$ for the considered Rydberg state $43S$. Larger numbers of atoms are sustainable for larger $\gamma$.

To realize the considered rotating regime, the atoms must first be loaded in a magnetic trap; then, the rotation can be imparted through a deformation of the trap (see App. A) and $N$ can be varied through evaporative techniques [3, 44]. Finally, the Rydberg dressing is switched on. The decay time of the Rydberg state $43S$ at 300 K is $\tau_{43} \approx 42$ µs [27], resulting in a decay time of the dressed state $\tau \approx \tau_{43}/\sin^2(\theta/2) \approx 67$ ms for $\theta = 0.05$. This must be compared with the rotation period of about 10 ms. To increase the decay time $\tau$, more excited Rydberg states can be considered, resulting in a larger $\tau_n$ and a smaller mixing angle at the optimal point.

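The dressed-state lifetime quoted above follows from scaling the bare decay rate by the Rydberg admixture $\sin^2(\theta/2)$:

```python
import math

tau_43 = 42e-6                             # 43S lifetime at 300 K, s [27]
theta = 0.05                               # mixing angle at the optimal point

tau = tau_43 / math.sin(theta / 2) ** 2    # dressed-state decay time
print(f"tau = {tau*1e3:.0f} ms")           # ~67 ms, vs a rotation period of ~10 ms
```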

## VI. OPTICALLY INDUCED MAGNETIC FIELDS

Our estimate of the filling factor (10) is based on artificial magnetic fields obtained through rotation. Different approaches, based on fully optical setups, have also been successfully applied to bosonic gases [8, 9], for example exploiting position-dependent Raman couplings [45]. These setups, however, are usually less convenient for reaching small filling factors: $B$ is typically proportional to $(\lambda w)^{-1}$, with $\lambda \approx 790$ nm the wavelength of the Gaussian Raman lasers and $w \sim 170$ µm their waist [9]; therefore, to obtain a magnetic field approximately uniform over distances comparable with $s$, the typical value of $B$ is half of the value considered in the rotating case. Furthermore, optical setups do not present the centrifugal reduction of the harmonic potential, such that the factor $\sqrt{1-\gamma^2}$ disappears from Eq. (10), thus increasing the resulting $\nu$ for the same $\Omega_{tr}$. In Fig. 2 we compare the rotating gases with a system with optically induced fields for realistic parameters; for the optical realization we considered a lower trapping frequency to compensate for the missing centrifugal term.

## VII. CONCLUSIONS

We have shown that a combination of Rydberg dressing and rotating traps can drastically reduce the filling factors $\nu$ obtained in rotating Rb gases. Even considering the worst-case scenario of a critical filling factor $\nu_{c,0} = 2$, our estimates show that the quantum Hall regime can be reached for 2D gases of about 1000 atoms by introducing an optimal Rydberg dressing at $\theta \approx 0.07$. To increase the efficiency of this scheme, and thus the number of atoms in the system, our proposal can additionally be combined with techniques for preparing many-body states with large angular momentum [46, 47].

## ACKNOWLEDGMENTS

We warmly thank N. Cooper, L. Fallani, G. Juzeliūnas, T. Macrí, C. Pethick and S. Simon for useful discussions. M. B. is supported by the Villum Foundation (Research Grant No. 25310). I. L. acknowledges support from EPSRC [Grant No. EP/N03404X/1] and from the “Wissenschaftler-Rückkehrprogramm GSO/CZS” of the Carl-Zeiss-Stiftung and the German Scholars Organization e.V.

# Appendix A: Rydberg dressed atoms in a rotating frame and the Gross-Pitaevskii energy

The system we analyze relies on the combination of three elements: a quadratic trapping potential, a rotation-induced artificial gauge potential, and the Rydberg dressing necessary to obtain strong repulsive interactions.

To realize such a system, we consider $^{87}$Rb atoms and select the 5S ground state $|g\rangle = |5S_{1/2}, F = 2, m_F = 2\rangle$. As discussed in [27, 48], such a state can be unambiguously dressed with a Rydberg excited state with the same $F$ and $m_F$ quantum numbers through a two-photon process via a 5P state. In this way, transitions to Rydberg states with $F' = 1$ are forbidden by the selection rules of the excitation scheme considered. In particular, for our numerical estimates we considered the state $|e\rangle = |43S_{1/2}, F = 2, m_F = 2\rangle$, but states with larger $n$ can be considered as well.

The states $|g\rangle$ and $|e\rangle$ share all the angular quantum numbers and, consequently, the same magnetic moment. This makes it possible to trap both states in the same magnetic trap, following, for example, the techniques adopted in [3, 44, 49] (see also [50] for different trapping schemes for Sr atoms). We considered, in particular, a time-averaged orbiting potential with an effective frequency $\Omega_{tr} = 2\pi \cdot 100$ Hz. A large angular momentum can be imparted to the cloud by elliptically deforming the magnetic trap in the horizontal plane and suddenly changing the angle of the deformation [44]. The axial symmetry is then restored. The effect of the rotation on the motion of the center of mass of the atoms is to introduce an effective vector potential $\vec{A} = m\Omega_{rot}(y, -x)$. In the following, we assume that the Rydberg dressing is created through lasers propagating along the rotation axis and centered with respect to the trap, such that they do not explicitly break rotational symmetry. These lasers can be turned on after the system is put in rotation, and we remark that Doppler effects are negligible for realistic rotation frequencies.

In the rotating frame, the single-atom Hamiltonian reads

$$
\begin{aligned}
H_{\text{RF}} &= R(t)\left(H_{\text{kin}} + \frac{m}{2}\Omega_{\text{tr}}^2 r^2 + H_{\text{dress}}\right)R^{\dagger}(t) - iR(t)\partial_t R^{\dagger}(t) = \\
&= \left[ \frac{-\hbar^2 \vec{\nabla}^2}{2m} - \Omega_{\text{rot}} L_z + \frac{m\Omega_{\text{tr}}^2 r^2}{2} \right] \mathbb{I} + \begin{pmatrix} E_R - J_r \Omega_{\text{rot}} & \Omega (e^{i\omega t} + e^{-i\omega t}) e^{ik_l z} e^{i\Omega_{\text{rot}} t(J_r - J_0)} \\ \Omega^* (e^{i\omega t} + e^{-i\omega t}) e^{-ik_l z} e^{-i\Omega_{\text{rot}} t(J_r - J_0)} & E_0 - J_0 \Omega_{\text{rot}} \end{pmatrix}
\end{aligned}
\quad (\text{A1})
$$

where $R(t) = e^{i\Omega_{\text{rot}}t(J_z+L_z)}$, with $L_z$ the orbital angular momentum of the atom center of mass and $J_z$ the total inner angular momentum of the atom. $\omega$ is the laser frequency, $J_{r/0}$ are the eigenvalues of $J_z$ of the Rydberg and ground states, $E_{R/0}$ are the energies of the Rydberg and ground states and $r$ is the radial coordinate. For the states we considered $J_r = J_0$, but this is not a necessary requirement to obtain the effective Hamiltonian, and it can be relaxed for different trapping techniques. The kinetic part of the Hamiltonian can be recast in the form of a particle in the artificial gauge potential $\vec{A}$. We apply a rotating-wave approximation and we obtain

$$ H_{\text{RF}} = \left[ \frac{1}{2m}(\vec{p} + \vec{A})^2 + \frac{m}{2}(\Omega_{\text{tr}}^2 - \Omega_{\text{rot}}^2)r^2 \right] \mathbb{I} + \begin{pmatrix} 2\delta & \Omega e^{ik_l z} \\ \Omega^* e^{-ik_l z} & 0 \end{pmatrix} \quad (\text{A2}) $$

with the detuning $2\delta = E_R - E_0 - \omega$. Indeed, the unitary mapping $U_{RW}(t) = e^{i[\omega t(\frac{\sigma_z}{2}+1) - \Omega_{rot}tJ_z + E_0t]}$, needed to apply the rotating-wave approximation, completely erases the effect of the physical rotation $e^{iJ_z\Omega_{rot}t}$ due to the inner degrees of freedom. The rotation of the spin has effects only beyond the rotating-wave approximation, in the off-resonant term. The matrix on the right-hand side of Eq. (A2) indeed defines the Rydberg dressing that we adopted to obtain the state $|\tilde{g}\rangle$. In this way, within the rotating-wave approximation, we can effectively consider dressed atoms in the state $|\tilde{g}\rangle$ whose dynamics is correctly described by the effective vector potential $\vec{A}$ and by the van der Waals interaction.

Based on this effective single-particle Hamiltonian, and by considering the long-range interaction in the spirit of [28], we can write the Gross–Pitaevskii energy in the following form

$$
\begin{aligned}
E = & \int d\vec{r}\, \psi_s^*(\vec{r}) \frac{(-i\hbar\vec{\nabla} + \vec{A})^2}{2m} \psi_s(\vec{r}) + \frac{m\Omega_{\text{tr}}^2(1-\gamma^2)r^2}{2} |\psi_s(\vec{r})|^2 + \\
& \int d\vec{r}\, d\vec{r}'\, |\psi_s(\vec{r})|^2 \left[ \frac{g}{2}\delta(\vec{r}-\vec{r}') + \frac{V_6}{a^6 + |\vec{r}-\vec{r}'|^6} \right] |\psi_s(\vec{r}')|^2
\end{aligned}
\quad (\text{A3})
$$

In the following, we show that the main effect of the vortex lattice modulation $p$ introduced in $\psi_s$ is to cancel the contribution of the artificial gauge potential $\vec{A}$ to the kinetic energy. Therefore the kinetic energy can be approximated by the one obtained without the gauge potential, based on a non-modulated Gaussian wavefunction, thus giving the
---PAGE_BREAK---
first term in Eq. 5 of the main text. This will be shown based on a suitable Chern–Simons transformation [51]. The integral for the long-range interaction energy in Eq. 5 of the main text, instead, is obtained by considering the relative coordinate $\vec{r} - \vec{r}'$ and integrating over its orientation.
Let us focus on the kinetic energy term in $H_{RF}$. As we mentioned in the main text, our ansatz for the many-body wavefunction corresponds to $\psi_s(\vec{r}) = p(\vec{r})\psi_0(r)$, where $\psi_0(r) = e^{-\frac{r^2}{2s^2}}\sqrt{N/\pi s^2}$ is the normalized Gaussian envelope. We adopted a Gaussian profile for simplicity: alternative approaches based on the interpolation between Gaussian and Thomas-Fermi profiles [52, 53] would provide analogous results, with a slightly lower average density. Therefore, for the purpose of estimating the onset of the QH regime, the Gaussian ansatz gives more restrictive estimates. The function $p(\vec{r})$ defines instead a hexagonal unit cell with area $2\pi l_B^2$. The average value of its norm is 1, such that the average density of the system is approximately set only by its long-distance behavior, thus by $\psi_0$. Since we are mostly interested in the behavior in the LLL regime, we assume the following analytical form for the periodic function $p$
$$p(z) = \frac{\prod_{v \in \text{V.L.}} (z - \eta_v)}{\mathcal{N}}, \quad (A4)$$
where the coordinate $z = x + iy = re^{i\phi}$ and $\eta_v$ is the complex coordinate $x_v + iy_v$ of the vortex $v$ belonging to the vortex lattice. The product is taken over all the vortices in the lattice. The normalization factor $\mathcal{N}$ is chosen such that
$$\frac{1}{\mathcal{A}} \int_{\text{u.c.}} d\vec{r} |p(\vec{r})|^2 = 1, \quad (A5)$$
where $\mathcal{A} = 2\pi\hbar/B$ is the area of the unit cell of the triangular vortex lattice in the LLL regime.
To show that the main effect of the vortex lattice modulation $p(z)$ is to cancel the artificial gauge potential in the kinetic energy term of the Gross-Pitaevskii equation we apply the following Chern-Simons transformation (see, for example, the review [54], and Ref. [55] for the application of the Chern-Simons transformation to vortex systems):
$$\psi'(z) = e^{-i \sum_{v \in \text{V.L.}} \arg(z - \eta_v)} \psi(z). \quad (A6)$$
The Chern–Simons phase is the inverse of the phase of $p(z)$, such that $\psi'$ is a real-valued function. This transformation is just a phase change that leaves the density $|\psi|^2$ invariant, and must be treated as a gauge transformation. For the transformed wavefunction $\psi'$ the kinetic energy term reads:
$$E_{\text{kin}} = \frac{1}{2m} \int d\vec{r} \psi'^{\dagger}(\vec{r}) (-i\hbar\vec{\nabla} + \vec{A} - \vec{a})^2 \psi'(\vec{r}), \quad (A7)$$
where we introduced the Chern–Simons potential:
$$\vec{a} = -\hbar \vec{\nabla} \left[ \sum_{v \in \text{V.L.}} \arg(z - \eta_v) \right] = \hbar \sum_{v \in \text{V.L.}} \frac{\hat{x}(y-y_v) - \hat{y}(x-x_v)}{(x-x_v)^2 + (y-y_v)^2}. \quad (A8)$$
This potential corresponds to a magnetic field $b$ which vanishes everywhere, except at the positions of the vortices
$$\vec{b}(\vec{r}) = \vec{\nabla} \times \vec{a} = 2\pi\hbar\hat{z} \sum_{v \in \text{V.L.}} \delta(\vec{r} - \vec{r}_v), \quad (A9)$$
where $\vec{r}_v = (x_v, y_v)$ is the position of the vortex $v$. The average value of the amplitude of the field $\vec{b}$ is thus given by the density of the vortices times the flux $2\pi\hbar$ carried by each vortex
$$\bar{b} = 2\pi\hbar B / (2\pi\hbar) = B. \quad (A10)$$
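As a quick numerical sanity check (an illustrative script, not part of the derivation), one can verify that the winding of $\arg(z - \eta_v)$ around a closed loop is $2\pi$ when the loop encloses the vortex and $0$ otherwise, so each vortex contributes exactly one flux quantum $2\pi\hbar$ to the average field:

```python
import numpy as np

def winding(center, eta, radius=1.0, n=4000):
    """Total change of arg(z - eta) along a circular loop centered at `center`."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    z = center + radius * np.exp(1j * t)
    # unwrap removes the 2*pi jumps so the accumulated phase can be read off
    phase = np.unwrap(np.angle(z - eta))
    return phase[-1] - phase[0]

eta = 0.3 + 0.2j            # vortex position (arbitrary illustrative value)
w_in = winding(0.0, eta)    # loop encloses the vortex -> winding 2*pi
w_out = winding(5.0, eta)   # loop far from the vortex -> winding 0
```

The loop integral of $\vec{a}$ is $\hbar$ times this winding, consistent with the delta-function field of Eq. (A9).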
We conclude that, on average, the contribution of the phase of the vortices cancels the gauge potential $\vec{A}$, corresponding to the fact that each vortex carries a quantum of angular momentum. Therefore we approximate the kinetic energy of the system by
$$E_{\text{kin}} \approx -\frac{\hbar^2}{2m} \int d\vec{r} \psi'^{\dagger} \vec{\nabla}^2 \psi' = \frac{\hbar^2}{2m} \int d\vec{r} [|p(\vec{r})|^2 (\vec{\nabla}\psi_0)^2 - \psi_0^2 |p(\vec{r})| \vec{\nabla}^2 |p(\vec{r})|]. \quad (\text{A11})$$
To evaluate this expression, we apply the so-called averaged vortex approximation [30]: for a system in a strong magnetic field such that $s \gg l_B$ and the area of the unit cell of the vortex lattice is much smaller than the system
---PAGE_BREAK---
size, we can separate the rapidly oscillating contributions proportional to $p$ from the global Gaussian contribution of $\psi_0$. In particular we consider that $|p|^2$ averages to 1 whereas $-|p|\vec{\nabla}^2|p|$ averages to a constant $c$ which depends on $B$ only. We finally obtain:
$$E_{\text{kin}} \approx \frac{\hbar^2}{2m} \int d\vec{r} \left[ (\vec{\nabla}\psi_0)^2 + \psi_0^2 c \right] = \frac{\hbar^2 N}{2ms^2} + \frac{\hbar^2 N c}{2m}. \quad (\text{A12})$$
The constant $c$ does not depend on the parameter $s$; therefore the corresponding term can be dropped in Eq. (5) because it has no effect on the estimate (6).
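The remaining Gaussian term can be checked directly (an illustrative sympy computation, not part of the original text): for the envelope $\psi_0(r) = e^{-r^2/2s^2}\sqrt{N/\pi s^2}$, the 2D kinetic integral indeed gives $\hbar^2 N / 2ms^2$:

```python
import sympy as sp

r, s, N = sp.symbols('r s N', positive=True)

# Normalized 2D Gaussian envelope psi_0(r) = sqrt(N/(pi s^2)) * exp(-r^2/(2 s^2))
psi0 = sp.sqrt(N / (sp.pi * s**2)) * sp.exp(-r**2 / (2 * s**2))

# Normalization: integral of |psi_0|^2 over the plane equals N
norm = sp.integrate(2 * sp.pi * r * psi0**2, (r, 0, sp.oo))

# Radial kinetic integral: int d^2r (grad psi_0)^2, which multiplies hbar^2/2m
kin = sp.integrate(2 * sp.pi * r * sp.diff(psi0, r)**2, (r, 0, sp.oo))
```

Here `norm` simplifies to $N$ and `kin` to $N/s^2$, reproducing the first term of Eq. (A12).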
A similar separation between slowly and rapidly varying contributions applies to the estimate of the trapping energy (the second term in Eq. (5)). In this case, the rapidly oscillating modulation $|p|^2$ averages to 1, thus leaving only the result related to the Gaussian envelope. Concerning the contact interaction, instead, the role of the modulation is to increase the effective interaction $g$ to $bg$ with $b = \int_{u.c.} d\vec{r}|p(\vec{r})|^4 \approx 1.1596$. This holds in the LLL approximation, whereas for weaker magnetic fields $B$, thus smaller densities of vortices, the factor $b$ decreases to 1 in the limit $B \to 0$.
The averaged vortex approximation also allows us to show that the parameter $\alpha_\xi$ does not depend on the long-range Rydberg interactions. Indeed, we observe that $\alpha_\xi$ is related to the ratio $2\alpha_\xi^2$ between the vortex core area and the unit cell area at the cross-over between the superfluid and LLL regimes. In the superfluid phase, $p$ does not match the analytic function (A4). Its profile $|p|$ and the size of the vortex core can be determined by minimizing the kinetic and interaction energy of the superfluid in the unit cell [32]. In the limit $s \gg l_B$, the core area is essentially independent of the long-range Rydberg interactions, since their contribution to the energy is independent of $p$. This is for two reasons: (i) the leading contribution of the interaction energy is provided by the product of densities in well-separated unit cells, such that these densities, effectively, average to the value provided by the slowly varying Gaussian envelope and are not affected by $p$; (ii) the residual contribution of the density-density interaction within the same cell is independent of $p$ because the interaction profile is flat at short distances and the Rydberg radius $a$ in our regime of interest is typically larger than $l_B$; thus this contribution results in:
$$\int_{\text{u.c.}} d\vec{r} d\vec{r}' |p(\vec{r})|^2 |p(\vec{r}')|^2 V_{\text{Rydberg}}(\vec{r}-\vec{r}') \approx V_6 \mathcal{A}^2 / a^6, \quad (\text{A13})$$
which is independent of $p$. Therefore, we conclude that only the short-range contact interactions influence the profile $p$, and the value of $\alpha_\xi$ is not affected by the long-range interactions.
## Appendix B: Scattering length of Rydberg-dressed atoms
We estimate here the scattering length determined by the van der Waals effective interaction. Our starting point is the Lippmann-Schwinger equation for the scattering state $\chi_k(\vec{r})$. For the scattering in a 3D system with isotropic interactions,
$$\chi_{\vec{k}}(\vec{r}) = \chi_{0,k}(\vec{r}) - \frac{m_r}{2\pi\hbar^2} \int d^3\vec{r}' \frac{e^{ik|\vec{r}-\vec{r}'|}}{|\vec{r}-\vec{r}'|} \frac{V_6}{a^6 + |\vec{r}'|^6} \chi_{\vec{k}}(\vec{r}'), \quad (B1)$$
where $\chi_{0,\vec{k}}(\vec{r}) = e^{i\vec{k}\cdot\vec{r}}$ is an incoming plane wave, $m_r = m/2$ is the reduced mass and we use the shorthand notation $v = |\vec{v}|$ for the moduli of momentum and position vectors. In particular we express the scattering wavefunction $\chi_{\vec{k}}(\vec{r})$ as
$$\chi_{\vec{k}}(\vec{r}) = e^{i\vec{k}\cdot\vec{r}} + \frac{e^{ikr}}{r} f(\vec{k}', \vec{k}), \quad (B2)$$
where $\vec{k}' = k\hat{r}$ is the outgoing wave-vector and $f$ is the scattering amplitude. From the previous equations we derive
$$f(\vec{k}', \vec{k}) \left[ -\frac{2\pi\hbar^2}{m_r V_6} + \frac{2\pi}{ik} \int_0^\infty dr' \frac{1-e^{2ikr'}}{a^6+r'^6} \right] = 2\pi \int_0^\infty dr' \frac{2r'}{a^6+r'^6} \frac{2\sin(r'|\vec{k}-\vec{k}'|)}{|\vec{k}-\vec{k}'|}. \quad (B3)$$
We consider only the isotropic s-wave component of the scattering amplitude by taking the angular average of the previous equation. In particular, we define
$$f_0 = \frac{1}{4\pi} \int d\Omega_r f(\vec{k}', \vec{k}), \quad (B4)$$
---PAGE_BREAK---
where $\Omega_r$ is the direction of the vectors $\vec{r}$ and $\vec{k}'$. By integrating Eq. (B3) over $\Omega_r$ we obtain
$$f_0 = \frac{-2\pi}{3a^3} \left( \frac{\hbar^2}{m_r V_6} + \frac{2\pi}{3\sqrt{3}a^4} \right)^{-1}. \qquad (B5)$$
Considering the s-wave component of the scattering matrix $S_0 = 1 + 2ikf_0$ and taking the limit $k \to 0$ we obtain the scattering length:
$$a_s = -f_0 = \frac{2\pi}{3a^3} \left( \frac{\hbar^2}{m_r V_6} + \frac{2\pi}{3\sqrt{3}a^4} \right)^{-1}. \qquad (B6)$$
We observe that in the Born approximation the last term in the parenthesis would be neglected, and the result matches the calculation in Ref. [56]. Eq. (B6) results in a scattering length of approximately 0.83 µm for the typical values of the Rydberg dressing described in the main text, close to the optimal point, i.e., for $\theta \approx 0.05$.
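Eq. (B6) is easy to evaluate numerically. The sketch below (illustrative only, with arbitrary units $\hbar = m_r = 1$ and made-up parameter values, not those of the main text) also exhibits the two limits that follow from the formula: for weak coupling the second term in the parenthesis is negligible and $a_s$ reduces to the Born result $\frac{2\pi}{3a^3}\frac{m_r V_6}{\hbar^2}$, while for strong coupling $a_s$ saturates at $\sqrt{3}\,a$:

```python
import math

HBAR = 1.0  # illustrative units: hbar = m_r = 1
M_R = 1.0

def a_s(V6, a):
    """Scattering length of the soft-core potential V6/(a^6 + r^6), Eq. (B6)."""
    return (2 * math.pi / (3 * a**3)) / (
        HBAR**2 / (M_R * V6) + 2 * math.pi / (3 * math.sqrt(3) * a**4)
    )

def a_s_born(V6, a):
    """Born approximation: the second term in the parenthesis is dropped."""
    return (2 * math.pi / (3 * a**3)) * M_R * V6 / HBAR**2
```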

[1] I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. **80**, 885 (2008).

[2] F. Dalfovo, S. Giorgini, L. P. Pitaevskii and S. Stringari, Rev. Mod. Phys. **71**, 463 (1999).

[3] V. Schweikhard, I. Coddington, P. Engels, V. P. Mogendorff and E. A. Cornell, Phys. Rev. Lett. **92**, 040404 (2004).

[4] V. Bretin, S. Stock, Y. Seurin and J. Dalibard, Phys. Rev. Lett. **92**, 050403 (2004).

[5] A. L. Fetter, Rev. Mod. Phys. **81**, 647 (2009).

[6] G. Baym, J. Low Temp. Phys. **138**, 601 (2005).

[7] N. R. Cooper, Adv. Phys. **57**, 539 (2008).

[8] Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman, Nature **462**, 628 (2009).

[9] M. C. Beeler, R. A. Williams, K. Jiménez-García, L. J. LeBlanc, A. R. Perry and I. B. Spielman, Nature **498**, 201 (2013).

[10] J. Dalibard, F. Gerbier, G. Juzeliūnas and P. Öhberg, Rev. Mod. Phys. **83**, 1523 (2011).

[11] N. Goldman, G. Juzeliūnas, P. Öhberg and I. B. Spielman, Rep. Prog. Phys. **77**, 126401 (2014).

[12] L. Tanzi, E. Lucioni, F. Famà, J. Catani, A. Fioretti, C. Gabbanini, R. N. Bisset, L. Santos and G. Modugno, Phys. Rev. Lett. **122**, 130405 (2019).

[13] F. Böttcher, J.-N. Schmidt, M. Wenzel, J. Hertkorn, M. Guo, T. Langen and T. Pfau, Phys. Rev. X **9**, 011051 (2019).

[14] L. Chomaz, D. Petter, P. Ilzhöfer, G. Natale, A. Trautmann, C. Politi, G. Durastante, R. M. W. van Bijnen, A. Patscheider, M. Sohmen, M. J. Mark and F. Ferlaino, Phys. Rev. X **9**, 021012 (2019).

[15] G. Natale, R. M. W. van Bijnen, A. Patscheider, D. Petter, M. J. Mark, L. Chomaz, and F. Ferlaino, Phys. Rev. Lett. **123**, 050402 (2019).

[16] J. Zeiher, P. Schauss, S. Hild, T. Macrì, I. Bloch and C. Gross, Phys. Rev. X **5**, 031015 (2015).

[17] J. Zeiher, R. van Bijnen, P. Schauss, S. Hild, J.-Y. Choi, T. Pohl, I. Bloch and C. Gross, Nat. Phys. **12**, 1095 (2016).

[18] H. Bernien, S. Schwartz, A. Keesling et al., Nature **551**, 579 (2017).

[19] F. Grusdt and M. Fleischhauer, Phys. Rev. A **87**, 043628 (2013).

[20] T. Grass, P. Bienias, M. J. Gullans, R. Lundgren, J. Maciejko and A. V. Gorshkov, Phys. Rev. Lett. **121**, 253403 (2018).

[21] For neutral atoms we choose units for B given by mass/time, such that the magnetic flux has the units of an angular momentum.

[22] A. Rozhkov and D. Stroud, Phys. Rev. B **54**, R12697(R) (1996).

[23] J. Sinova, C. B. Hanna and A. H. MacDonald, Phys. Rev. Lett. **89**, 030403 (2002).

[24] N. R. Cooper, N. K. Wilkin and J. M. F. Gunn, Phys. Rev. Lett. **87**, 120405 (2001).

[25] N. Regnault and T. Jolicoeur, Phys. Rev. B **69**, 235309 (2004).

[26] N. Gemelke, E. Sarajlic and S. Chu, arXiv:1007.2677.

[27] R. Löw, H. Weimer, J. Nipper, J. B. Balewski, B. Butscher, H. P. Büchler and T. Pfau, J. Phys. B: At. Mol. Opt. Phys. **45**, 113001 (2012).

[28] L. Santos, G. V. Shlyapnikov, P. Zoller and M. Lewenstein, Phys. Rev. Lett. **85**, 1791 (2000).

[29] A. Aftalion, X. Blanc and J. Dalibard, Phys. Rev. A **71**, 023611 (2005).

[30] T.-L. Ho, Phys. Rev. Lett. **87**, 060403 (2001).

[31] I. Coddington, P. C. Haljan, P. Engels, V. Schweikhard, S. Tung, and E. A. Cornell, Phys. Rev. A **70**, 063607 (2004).

[32] G. Baym and C. J. Pethick, Phys. Rev. A **69**, 043619 (2004).

[33] $2\alpha_{\xi}^{2} = 0.173-0.225$ is the fractional core area of the vortices estimated in [32] and measured in [3, 31] at the crossover between the normal superfluid and LLL regimes. Its value, as discussed in [32], relies mostly on the short-distance behavior of the interaction and is not influenced by the long-range Rydberg interaction (see Appendix A).

[34] N. Henkel, F. Cinti, P. Jain, G. Pupillo and T. Pohl, Phys. Rev. Lett. **108**, 265301 (2012).

[35] F. D. M. Haldane, Phys. Rev. Lett. **51**, 605 (1983).

[36] N. R. Cooper, E. H. Rezayi and S. H. Simon, Phys. Rev. Lett. **95**, 200402 (2005).

---PAGE_BREAK---

[37] J. Zhang and H. Zhai, Phys. Rev. Lett. **95**, 200403 (2005).

[38] N. Henkel, R. Nath and T. Pohl, Phys. Rev. Lett. **104**, 195302 (2010).

[39] G. Pupillo, A. Micheli, M. Boninsegni, I. Lesanovsky and P. Zoller, Phys. Rev. Lett. **104**, 223002 (2010).

[40] F. Cinti, T. Macrì, W. Lechner, G. Pupillo and T. Pohl, Nature Comm. **5**, 3235 (2014).

[41] T. Macrì, F. Maucher, F. Cinti and T. Pohl, Phys. Rev. A **87**, 061602(R) (2013).

[42] S. Sinha and G. V. Shlyapnikov, Phys. Rev. Lett. **94**, 150401 (2005).

[43] S. Giovanazzi, J. Esteve and M. K. Oberthaler, New J. Phys. **10**, 045009 (2009).

[44] P. Engels, I. Coddington, P. C. Haljan, V. Schweikhard, and E. A. Cornell, Phys. Rev. Lett. **90**, 170405 (2003).

[45] G. Juzeliūnas, J. Ruseckas, P. Öhberg and M. Fleischhauer, Phys. Rev. A **73**, 025602 (2006).

[46] M. Roncaglia, M. Rizzi and J. Dalibard, Sci. Rep. **1**, 43 (2011).

[47] Tin-Lun Ho, arXiv:1608.00074 (2016).

[48] R. Heidemann, U. Raitzsch, V. Bendkowsky, B. Butscher, R. Löw, L. Santos and T. Pfau, Phys. Rev. Lett. **99**, 163601 (2007).

[49] D. S. Jin, J. R. Ensher, M. R. Matthews, C. E. Wieman and E. A. Cornell, Phys. Rev. Lett. **77**, 420 (1996).

[50] A. D. Bounds, N. C. Jackson, R. K. Hanley, R. Faoro, E. M. Bridge, P. Huillery and M. P. A. Jones, Phys. Rev. Lett. **120**, 183401 (2018).

[51] S.-C. Zhang, Int. J. Mod. Phys. B **6**, 25 (1992).

[52] N. R. Cooper, S. Komineas and N. Read, Phys. Rev. A **70**, 033604 (2004).

[53] G. Watanabe, G. Baym and C. J. Pethick, Phys. Rev. Lett. **93**, 190401 (2004).

[54] S. H. Simon, "The Chern-Simons Fermi Liquid Description of Fractional Quantum Hall States", pp. 91-194 in "Composite Fermions", ed. O. Heinonen, World Scientific (1998).

[55] S. V. Iordanski and D. S. Lyubshin, J. Phys.: Condens. Matter **21**, 405601 (2009).

[56] J. Honer, H. Weimer, T. Pfau and H. P. Büchler, Phys. Rev. Lett. **105**, 160404 (2010).

samples/texts_merged/4694300.md
ADDED

@@ -0,0 +1,109 @@

---PAGE_BREAK---

Exercise sheet 9
1. Let $A$ be a DVR with uniformizing parameter $\pi$ (i.e., $\pi$ is a generator of the maximal ideal).
(a) Show that $A$ is of dimension 1.
(b) Deduce that $\dim(A[T]) \ge 2$.¹
(c) Show that the ideal of $A[T]$ generated by the element $f = \pi T - 1 \in A[T]$ is maximal.
(d) Let $Y = V(\langle f \rangle) \subset \text{Spec}(A[T])$. Show that $\dim(A[T]) \neq \dim(Y) + \text{codim}_{A[T]}(Y)$.
(e) For a general noetherian ring $A$ and any closed subset $Y \subseteq \text{Spec}(A)$, show that $\dim(Y) + \text{codim}_A(Y) \le \dim(A)$.
(a) Note that $|\text{Spec}(A)|$ is integral (since $A$ is an integral domain) and $V(\langle \pi \rangle)$ is the unique closed point (since $\langle \pi \rangle$ is the unique maximal ideal). Therefore,
$$ \emptyset \subsetneq V(\langle \pi \rangle) \subsetneq V(\langle 0 \rangle) = |\text{Spec}(A)| $$
is a chain of integral subsets of $|\text{Spec}(A)|$. It is maximal because if $V(\mathfrak{p})$ is a proper integral closed subset containing $V(\langle \pi \rangle)$, then $\mathfrak{p}$ is a nonzero prime ideal contained in $\langle \pi \rangle$. Since $A$ is a principal ideal domain, $\mathfrak{p} = \langle f \rangle$ for some nonzero prime element $f$. Since PIDs are factorial, $f$ is irreducible, and $f \in \langle \pi \rangle$ implies $\langle f \rangle = \langle \pi \rangle$. (To summarize: the maximal ideal is the unique nonzero prime ideal of a DVR.)
(b) We claim that, for any ring $A$ (possibly even non-noetherian), we have $\dim(A[T]) \ge \dim(A) + 1$. Indeed, suppose
$$ \emptyset \subsetneq V(\mathfrak{p}_0) \subsetneq V(\mathfrak{p}_1) \subsetneq \dots \subsetneq V(\mathfrak{p}_n) $$
is a maximal chain of integral closed subsets of $|\text{Spec}(A)|$. Then each extension $\mathfrak{q}_i := \mathfrak{p}_i A[T]$ is a prime ideal of $A[T]$ (since $A[T]/\mathfrak{p}_i A[T] \simeq A/\mathfrak{p}_i[T]$ is an integral domain). Set $\mathfrak{r} := \mathfrak{q}_0 + \langle T \rangle$. This is also a prime ideal of $A[T]$ since $A[T]/\mathfrak{r} \simeq A/\mathfrak{p}_0$, and we have $\mathfrak{q}_0 \subsetneq \mathfrak{r}$. Thus
$$ \emptyset \subsetneq V(\mathfrak{r}) \subsetneq V(\mathfrak{q}_0) \subsetneq V(\mathfrak{q}_1) \subsetneq \dots \subsetneq V(\mathfrak{q}_n) $$
is a chain of integral subsets of $|\text{Spec}(A[T])|$.
(c) Note that $A[\pi^{-1}]$ is the fraction field of $A$. Consider the unique $A$-algebra homomorphism $\phi : A[T] \to A[\pi^{-1}]$ sending $T \mapsto 1/\pi$. Then clearly $\phi$ is surjective, and its kernel is the ideal $\langle \pi T - 1 \rangle$: for any polynomial $f \in A[T]$, we may write $f = g \cdot (\pi T - 1) + r$ where $g \in A[T]$ and $r \in A$ (division algorithm), and then $\phi(f) = f(1/\pi) = r$. It follows that $\phi$ induces an isomorphism $A[T]/\langle \pi T - 1 \rangle \simeq A[\pi^{-1}]$, and in particular the ideal $\langle \pi T - 1 \rangle$ is maximal.
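This computation can be illustrated concretely (a small sympy check, not part of the solution) with the DVR $A = \mathbb{Z}_{(5)}$ and uniformizer $\pi = 5$: dividing any $f$ by $5T - 1$ leaves a constant remainder equal to $f(1/5)$, i.e. the image of $f$ under $\phi$:

```python
from sympy import symbols, div, expand, Rational

T = symbols('T')
p = 5                    # uniformizer of the DVR Z_(5) (illustrative choice)
f = 3*T**3 + 7*T + 2     # an arbitrary polynomial

# Division algorithm over the fraction field: f = q*(p*T - 1) + r, r constant
q, r = div(f, p*T - 1, T)
```

Here `r` equals `f.subs(T, Rational(1, 5))`, exactly as the kernel argument requires.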
¹In fact, one has $\dim(A[T]) = \dim(A) + 1$ for any noetherian ring $A$, but this is non-trivial; see e.g. [Bourbaki, Comm. alg., §3, no. 4, Cor. 3 to Prop. 7].
---PAGE_BREAK---
(d) By (b) we have $\dim(A[T]) \ge 2$. By (c), $Y$ is a closed point, so $\dim(Y) = 0$. But $\text{codim}_{A[T]}(Y) = 1$ by Krull's principal ideal theorem (since $f$ is a non-zero divisor, as $A[T]$ is an integral domain). Hence $\dim(Y) + \text{codim}_{A[T]}(Y) = 1 < 2 \le \dim(A[T])$.
(e) Easy from the definitions: concatenating a chain of integral closed subsets inside $Y$ realizing $\dim(Y)$ with a chain of integral closed subsets containing $Y$ realizing $\text{codim}_A(Y)$ produces a chain in $|\text{Spec}(A)|$ of length $\dim(Y) + \text{codim}_A(Y)$, which is bounded by $\dim(A)$.
**2. Let A be a noetherian ring.**
(a) Show that the codimension of any integral closed subset $V(\mathfrak{p}) \subset |\text{Spec}(A)|$ is given by

$$\text{codim}_A(V(\mathfrak{p})) = \dim(A_{\mathfrak{p}}).$$
(b) Show that the dimension of $A$ is given by the formula
$$\dim(A) = \sup_x \text{codim}_A(\{x\}),$$
where the supremum is taken over all closed points $x$ of $|\text{Spec}(A)|$.
**3. Let A be a noetherian ring. Define a homomorphism**
$$\gamma_A : Z_*(A) \to G_0(A)$$
by sending the class of an integral subset $V(\mathfrak{p})$ to the class $[A/\mathfrak{p}]$.
(a) Let $k$ be an algebraically closed field and $A = k[T, U]$. Show that $\gamma_A$ descends to a homomorphism
$$\gamma_A : \text{CH}_*(A) \to \text{G}_0(A)$$
which is invertible.
(b) Let $A$ be any noetherian ring and $\phi: A \twoheadrightarrow A/I$ a surjective ring homomorphism. Show that the square

$$\begin{array}{ccc} Z_*(A/I) & \xrightarrow{\gamma_{A/I}} & G_0(A/I) \\ \downarrow & & \downarrow \\ Z_*(A) & \xrightarrow{\gamma_A} & G_0(A) \end{array}$$

(with vertical maps the pushforwards $\phi_*$) commutes.
(a) Let's first describe $Z_k(A)$ for all $k$. Since $A$ is of dimension 2, $Z_k(A) = 0$ for $k \ge 3$. Since $A$ is an integral domain and hence irreducible, $Z_2(A)$ is free abelian on the generator $V(\langle 0 \rangle) = |\text{Spec}(A)|$. By definition $Z_1(A)$ is free abelian on the generators $V(\mathfrak{p})$, integral closed subsets of dimension 1, or equivalently² of codimension 1. Finally $Z_0(A)$ is free abelian on integral closed subsets of dimension 0, i.e., closed points of $|\text{Spec}(A)|$, which are in bijection with pairs $(x_1, x_2) \in k^2$ since $k$ is algebraically closed (Sheet 8, Exercise 1).
Note that $R_2(A) = 0$ since there are no 3-dimensional closed subsets of $|\text{Spec}(A)|$, so $\text{CH}_2(A) \cong \mathbb{Z}$.
²Though the equality $\text{codim}(V(\mathfrak{p})) + \dim(V(\mathfrak{p})) = \dim(A)$ does not hold in general (see Sheet 8, Exercise 1), it does hold when $A$ is an integral domain of finite type over a field $k$.
---PAGE_BREAK---
Since $A$ is factorial, given $[V(\mathfrak{p})] \in Z_1(A)$, we may write $\mathfrak{p} = \langle f \rangle$, for some (nonzero) element $f \in A$, by the Lemma in the proof of Sheet 8, Exercise 3. Then we have $\mathrm{div}_{V(0)}(f) = [A/\langle f \rangle]_1 = [V(\langle f \rangle)] = [V(\mathfrak{p})]$ in $Z_1(A)$. Thus every $[V(\mathfrak{p})] \in Z_1(A)$ is rationally equivalent to zero, and $\mathrm{CH}_1(A) \simeq 0$.
Take an element $[V(\mathfrak{m})] \in Z_0(A)$, corresponding to a pair $(x_1, x_2) \in k^2$ (so that $\mathfrak{m} = \langle T - x_1, U - x_2 \rangle$). Then we have
$$
\begin{align*}
\mathrm{div}_{V(\langle T-x_1 \rangle)}(U-x_2) &= [(A/\langle T-x_1 \rangle)/\langle U-x_2 \rangle]_0 \\
&= [A/\mathfrak{m}]_0 = [V(\mathfrak{m})].
\end{align*}
$$
Thus $\mathrm{CH}_0(A) \simeq 0$.
Using these descriptions of the subgroups $R_k(A)$, it is easy to check that $\gamma_A : Z_k(A) \to G_0(A)$ sends $R_k(A)$ to zero. To show that the induced map $\mathrm{CH}_k(A) \to G_0(A)$ is bijective, we can use homotopy invariance of G-theory to observe that $G_0(A) \simeq G_0(k) \simeq \mathbb{Z}$ is free abelian on the single generator $[A]$ (which is the image of $[V(0)] \in \mathrm{CH}_2(A)$).
(b) Let $V_{A/I}(\mathfrak{p})$ be an integral closed subset of $|\mathrm{Spec}(A/I)|$ (where $\mathfrak{p}$ is a prime ideal of $A/I$). Under the bijection $|\mathrm{Spec}(A/I)| \simeq V_A(I)$, it corresponds to the integral closed subset $V_A(\mathfrak{q})$, where $\mathfrak{q} = \phi^{-1}(\mathfrak{p})$ is the contraction of $\mathfrak{p}$. Therefore the clockwise composite sends $[V(\mathfrak{p})]$ to
$$ \gamma_A \phi_* [V_{A/I}(\mathfrak{p})] = \gamma_A [V_A(\mathfrak{q})] = [A/\mathfrak{q}]. $$
Since $(A/I)/\mathfrak{p} \simeq A/\mathfrak{q}$, the counter-clockwise composite is given by
$$ \phi_* \gamma_{A/I} [V_{A/I}(\mathfrak{p})] = \phi_* [(A/I)/\mathfrak{p}] = [A/\mathfrak{q}]. $$
4. Let $A$ be a noetherian ring and let $V(\mathfrak{p})$ and $V(\mathfrak{q})$ be distinct integral closed subsets of $|\mathrm{Spec}(A)|$, both of dimension $d$. Prove the formula
$$ [A/(\mathfrak{p} \cap \mathfrak{q})]_d = [V(\mathfrak{p})] + [V(\mathfrak{q})] $$
in $\mathrm{CH}_d(A)$.
The construction $[-]_d$ is additive in short exact sequences. Apply this to the short exact sequence
$$ 0 \rightarrow A/(\mathfrak{p} \cap \mathfrak{q}) \rightarrow A/\mathfrak{p} \oplus A/\mathfrak{q} \rightarrow A/(\mathfrak{p}+\mathfrak{q}) \rightarrow 0. $$
Observe that $[A/(\mathfrak{p}+\mathfrak{q})]_d = 0$ since $\mathrm{Supp}_A(A/(\mathfrak{p}+\mathfrak{q})) = V(\mathfrak{p}+\mathfrak{q}) = V(\mathfrak{p}) \cap V(\mathfrak{q})$ is of dimension strictly less than $d$ (since $V(\mathfrak{p})$ and $V(\mathfrak{q})$ are distinct).
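The additivity can be seen concretely (a small illustrative script, not part of the exercise) for $A = k[x, y]$ with $\mathfrak{p} = \langle x \rangle$ and $\mathfrak{q} = \langle y \rangle$, so that $\mathfrak{p} \cap \mathfrak{q} = \langle xy \rangle$ and $\mathfrak{p} + \mathfrak{q} = \langle x, y \rangle$: counting the degree-$n$ monomials surviving in each quotient shows that the graded dimensions satisfy $h_{\mathfrak{p} \cap \mathfrak{q}}(n) = h_{\mathfrak{p}}(n) + h_{\mathfrak{q}}(n) - h_{\mathfrak{p}+\mathfrak{q}}(n)$ in every degree, as the short exact sequence predicts:

```python
from itertools import product

def hilbert(gens, n, nvars=2):
    """Dimension of the degree-n part of k[x_1..x_nvars]/I, I a monomial ideal.

    `gens` lists exponent vectors of the monomial generators of I; a monomial
    survives in the quotient iff it is divisible by no generator.
    """
    count = 0
    for exps in product(range(n + 1), repeat=nvars):
        if sum(exps) != n:
            continue
        if any(all(e >= g for e, g in zip(exps, gen)) for gen in gens):
            continue  # monomial lies in I
        count += 1
    return count

p, q = [(1, 0)], [(0, 1)]                      # <x> and <y>
p_cap_q, p_plus_q = [(1, 1)], [(1, 0), (0, 1)]  # <xy> and <x, y>
checks = [hilbert(p_cap_q, n) == hilbert(p, n) + hilbert(q, n) - hilbert(p_plus_q, n)
          for n in range(8)]
```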
|
samples/texts_merged/4729919.md
ADDED

@@ -0,0 +1,576 @@

---PAGE_BREAK---

# The Largest Volume Conjugacy Class in Most Compact Simple Lie Groups
Woody Lichtenstein
June 16, 2021
## Abstract
We provide some details about the largest volume conjugacy class in compact simple Lie groups of types $A_n$, $B_n$, $C_n$, $D_n$, and $G_2$.
## 1 Introduction
Let $G$ be a compact connected simple Lie group with maximal torus $T$. Every element of $G$ is conjugate to an element of $T$, and the conjugacy classes of $G$ are parametrized by $T/W$ where $W$ is the Weyl group. The size of each conjugacy class is given by the Weyl Jacobian $J$ in the Weyl Integration formula
|
| 17 |
+
|
| 18 |
+
$$J : T/W \to \mathbb{R}^+.$$
|
| 19 |
+
|
| 20 |
+
Equivalently, $J$ is a $W$-invariant function
|
| 21 |
+
|
| 22 |
+
$$J : T \to \mathbb{R}^+ \text{ (the non-negative real numbers)}$$
|
| 23 |
+
|
| 24 |
+
vanishing on the singular set of $T$. Since $T$ is compact, $J$ has a maximum on $T/W$. For the groups studied in this paper that maximum is unique and can be described by simple geometric or algebraic properties.
|
| 25 |
+
|
| 26 |
+
## Related work
|
| 27 |
+
|
| 28 |
+
For the classical matrix groups, finding the largest conjugacy class is the same as determining the most likely set of eigenvalues for a random group element. This is a tiny step towards understanding more sophisticated questions about the distribution of eigenvalues of random matrices, which is a large well developed subject. A nice survey with many references is [2]. In particular, section 4 of that paper begins with a discussion of the Weyl Jacobian for unitary groups and includes an explanation of the relation between the unitary Weyl Jacobian and Toeplitz determinants.

## 2 Measuring the Size of a Conjugacy Class

Let $\mathfrak{g}$ be the Lie algebra of $G$, and let $\mathfrak{t}$ be the Lie algebra of $T$. Under the adjoint action of $T$, $\mathfrak{g}$ decomposes into

$$ \mathfrak{g} = \mathfrak{t} \oplus \sum_{i=1}^{m} \mathfrak{l}_i $$

where each $\mathfrak{l}_i$ is a $T$-invariant subspace of real dimension 2.

Restricted to $\mathfrak{l}_i$, the adjoint action of $\exp(H) \in T$ is rotation through an angle $\alpha_i(H)$, where $H \in \mathfrak{t}$ and $\alpha_i : \mathfrak{t} \to \mathbb{R}$ is a linear function. [The complexification $\mathfrak{l}_i \otimes \mathbb{C}$ splits into two $T$-invariant subspaces of complex dimension 1. The eigenvalues for $\exp(H)$ on these subspaces are $\exp(\pm \alpha_i(H)\sqrt{-1})$, where $\alpha_i$ is a positive root of $\mathfrak{g}$ and $-\alpha_i$ is a negative root of $\mathfrak{g}$.]

With respect to the Ad($G$)-invariant definite bilinear form on $\mathfrak{g}$, $\mathfrak{t}^\perp = \sum_{i=1}^m \mathfrak{l}_i$. The $2m$-dimensional volume of the conjugacy class containing $t \in T$ is proportional to the determinant of the derivative of the adjoint action of $G/T$ on $t$. This is a map from $\mathfrak{t}^\perp$ to itself that can be computed as

$$ \frac{d}{ds}\bigg|_{s=0} \exp(sX)\, t \exp(-sX), \text{ for } X \in \mathfrak{t}^\perp, $$

which is just $Xt - tX$.

Expressing $Xt - tX$ as right translation by $t$ of an element of $\mathfrak{t}^\perp$ gives

$$ Xt - tX = (X - tX t^{-1})t. $$

So we need to compute the determinant of the map $X \mapsto (\mathrm{Id} - \mathrm{Ad}(t))X$. This is a product of the determinants of the associated maps from each $\mathfrak{l}_i$ to itself. Each of these associated maps is of the form $\mathrm{Id} - \mathrm{Rotation}(\theta)$ for some angle $\theta$, and that has determinant

$$ \det(\mathrm{Id} - \mathrm{Rotation}(\theta)) = (1 - \cos(\theta))^2 + \sin^2(\theta) = 2(1 - \cos(\theta)) = 4 \sin^2\left(\frac{\theta}{2}\right). $$

Thus the $2m$-dimensional volume of the conjugacy class containing $t \in T$ is proportional to the product over the positive roots $\alpha_i$ of $\sin^2(\alpha_i(\log(t))/2)$. Ignoring constant factors,

$$ V(t) = \prod_{\alpha \in \Delta} \sin^2(\alpha(\log(t))/2), \qquad (2.1) $$

where $\Delta$ is the set of positive roots.
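
Formula (2.1) is easy to evaluate numerically. The sketch below is our own illustration, not part of the paper: it computes $V(t)$ for $SU(n)$, where the positive roots evaluated at $\log(t)$ are $\theta_k - \theta_j$, and shows that $V$ vanishes on the singular set (a repeated eigenvalue) and is positive for regular elements.

```python
import math
from itertools import combinations

def volume_su(thetas):
    """Formula (2.1) for SU(n): the positive roots evaluated at log(t)
    take the values theta_k - theta_j for 0 <= j < k < n."""
    v = 1.0
    for j, k in combinations(range(len(thetas)), 2):
        v *= math.sin((thetas[k] - thetas[j]) / 2.0) ** 2
    return v

# V vanishes when two eigenvalues coincide (singular set) ...
print(volume_su([0.0, 0.0, 2 * math.pi / 3]))           # 0.0
# ... and is positive for a regular element; equally spaced angles give ~27/64.
print(volume_su([0.0, 2 * math.pi / 3, 4 * math.pi / 3]))
```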

## 3 The Special Unitary Group

$SU(n)$ is the group of unitary $n \times n$ matrices with determinant 1. For $SU(n)$, $T/W$ can be parametrized by diagonal matrices

$$t_{\theta} := \operatorname{diag}(e^{i\theta_0}, \dots, e^{i\theta_{n-1}})$$

with

$$0 \le \theta_0 \le \dots \le \theta_{n-1} \le 2\pi$$

and $\theta_0 + \dots + \theta_{n-1} = 2\pi m$ for some integer $m$.

The positive roots $\Delta$ have values at $\log(t_\theta)$ consisting of the set

$$\{\theta_k - \theta_j \mid 0 \le j < k < n\}.$$

The factor $\sin^2((\theta_k - \theta_j)/2)$ is the square of half the straight-line distance in the complex plane between $e^{i\theta_j}$ and $e^{i\theta_k}$. So the volume of the conjugacy class corresponding to the diagonal matrix $t_\theta$ attains its maximum when the $n$-gon inscribed in the unit circle defined by those $n$ vertices maximizes the product of the lengths of all its edges and diagonals.

**Theorem 3.1.** The volume of the conjugacy class corresponding to the diagonal matrix $t_\theta$ is maximized when those $n$ diagonal matrix entries are equally spaced around the unit circle, i.e. when $\theta_k = \pi(2k)/n$ for odd $n$, or $\theta_k = \pi(2k+1)/n$ for even $n$.

*Proof.* Each positive-negative root pair corresponds to a pair of angles $\theta_j, \theta_k$ with $0 \le j \ne k < n$. We choose the root $\theta_k - \theta_j$ for which $r := k - j \bmod n$ falls in the interval $(0, n/2]$.

**Example:** When $n=18$, $j=4$, $k=15$, we pick $\theta_4 - \theta_{15}$ because $4 - 15 \equiv 7 \pmod{18}$ and $7 \le 9$.

Unless $r = n/2$, the root $\theta_k - \theta_j$ is part of a cycle of length $M = n/\gcd(n,r)$ of roots

$$\theta_{j+(m+1)r} - \theta_{j+mr}, \quad 0 \le m < M,$$

where subscript addition is mod $n$. This cycle wraps around the circle $q = r/\gcd(n,r)$ times. Thus we can decompose the set of positive-negative root pairs into disjoint subsets, and correspondingly we can split the product in the formula for $V(t)$ into factors, according to the residue $r$. The main idea of the proof is that maximizing any one of these factors implies that a subset of the eigenvalues should be evenly spaced. Since these conditions are all compatible, we conclude that the maximum of $V(t)$ occurs where all the disjoint factors are simultaneously maximized, and that occurs where all the eigenvalues are evenly spaced. The values of the evenly spaced eigenvalues are then fixed by the condition that their product should be 1.

Now for fixed $j$ and $r$ consider the problem of maximizing

$$ \sin^2\left(\frac{\theta_{j+r} - \theta_j}{2}\right) \cdot \sin^2\left(\frac{\theta_{j+2r} - \theta_{j+r}}{2}\right) \cdots \sin^2\left(\frac{\theta_{j+Mr} - \theta_{j+(M-1)r}}{2}\right) $$

subject to the constraint

$$ (\theta_{j+r} - \theta_j) + (\theta_{j+2r} - \theta_{j+r}) + \dots + (\theta_{j+Mr} - \theta_{j+(M-1)r}) = 2\pi q. $$

Setting $\beta_k = \theta_{j+(k+1)r} - \theta_{j+kr}$, for $k = 0, \dots, M-1$, this is equivalent to maximizing

$$ f(\beta_0, \beta_1, \dots, \beta_{M-1}) = \sin^2(\beta_0/2) \sin^2(\beta_1/2) \dots \sin^2(\beta_{M-1}/2) $$

subject to the constraint

$$ g(\beta_0, \beta_1, \dots, \beta_{M-1}) = \beta_0 + \beta_1 + \dots + \beta_{M-1} = 2\pi q. $$

Replacing $f$ with $\log(f)$, the method of Lagrange multipliers implies that

$$ (\cot(\beta_0/2), \dots, \cot(\beta_{M-1}/2)) $$

must be proportional to $(1, \dots, 1)$, i.e.

$$ \cot(\beta_0/2) = \cot(\beta_1/2) = \dots = \cot(\beta_{M-1}/2) $$

or

$$ \beta_0 = \beta_1 = \dots = \beta_{M-1}. $$

In other words, $\theta_j, \theta_{j+r}, \dots, \theta_{j+(M-1)r}$, $\theta_{j+Mr} = \theta_j$ must be equally spaced. The special case of even $n$, with $r=n/2$, is slightly different, because a single positive-negative root pair makes up a cycle of length 2 that wraps once around the circle. [Example: when $n=18$, $r=9$, $j=4$, the cycle consists of $\theta_{13}-\theta_4$ and $\theta_4-\theta_{13}$.] But the same general principle applies. Set $\beta = \theta_{j+n/2} - \theta_j$. The maximum of $f(\beta) = \sin^2(\beta/2)$ is 1, and occurs at $\beta = \pi$, i.e. when $\theta_j$ and $\theta_{j+n/2}$ are evenly spaced. $\square$
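
The cycle bookkeeping in the proof can be checked mechanically. A small sketch (our own, following the proof's notation): for a residue $r$ with $0 < r \le n/2$, the cycle through index $j$ has length $M = n/\gcd(n,r)$ and wraps $q = r/\gcd(n,r)$ times around the circle.

```python
import math

def cycle_data(n, r):
    """Length M and winding number q of the root cycle for residue r."""
    g = math.gcd(n, r)
    return n // g, r // g

def cycle_indices(n, r, j=0):
    """Indices j, j+r, j+2r, ... (mod n) until the cycle closes."""
    seen, i = [], j
    while True:
        seen.append(i)
        i = (i + r) % n
        if i == j:
            return seen

# The example from the proof: n = 18 and r = 7 (from j = 4, k = 15)
# give a single cycle through all 18 indices, wrapping 7 times.
M, q = cycle_data(18, 7)
print(M, q)                             # 18 7
print(len(cycle_indices(18, 7, j=4)))   # 18
```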

## 4 The Even Orthogonal Group

For $SO(2n)$, $T/W$ can be parametrized by $2 \times 2$ block diagonal rotation matrices with rotation angles $0 \le \theta_1 \le \dots \le \theta_n \le \pi$. The positive roots are $\theta_k - \theta_j$ and $\theta_k + \theta_j$, $1 \le j < k \le n$.

**Lemma 4.1.** $\sin^2((\theta_1 - \theta_2)/2) \sin^2((\theta_1 + \theta_2)/2) = (\cos \theta_1 - \cos \theta_2)^2/4$

*Proof.* Use the half-angle formula for sine and straightforward algebra. $\square$

**Corollary 4.2.** The largest conjugacy class of $SO(2n)$ corresponds to rotation angles $0 \le \theta_1 \le \dots \le \theta_n \le \pi$ for which the real polynomial

$$p(x) = (x - \cos \theta_1)(x - \cos \theta_2) \cdots (x - \cos \theta_n)$$

has the largest discriminant among all real polynomials of degree $n$ with $n$ real roots in the interval $[-1, 1]$.

**Lemma 4.3.** If $p(x) = (x-x_1)(x-x_2)\cdots(x-x_n)$ has the largest discriminant among all real polynomials with real roots $1 \ge x_1 > x_2 > \cdots > x_n \ge -1$, then $x_1 = 1$ and $x_n = -1$.

*Proof.* If $x_1 < 1$, then increasing $x_1$ to 1 increases its distance from all the other roots and hence increases the discriminant. Similarly, if $x_n > -1$, then decreasing $x_n$ to $-1$ increases its distance from all the other roots and hence increases the discriminant. $\square$

**Theorem 4.4.** If $p(x) = (x-x_1)(x-x_2)\cdots(x-x_n)$ has the largest discriminant among all real polynomials with real roots $1 \ge x_1 > x_2 > \cdots > x_n \ge -1$, then $p$ satisfies the ordinary differential equation

$$p''(x) = \frac{-n(n-1)}{(1-x^2)}p(x)$$

*Proof.* Let $X = \{(x_2, x_3, \ldots, x_{n-1}) \mid 1 = x_1 > x_2 > \cdots > x_n = -1\}$ with closure $\bar{X}$ and boundary $\bar{X} - X$. Let $D(x_1, x_2, \ldots, x_n) = \prod_{1 \le i < k \le n} (x_i - x_k)$ be the positive square root of the discriminant of $p(x)$. So $D > 0$ on $X$, $D \ge 0$ on $\bar{X}$, and $D \equiv 0$ on the boundary $\bar{X} - X$. Thus the maximum of $D$ on the compact set $\bar{X}$ occurs in $X$ and its minimum occurs on the boundary $\bar{X} - X$.

For $2 \le j \le n-1$, let

$$E_j(x_1, x_2, \ldots, x_n) = (x_1 - x_j)(x_2 - x_j) \cdots (x_{j-1} - x_j)(x_j - x_{j+1}) \cdots (x_j - x_n)$$

be the product of the factors of $D$ that include $x_j$, let

$$F_j = D/E_j = \prod_{1 \le (i \ne j) < (k \ne j) \le n} (x_i - x_k),$$

and let

$$q_j(x) = (x - x_1)(x - x_2) \cdots (x - x_{j-1})(x - x_{j+1}) \cdots (x - x_n) = p(x)/(x - x_j)$$

be the product of the factors of $p$ that do not include $x_j$. Then up to sign,

$$q'_j(x_j) = \frac{\partial E_j}{\partial x_j}.$$

At the maximum of $D$, $\frac{\partial D}{\partial x_j} = 0$, and since $F_j > 0$ on $X$, $\frac{\partial E_j}{\partial x_j} = 0$. Thus at the maximum of $D$, $p(x) = (x-x_j)q_j(x)$ with $q'_j(x_j) = 0$. Now $p'(x) = (x-x_j)q'_j(x)+q_j(x)$, and $p''(x) = (x-x_j)q''_j(x) + 2q'_j(x)$, and therefore $p''(x_j) = 0$.

This shows that $(1-x^2)p''(x)$ and $p(x)$ have all the same roots, and therefore they agree up to a constant factor. Since $p(x)$ has highest degree term $x^n$ and $p''(x)$ has highest degree term $n(n-1)x^{n-2}$, the constant factor must be $-n(n-1)$. $\square$

**Corollary 4.5.** If $n$ is even, $p(x)$ is even. If $n$ is odd, $p(x)$ is odd.

*Proof.* Let $ax^{n-(2k+1)}$ be the highest degree term in $p(x)$ with parity opposite to $n$, with $k \ge 0$. Then $(n-2k-1)(n-2k-2)ax^{n-(2k+3)}$ is the highest degree term in $p''(x)$ with parity opposite to $n$. Since $p(x) = \frac{-1}{n(n-1)}(1-x^2)p''(x)$, comparing coefficients of $x^{n-(2k+1)}$ gives $\frac{(n-2k-1)(n-2k-2)}{n(n-1)}a = a$. Since $(n-2k-1)(n-2k-2) < n(n-1)$, this implies $a=0$, so all terms in $p(x)$ must have degree of the same parity as $n$. $\square$

**Corollary 4.6.** *The cosines of the rotation angles of the largest conjugacy class of $SO(2n)$ are algebraic over $\mathbb{Q}$.*

*Proof.* Assume $p(x) = x^n + a_{n-2}x^{n-2} + a_{n-4}x^{n-4} + \dots$. Using the equation $p(x) = \frac{-1}{n(n-1)}(1-x^2)p''(x)$, we can successively solve for $a_{n-2}, a_{n-4}, \dots$. At each step we get equations with rational coefficients, e.g. $a_{n-2} = \frac{n(n-1)}{(n-2)(n-3)-n(n-1)}$, $a_{n-4} = \frac{(n-2)(n-3)a_{n-2}}{(n-4)(n-5)-n(n-1)}$, etc. Thus the coefficients of $p$ are all rational. $\square$

[The first few instances of $p(x)$ for $n=1,2,3$ are $x$, $x^2-1$, and $x^3-x = x(x^2-1)$, respectively.]
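
The recursion in the proof of Corollary 4.6 can be run exactly. A sketch (our own, using exact rational arithmetic): comparing coefficients of $x^m$ in $(1-x^2)p'' = -n(n-1)p$ gives $a_m = \frac{(m+2)(m+1)a_{m+2}}{m(m-1) - n(n-1)}$, starting from $a_n = 1$.

```python
from fractions import Fraction

def p_coeffs(n):
    """Coefficients {degree: value} of the monic solution of
    (1 - x^2) p''(x) = -n(n-1) p(x), via the recursion of Corollary 4.6."""
    a = {n: Fraction(1)}
    m = n - 2
    while m >= 0:
        a[m] = (m + 2) * (m + 1) * a[m + 2] / Fraction(m * (m - 1) - n * (n - 1))
        m -= 2
    return a

# Reproduces the instances listed above: p = x, x^2 - 1, x^3 - x for n = 1, 2, 3.
for n in (1, 2, 3):
    print(n, p_coeffs(n))
```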

**Corollary 4.7.** For very large $n$, the rotation angles of the largest conjugacy class of $SO(2n)$ are close to evenly distributed around the circle.

*Proof.* On an interval of length $\Delta x$ away from the end points $-1$ and $1$ and short enough for $1 - x^2$ to be approximately constant, $p''(x) = -\lambda^2 p(x)$ with $\lambda = \sqrt{\frac{n(n-1)}{1-x^2}}$, so on this interval $p(x) \approx \sin(\lambda x)$, and therefore $p(x)$ should have approximately $\lambda \Delta x / \pi$ zeros. Since $x = \cos \theta$, $dx = -\sin \theta \, d\theta$, and therefore $\Delta x / \sqrt{1-x^2} \approx \Delta \theta$. It follows that $\lambda \Delta x / \pi \approx \sqrt{n(n-1)} \Delta \theta / \pi \approx \frac{n}{\pi} \Delta \theta$. A more rigorous proof may be found in the Appendix. $\square$

**Remark:** Sam Lichtenstein pointed out that the polynomials $p(x)$ defined here satisfy the same ODE as the Jacobi polynomials with parameters $\alpha = -1$ and $\beta = -1$. A detailed and rigorous treatment of asymptotics for Jacobi polynomials is available in [8], Chapter 8. See for example Theorem 8.21.8.

**Corollary 4.8.** For $SO(2n)$ there is a unique maximum for $V(t)$ in $T/W$.

*Proof.* The set of cosines of the rotation angles for any maximum of $V(t)$ is defined by the roots of $p(x)$. The rotation angles themselves are therefore defined up to order and sign. In particular, any two choices of signs differ by some number $0 \le k \le n-2$ of sign changes of the rotation angles that are not 0 or $\pi$. The Weyl group of $SO(2n)$ includes all permutations and even numbers of sign changes. So if $k$ is even we know that the two choices of an element of $T$ are conjugate and therefore identical in $T/W$. But if $k$ is odd, we can also change the sign of either of the rotation angles that is 0 or $\pi$ without changing the selected element of $T$, and therefore we can still find an element of $W$ that transforms one of the two selections of angles to the other. $\square$

**Corollary 4.9.** For $SO(8)$ the cosines of the rotation angles of the largest conjugacy class are $\pm 1, \pm\sqrt{1/5}$.

*Proof.* For $n=4$, $p(x) = x^4 - (6/5)x^2 + (1/5) = (x^2-1)(x^2-(1/5))$. $\square$

**Corollary 4.10.** For $SO(10)$ the cosines of the rotation angles of the largest conjugacy class are $0, \pm 1, \pm\sqrt{3/7}$.

*Proof.* For $n=5$, $p(x) = x^5 - (10/7)x^3 + (3/7)x = x(x^2-1)(x^2-(3/7))$. $\square$

**Corollary 4.11.** For $SO(12)$ the cosines of the rotation angles of the largest conjugacy class are $\pm 1, \pm\sqrt{(1/3)(1\pm\sqrt{4/7})}$.

*Proof.* For $n=6$,

$$p(x) = x^6 - (5/3)x^4 + (5/7)x^2 - (1/21) = (x^2-1)(x^4 - (2/3)x^2 + (1/21)). \quad \square$$
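
These closed forms can be cross-checked by maximizing the discriminant directly. The sketch below is our own numerical check, not part of the paper: it fixes $x_1 = 1$ and $x_n = -1$ as in Lemma 4.3 and runs coordinate ascent (with a one-dimensional golden-section search per root) over the interior roots; for $n = 4$ it recovers $\pm\sqrt{1/5}$.

```python
import math

def sqrt_disc(roots):
    """Positive square root of the discriminant: product of x_i - x_k over i < k."""
    d = 1.0
    for i in range(len(roots)):
        for k in range(i + 1, len(roots)):
            d *= roots[i] - roots[k]
    return d

def maximize_inner_roots(n, sweeps=100):
    """Coordinate ascent over x_2 > ... > x_{n-1}, with x_1 = 1, x_n = -1 fixed."""
    inner = [1 - 2 * (j + 1) / (n - 1) for j in range(n - 2)]  # evenly spaced start
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(sweeps):
        for j in range(len(inner)):
            a = inner[j + 1] if j + 1 < len(inner) else -1.0
            b = inner[j - 1] if j > 0 else 1.0
            for _ in range(80):  # golden-section search on the j-th coordinate
                c, d = b - phi * (b - a), a + phi * (b - a)
                fc = sqrt_disc([1.0] + inner[:j] + [c] + inner[j + 1:] + [-1.0])
                fd = sqrt_disc([1.0] + inner[:j] + [d] + inner[j + 1:] + [-1.0])
                if fc > fd:
                    b = d
                else:
                    a = c
            inner[j] = (a + b) / 2
    return inner

print(maximize_inner_roots(4))  # ≈ [0.4472, -0.4472], i.e. ±sqrt(1/5)
```

The one-dimensional search is valid here because, between its two neighboring roots, the discriminant is a unimodal function of a single root (its only critical point there is the maximum).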

## 5 The Odd Orthogonal Group

For $SO(2n+1)$, $T/W$ can be parametrized by $2 \times 2$ block diagonal rotation matrices with rotation angles $0 \le \theta_1 \le \cdots \le \theta_n \le \pi$. The positive roots are $\theta_j$, $\theta_k - \theta_j$ and $\theta_k + \theta_j$, $1 \le j < k \le n$. Comparing with $SO(2n)$, the maximal torus $T$ fixes a vector orthogonal to the $n$ rotation planes corresponding to the $2 \times 2$ block diagonal rotation matrices. The roots $\theta_j$ correspond to Lie algebra elements that mix the $j^{th}$ rotation plane with the fixed vector.

For rotation angles $0 \le \theta_1 \le \dots \le \theta_n \le \pi$, we continue to denote by $p(x) \in \mathbb{R}[x]$ the polynomial

$$p(x) = (x - \cos \theta_1)(x - \cos \theta_2) \cdots (x - \cos \theta_n),$$

and by $D$ the positive square root of the discriminant of $p$.

Denote by $f$ the function

$$f(x) := \sqrt{1-x} \cdot p(x).$$

**Proposition 5.1.** The largest conjugacy class in $SO(2n+1)$ corresponds to rotation angles $\theta_i$ for which the function $f(x)$ has the largest “type $B_n$” modified square root discriminant

$$M(\cos \theta_1, \ldots, \cos \theta_n) := \sqrt{(1 - \cos \theta_1)(1 - \cos \theta_2)\cdots(1 - \cos \theta_n)}\, D(\cos \theta_1, \ldots, \cos \theta_n).$$

*Proof.* By Lemma 4.1, up to constants, the positive square roots of the factors in formula (2.1) corresponding to the roots $\theta_k - \theta_j$ and $\theta_k + \theta_j$ comprise exactly $D(\cos \theta_1, \ldots, \cos \theta_n)$. The remaining factors in the square root of formula (2.1) correspond to the roots $\theta_j$. Since $\sin^2(\psi/2) = (1 - \cos \psi)/2$, these agree with $M/D$ (again up to constants). $\square$

**Theorem 5.2.** When $M$ is at its maximum, $f(x)$ satisfies the ordinary differential equation

$$f''(x) = \frac{-n^2}{1-x^2}\, \frac{1-x + (1/(4n^2))(1+x)}{1-x}\, f(x).$$

*Proof.* Straightforward computation gives

$$f''(x) = (1-x)^{-3/2}[(1-x)^2p''(x) - (1-x)p'(x) - (1/4)p(x)] = (1-x)^{-3/2}\, z(x),$$

where $z(x)$ is a polynomial of degree $n$.

As in the proof of Lemma 4.3, we know that $\cos \theta_n = -1$ when $M$ achieves its maximum, so $p(x)$ is divisible by $(x+1)$. Let $p(x) = (x+1)q(x)$.

As in the proof of Theorem 4.4 we can conclude that for any root $\gamma$ of $q$,

$$f(x) = (x - \gamma)g_{\gamma}(x),$$

where $g'_{\gamma}(\gamma) = 0$. And again as in the proof of Theorem 4.4 it follows that $f''(\gamma) = 0$. This implies that $z(x)$ must be divisible by $q$, which has degree $n-1$, and therefore $z(x) = (\lambda x + \mu)q(x)$, or

$$(1+x)z(x) = (\lambda x + \mu)p(x),$$

for suitable constants $\lambda, \mu$. Define

$$w(x) = z(x) + \frac{1}{4}p(x) = (1-x)^2p''(x) - (1-x)p'(x).$$

Now

$$(1+x)w(x) = (1+x)\left(z(x) + \frac{1}{4}p(x)\right) = (\lambda x + \mu)p(x) + \frac{1}{4}(1+x)p(x) \\ = \left(\left(\lambda + \frac{1}{4}\right)x + \left(\mu + \frac{1}{4}\right)\right)p(x).$$

Note that if any $\theta_j = 0$ then $M$ vanishes, so when $M$ is at its maximum, $x=1$ is not a root of $p$. Since $w(x)$ is divisible by $(1-x)$ while $p(x)$ is not, it follows that

$$\left(\lambda + \frac{1}{4}\right)x + \left(\mu + \frac{1}{4}\right)$$

must vanish at $x=1$, and thus we can write $(1+x)w(x) = \beta(1-x)p(x)$. Comparing coefficients of $x^{n+1}$ gives $\beta = -n^2$.

Finally, straightforward substitution of $f(x) = \sqrt{1-x}\, p(x)$ into

$$f''(x) = (1-x)^{-3/2}\left[w(x) - \frac{1}{4}p(x)\right]$$

yields the formula to be proved. $\square$

**Corollary 5.3.** For very large $n$, $f''(x) \approx \frac{-n^2}{1-x^2}f(x)$, and therefore for very large $n$, the rotation angles of the largest conjugacy class of $SO(2n+1)$ are close to evenly distributed around the circle.

*Proof.* See the proof of Corollary 4.7. $\square$

**Corollary 5.4.** For $SO(7)$ the cosines of the rotation angles of the largest conjugacy class are $-1, (1 \pm \sqrt{6})/5$.

*Proof.* Use $w(x) = -9(1-x)q(x)$ where $q(x) = x^2+bx+c$, $p(x) = (1+x)q(x)$, and $w(x) = (1-x)^2p''(x) - (1-x)p'(x)$. The result is $q(x) = x^2 - (2/5)x - (1/5)$ or $p(x) = x^3 + (3/5)x^2 - (3/5)x - (1/5)$. $\square$
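
The $SO(7)$ computation is small enough to verify numerically. A sketch (our own cross-check): with the claimed $p$ and $q = p/(1+x)$, the identity $w(x) = (1-x)^2 p''(x) - (1-x)p'(x) = -9(1-x)q(x)$ should hold identically, and the roots of $p$ should be $-1$ and $(1 \pm \sqrt{6})/5$.

```python
import math

# p(x) = x^3 + (3/5)x^2 - (3/5)x - 1/5 from Corollary 5.4, and q = p/(1+x)
def p(x):   return x**3 + 0.6 * x**2 - 0.6 * x - 0.2
def dp(x):  return 3 * x**2 + 1.2 * x - 0.6
def ddp(x): return 6 * x + 1.2
def q(x):   return x**2 - 0.4 * x - 0.2
def w(x):   return (1 - x)**2 * ddp(x) - (1 - x) * dp(x)

# The identity w(x) = -9 (1 - x) q(x) holds at every sample point ...
for x in [-0.9, -0.3, 0.0, 0.4, 0.8]:
    assert abs(w(x) + 9 * (1 - x) * q(x)) < 1e-12

# ... and the claimed cosines are roots of p.
for root in [-1.0, (1 + math.sqrt(6)) / 5, (1 - math.sqrt(6)) / 5]:
    assert abs(p(root)) < 1e-12
print("Corollary 5.4 checks out numerically")
```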

## 6 The Symplectic Group

Recall that the compact symplectic group $Sp(2n)$ is the intersection $SU(2n) \cap Sp(2n, \mathbb{C}) \subset GL_{2n}(\mathbb{C})$, i.e. the matrices which preserve both the standard hermitian form and the standard symplectic form on $\mathbb{C}^{2n}$; see e.g. [5]. For $Sp(2n)$ take $T/W$ to be diagonal matrices with entries $e^{i\theta_1}, e^{i\theta_2}, \dots, e^{i\theta_n}, e^{-i\theta_1}, e^{-i\theta_2}, \dots, e^{-i\theta_n}$ with $0 \le \theta_1 \le \dots \le \theta_n \le \pi$. The positive roots are $2\theta_j$, $\theta_k - \theta_j$ and $\theta_k + \theta_j$, $1 \le j < k \le n$. [The roots $2\theta_j$ correspond to Lie algebra elements that mix the eigenspaces for $e^{i\theta_j}$ and $e^{-i\theta_j}$.]

For rotation angles $0 \le \theta_1 \le \dots \le \theta_n \le \pi$, we continue to denote by $p(x) \in \mathbb{R}[x]$ the polynomial

$$p(x) = (x - \cos \theta_1)(x - \cos \theta_2) \cdots (x - \cos \theta_n),$$

and by $D$ the positive square root of the discriminant of $p$.

In this section, we denote by $f = f_{\text{type } C_n}$ the function

$$f(x) := \sqrt{1 - x^2} \cdot p(x),$$

and by $M$ the "type $C_n$" modified square root discriminant

$$
\begin{aligned}
M(\cos \theta_1, \ldots, \cos \theta_n) &= \sqrt{(1 - \cos^2 \theta_1)(1 - \cos^2 \theta_2) \cdots (1 - \cos^2 \theta_n)}\, D(\cos \theta_1, \ldots, \cos \theta_n) \\
&= \sin \theta_1 \sin \theta_2 \cdots \sin \theta_n\, D(\cos \theta_1, \ldots, \cos \theta_n).
\end{aligned}
$$

**Proposition 6.1.** *The largest conjugacy class in $Sp(2n)$ corresponds to the rotation angles $0 \le \theta_1 \le \dots \le \theta_n \le \pi$ for which the type $C_n$ modified square root discriminant $M$ achieves its maximum.*

*Proof.* By Lemma 4.1, up to constant factors, the positive square roots of the factors in formula (2.1) for the volume of a conjugacy class corresponding to the roots $\theta_k - \theta_j$ and $\theta_k + \theta_j$ comprise exactly $D(\cos \theta_1, \ldots, \cos \theta_n)$. The remaining factors in the square root of formula (2.1) correspond to the roots $2\theta_j$, which match up with the remaining factors in $M$. $\square$

**Theorem 6.2.** When $M$ is at its maximum, $f(x)$ satisfies the ordinary differential equation

$$f''(x) = \frac{-(n^2+n)[(1-x^2)+1/(n^2+n)]}{(1-x^2)^2} f(x).$$

*Proof.* The proof is entirely analogous to that of Theorem 5.2, using the fact that in this case $\pm 1$ cannot be roots of $p$. $\square$

**Corollary 6.3.** For very large $n$, $f''(x) \approx \frac{-(n^2+n)}{1-x^2} f(x)$, and therefore for very large $n$, the eigenvalues of the largest conjugacy class of $Sp(2n)$ are close to evenly distributed around the circle.

*Proof.* See the proof of Corollary 5.3. $\square$

**Corollary 6.4.** For $Sp(4)$ the real parts of the eigenvalues of the largest conjugacy class are $\pm\sqrt{1/3}$.

*Proof.* Use $w(x) = 6(x^2 - 1)p(x)$ where $w(x) = (1 - x^2)^2 p''(x) - 2x(1 - x^2)p'(x)$ and $p(x) = x^2 + b$. The result is $p(x) = x^2 - \frac{1}{3}$. $\square$

**Corollary 6.5.** For $Sp(6)$ the real parts of the eigenvalues of the largest conjugacy class are $0, \pm\sqrt{3/5}$.

*Proof.* Use $w(x) = 12(x^2 - 1)p(x)$ where $w(x) = (1 - x^2)^2 p''(x) - 2x(1 - x^2)p'(x)$ and $p(x) = x^3 + bx$. The result is $p(x) = x^3 - \frac{3}{5}x$. $\square$

**Corollary 6.6.** For $Sp(8)$ the real parts of the eigenvalues of the largest conjugacy class are $\pm\sqrt{(3/7) \pm (4/7)\sqrt{3/10}}$.

*Proof.* Use $w(x) = 20(x^2 - 1)p(x)$ where $w(x) = (1 - x^2)^2 p''(x) - 2x(1 - x^2)p'(x)$ and $p(x) = x^4 + bx^2 + c$. The result is $p(x) = x^4 - \frac{6}{7}x^2 + \frac{6}{70}$. $\square$
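
The three symplectic computations share one pattern, which makes them easy to cross-check. A sketch (our own verification): for each claimed $p(x)$, the identity $(1-x^2)^2 p''(x) - 2x(1-x^2)p'(x) = (n^2+n)(x^2-1)p(x)$ should hold identically in $x$.

```python
def polyval(c, x):
    """Evaluate a polynomial given coefficients c, highest degree first."""
    v = 0.0
    for a in c:
        v = v * x + a
    return v

def deriv(c):
    """Coefficients of the derivative, highest degree first."""
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

cases = {  # n : coefficients of the claimed p(x) for Sp(2n), highest degree first
    2: [1.0, 0.0, -1 / 3],
    3: [1.0, 0.0, -3 / 5, 0.0],
    4: [1.0, 0.0, -6 / 7, 0.0, 6 / 70],
}
for n, c in cases.items():
    dc, ddc = deriv(c), deriv(deriv(c))
    for x in [-0.8, -0.25, 0.1, 0.55, 0.9]:
        lhs = (1 - x**2) ** 2 * polyval(ddc, x) - 2 * x * (1 - x**2) * polyval(dc, x)
        rhs = (n**2 + n) * (x**2 - 1) * polyval(c, x)
        assert abs(lhs - rhs) < 1e-12
print("Corollaries 6.4-6.6 check out numerically")
```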

## 7 The Exceptional Group $G_2$

We describe points $t \in T$ by giving the values of 3 short positive roots at $\log(t)$. To be specific, let $(\theta_1, \theta_2, \theta_3)$ be the values of 3 of the 6 short roots, each at angle $\frac{2\pi}{3}$ relative to the other two. Note that these 3 roots are not linearly independent, since they satisfy the relation $\theta_1 + \theta_2 + \theta_3 = 0$. The 3 overlapping pairs $(\theta_1, \theta_2), (\theta_2, \theta_3), (\theta_3, \theta_1)$ each determine a long/short orthogonal root pair by sum and difference [example: $(\theta_1, \theta_2)$ gives the sum $\theta_1 + \theta_2 = -\theta_3$, which is short, and the difference $\theta_1 - \theta_2$, which is long], and the resulting 3 pairs exactly cover all 6 positive/negative root pairs.

*Remark.* [11] points out that there are two isomorphic dual representations for the $\mathfrak{g}_2$ root system inside the 2-plane $x+y+z=0$. Solving for the short roots where $V(t)$ is a maximum can be done in either representation, and these are equivalent.

For now, let's set aside the question of how much about an element of $T/W$ is determined by the cosines of the short roots of the log, and proceed with the solution of the following problem:

Let $\alpha = \cos\theta_3$, $\beta = \cos\theta_2$, $\gamma = \cos\theta_1$. Let $A = \alpha + \beta + \gamma$, $B = \alpha\beta + \beta\gamma + \alpha\gamma$, and $C = \alpha\beta\gamma$ be the elementary symmetric functions of $\alpha, \beta, \gamma$, so that

$$f(x) = (x - \alpha)(x - \beta)(x - \gamma) = x^3 - Ax^2 + Bx - C$$

vanishes at $\alpha, \beta, \gamma$. Reordering the $\theta_i$s if necessary, we may assume $\alpha < \beta < \gamma$, so that $D(\alpha, \beta, \gamma) = (\beta - \alpha)(\gamma - \beta)(\gamma - \alpha)$ is the positive square root of the discriminant of $f$. Lemma 4.1 shows that if $t = \exp_T(\theta_1, \theta_2, \theta_3)$ maximizes $V(t)$, then $(\alpha, \beta, \gamma)$ maximizes $D$ subject to the constraint $\theta_1 + \theta_2 + \theta_3 = 0$.

**Proposition 7.1.** With notation as above, let

$$\rho(\alpha, \beta, \gamma) = -\cos^{-1}(\alpha) + \cos^{-1}(\beta) + \cos^{-1}(\gamma).$$

At the maximum of $D(\alpha, \beta, \gamma)$ subject to $\rho(\alpha, \beta, \gamma) = 0$, the equality $A = B$ holds, i.e.

$$\alpha + \beta + \gamma = \alpha\beta + \beta\gamma + \alpha\gamma.$$

Here the $\pm$ ambiguity in $\cos^{-1}$ is resolved by choosing the standard value between 0 and $\pi$ for $\beta$ and $\gamma$ and the negative of the standard value for $\alpha$, and that's why there is a minus sign on the first term of $\rho$.

*Proof.* By Lagrange multipliers, for $(\alpha, \beta, \gamma) = \operatorname{argmax}\{D \mid \rho = 0\}$, the gradients of $D$ and $\rho$ are aligned, i.e. $\nabla D = \lambda \nabla \rho$ for some constant $\lambda$. Because $D$ is a translation-invariant function of $(\alpha, \beta, \gamma)$, we know that $\nabla D$ is orthogonal to $(1, 1, 1)$, and thus $\nabla \rho$ must also be orthogonal to $(1, 1, 1)$. Equivalently,

$$\frac{-1}{\sqrt{1-\alpha^2}} + \frac{1}{\sqrt{1-\beta^2}} + \frac{1}{\sqrt{1-\gamma^2}} = 0,$$

or

$$\frac{-1}{\sin(\theta_1 + \theta_2)} + \frac{1}{\sin\theta_2} + \frac{1}{\sin\theta_1} = 0.$$

Clearing fractions gives

$$-\sin\theta_1 \sin\theta_2 + \sin\theta_1 \sin(\theta_1 + \theta_2) + \sin\theta_2 \sin(\theta_1 + \theta_2) = 0. \quad (7.1)$$

Using the identity $\sin x \sin y = \frac{1}{2}(\cos(x - y) - \cos(x + y))$ we may replace each product of sines above with a difference of cosines. The result is

$$\cos(\theta_1 + \theta_2) - \cos(\theta_1 - \theta_2) + \cos(\theta_2) - \cos(2\theta_1 + \theta_2) + \cos(\theta_1) - \cos(\theta_1 + 2\theta_2) = 0,$$

or

$$A = \alpha + \beta + \gamma = \cos\theta_1 + \cos\theta_2 + \cos(\theta_1 + \theta_2) \\
= \cos(\theta_1 - \theta_2) + \cos(2\theta_1 + \theta_2) + \cos(\theta_1 + 2\theta_2). \tag{7.2}$$

Expanding the RHS of (7.2) and separating into terms involving cosines followed by terms involving sines gives

$$A = \cos\theta_1 \cos\theta_2 + \cos(\theta_1 + \theta_2) \cos\theta_1 + \cos(\theta_1 + \theta_2) \cos\theta_2 \\
- [-\sin\theta_1 \sin\theta_2 + \sin\theta_1 \sin(\theta_1 + \theta_2) + \sin\theta_2 \sin(\theta_1 + \theta_2)].$$

By (7.1) the expression in the brackets vanishes, and the expression involving cosines is just $B$; thus $A = B$. $\square$

**Theorem 7.2.** With notation as above, let

$$g(x) = \left(x - \frac{A}{3}\right)^2 (-3x^2 + 2Ax + A^2 - 4B)(1 - x^2).$$

Then if $D(\alpha, \beta, \gamma)$ is a maximum subject to $\rho(\alpha, \beta, \gamma) = 0$, there is a constant $d$ for which

$$g(\alpha) = g(\beta) = g(\gamma) = d, \quad (7.3)$$

i.e. the roots of $g(x) - d = 0$ include $\alpha, \beta$, and $\gamma$. That is, $g(x) - d$ is divisible by $f(x) = x^3 - Ax^2 + Bx - C$.
|
| 399 |
+
|
| 400 |
+
*Proof.* We will derive (7.3) from the 3 components of the equality $\nabla D = \lambda \nabla \rho$. First observe that
|
| 401 |
+
|
| 402 |
+
$$3\alpha - A = (\alpha - \beta) + (\alpha - \gamma).$$
|
| 403 |
+
|
| 404 |
+
Thus
|
| 405 |
+
|
| 406 |
+
$$\frac{\partial D}{\partial \alpha} = (\gamma - \beta)[(\alpha - \beta) + (\alpha - \gamma)] = (\gamma - \beta)(3\alpha - A).$$
|
| 407 |
+
|
| 408 |
+
The idea behind the next step is to eliminate $\gamma - \beta$ from (the square of) the previous equation, by expressing $(\gamma - \beta)^2$ in terms of $\alpha, A, B$. It is straightforward to find the required identity:
|
| 409 |
+
|
| 410 |
+
$$(\gamma - \beta)^2 = -3\alpha^2 + 2A\alpha + A^2 - 4B.$$
|
| 411 |
+
|
| 412 |
+
Thus
|
| 413 |
+
|
| 414 |
+
$$\left( \frac{\partial D}{\partial \alpha} \right)^2 = (3\alpha - A)^2 (-3\alpha^2 + 2A\alpha + A^2 - 4B).$$
|
| 415 |
+
|
| 416 |
+
On the other hand $\frac{\partial \rho}{\partial \alpha} = \frac{-1}{\sqrt{1-\alpha^2}}$, so $(\frac{\partial \rho}{\partial \alpha})^2 = \frac{1}{1-\alpha^2}$. So the first component of $\nabla D = \lambda \nabla \rho$ implies
|
| 417 |
+
|
| 418 |
+
$$(3\alpha - A)^2(-3\alpha^2 + 2A\alpha + A^2 - 4B)(1 - \alpha^2) = \lambda^2.$$
|
| 419 |
+
|
| 420 |
+
Similarly, the other two components of $\nabla D = \lambda \nabla \rho$ can be written as
|
| 421 |
+
|
| 422 |
+
$$(3\beta - A)^2(-3\beta^2 + 2A\beta + A^2 - 4B)(1 - \beta^2) = \lambda^2$$
|
| 423 |
+
|
| 424 |
+
$$(3\gamma - A)^2(-3\gamma^2 + 2A\gamma + A^2 - 4B)(1 - \gamma^2) = \lambda^2$$
|
| 425 |
+
|
| 426 |
+
This proves (7.3), using $d = \lambda^2/9$. $\square$
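The identity for $(\gamma - \beta)^2$ used in the proof holds for the roots of any monic cubic; here is a quick Python spot-check (the root values below are arbitrary, chosen only for illustration):

```python
# Spot-check of the identity: if alpha, beta, gamma are the roots of
# x^3 - A x^2 + B x - C, then (gamma - beta)^2 = -3 alpha^2 + 2 A alpha + A^2 - 4B.
# The roots below are arbitrary illustrative values.
alpha, beta, gamma = 0.3, -0.7, 0.9

A = alpha + beta + gamma                         # elementary symmetric functions
B = alpha * beta + beta * gamma + gamma * alpha

lhs = (gamma - beta) ** 2
rhs = -3 * alpha**2 + 2 * A * alpha + A**2 - 4 * B
print(lhs, rhs)  # equal up to rounding
```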
**Theorem 7.3.** At a maximum of $V(t)$, the cosines of the three short roots evaluated at $\log(t)$ are the roots of the cubic equation $x^3 + \frac{3}{5}x^2 - \frac{3}{5}x - \frac{7}{25} = 0$.

---PAGE_BREAK---

*Proof.* Using Proposition 7.1, we can replace $B$ by $A$ and write the monic polynomial $g(x)/3$ as

$$
\begin{align*}
\frac{1}{3}g(x) ={}& x^6 - \frac{4A}{3}x^5 + \frac{2A^2 + 12A - 9}{9}x^4 + \frac{4A^3 - 24A^2 + 36A}{27}x^3 \\
& + \frac{-A^4 + 4A^3 - 6A^2 - 36A}{27}x^2 + \frac{-4A^3 + 24A^2}{27}x + \frac{A^4 - 4A^3}{27}.
\end{align*}
$$

By Theorem 7.2, the remainder when dividing $\frac{1}{3}g(x)$ by $f(x)$ has degree 0. Computing the coefficients of $x^2$ and $x$ in this remainder yields the following two equations:

$$A^3 - 6A^2 - 9A + 18AC = 0, \tag{7.4}$$

$$-A^4 + 2A^3 + 15A^2 - (3A^2 + 18A + 27)C = 0. \tag{7.5}$$

Equating the two resulting expressions for $C$ leads to

$$5A^{4} - 12A^{3} - 54A^{2} + 108A + 81 = 0. \tag{7.6}$$

The roots of (7.6) are $-\frac{3}{5}$ and $-3$, each with multiplicity 1, and $3$ with multiplicity 2. Note that $3$ and $-3$ are the maximum and minimum possible values for the sum of three cosines, so these correspond to minima of $V(t)$, leaving $A = -\frac{3}{5}$ as the only possibility for the maximum of $V(t)$. Finally, setting $A = -\frac{3}{5}$ in (7.4) yields $C = \frac{7}{25}$. $\square$
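The conclusion of Theorem 7.3, together with Theorem 7.2's claim that $g$ takes one common value on the roots, can be verified numerically; this Python sketch uses only the constants established above:

```python
import numpy as np

# Numerical check of Theorem 7.3: f(x) = x^3 - A x^2 + B x - C with
# A = B = -3/5 and C = 7/25 should have three real roots in [-1, 1],
# and (Theorem 7.2) g should take one common value d at all three roots.
A = -3/5
B = A        # Proposition 7.1: B = A
C = 7/25

roots = np.real(np.roots([1, -A, B, -C]))   # x^3 + (3/5)x^2 - (3/5)x - 7/25

def g(x):
    return (x - A/3)**2 * (-3*x**2 + 2*A*x + A**2 - 4*B) * (1 - x**2)

d_vals = g(roots)
print(sorted(roots), d_vals)
```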
*Remark.* The cosines of the rotation angles of the largest conjugacy class in $SO(7)$ are the roots of $x^3 + \frac{3}{5}x^2 - \frac{3}{5}x - \frac{1}{5} = 0$. That equation differs only by a constant from the one established here for the cosines of the short roots of the largest conjugacy class of $G_2$. Since $G_2$ is a subgroup of $SO(7)$, perhaps there is an easier proof of Theorem 7.3.

**Corollary 7.4.** For $G_2$ there is a unique maximum for $V(t)$ in $T/W$.

*Proof.* We claim that there is a unique choice of $(\theta_1, \theta_2, \theta_3)$ in $t/W$ subject to the constraints

(1) $\{\alpha = \cos \theta_1, \beta = \cos \theta_2, \gamma = \cos \theta_3\}$ are the three roots of

$$f(x) = x^{3} + \frac{3}{5}x^{2} - \frac{3}{5}x - \frac{7}{25} = 0$$

and

(2) $\theta_1 + \theta_2 + \theta_3 = 0$.

We know there is at least one such choice because there must be a maximum for $V(t)$, and the log of any such maximum must satisfy these constraints. Any two such choices must differ by sign changes. If the number of sign changes is one or two, then at least one element of $\{\theta_1, \theta_2, \theta_3\}$ must be 0, and this would lead to $V(\exp_T(\theta_1, \theta_2, \theta_3)) = 0$. But if the number of sign changes is three, the two choices differ by an element of the Weyl group, because the composition of the reflections corresponding to any two orthogonal roots is $-\mathrm{Id}$. $\square$
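As a numerical companion to the existence part of the claim: taking $\theta_i = \arccos$ of the three roots (values in $[0, \pi]$), some choice of signs makes the angles sum to zero:

```python
import numpy as np
from itertools import product

# Existence check: with theta_i = arccos(root_i) in [0, pi], some sign
# choice gives theta_1 + theta_2 + theta_3 = 0.
roots = np.real(np.roots([1, 3/5, -3/5, -7/25]))
thetas = np.arccos(roots)

best = min(abs(float(np.dot(signs, thetas)))
           for signs in product([1, -1], repeat=3))
print(best)  # essentially zero for one sign pattern
```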
---PAGE_BREAK---

# 8 Appendix

In this Appendix we present a theorem and proof, based on the Sturm Comparison Theorem, equivalent to Corollary 4.7.

**Theorem 8.1.** Let $k(x)$ be continuous with $0 < \beta < k(x) < \alpha$ on the interval $[x_0-a, x_0+a]$. Let $v_{\lambda,k}(x)$ be the unique solution of $v''_{\lambda,k}(x) = -\lambda^2 k(x)^2 v_{\lambda,k}(x)$ with $v_{\lambda,k}(x_0) = 1$ and $v'_{\lambda,k}(x_0) = 0$. For any $0 < \delta < a$, let $N_{\lambda,k,\delta}$ be the number of solutions of $v_{\lambda,k}(x) = 0$ in the interval $[x_0-\delta, x_0+\delta]$. Define $\rho_k(x_0) = \lim_{\delta \to 0} \lim_{\lambda \to \infty} \frac{N_{\lambda,k,\delta}}{2\lambda\delta}$. Then $\rho_k(x_0) = \frac{k(x_0)}{\pi}$.

[To apply this theorem in the context of the proof of Corollary 4.7, use $\lambda = \sqrt{n(n-1)}$, $k(x) = 1/\sqrt{1-x^2}$, and $\Delta x = 2\delta$, for any $x_0 \in (-1, 1]$.]

Our proof of Theorem 8.1 is based on the Sturm Comparison Theorem (see [6], [7]). Here is a simple statement, adequate for our purposes.
**Sturm Comparison Theorem** (SCT; from [7], for a more general statement see [6]): Let $\phi_1$ and $\phi_2$ be non-trivial solutions of the equations

$$y'' + q_1(x)y = 0$$

and

$$y'' + q_2(x)y = 0,$$

respectively, on an interval $I$, where $q_1$ and $q_2$ are continuous functions such that $q_1(x) \le q_2(x)$ on $I$. Then between any two consecutive zeroes $x_1$ and $x_2$ of $\phi_1$ there exists at least one zero of $\phi_2$, unless $q_1(x) \equiv q_2(x)$ on $(x_1, x_2)$.

The proof given in [7] is based on analyzing the integral of $(\phi_1\phi'_2 - \phi_2\phi'_1)'$ over the interval $[x_1, x_2]$. Another way of stating the SCT is that the number of zeros of $\phi_2$ in $I$ is greater than or equal to the number of intervals between consecutive zeros of $\phi_1$ in $I$.

**Proposition 8.2.** *Under the assumptions of Theorem 8.1,*

$$\frac{2\delta\lambda\beta}{\pi} - 1 \le N_{\lambda,k,\delta} \le \frac{2\delta\lambda\alpha}{\pi} + 2.$$

---PAGE_BREAK---

*Proof.* Let $u_{\lambda,\beta}(x) = \sin\lambda\beta(x - x_0)$ be the solution of $u''_{\lambda,\beta} + \lambda^2\beta^2 u_{\lambda,\beta} = 0$ with initial conditions $u_{\lambda,\beta}(x_0) = 0$ and $u'_{\lambda,\beta}(x_0) = \lambda\beta$. Similarly, let $u_{\lambda,\alpha}(x) = \sin\lambda\alpha(x - x_0)$ be the solution of $u''_{\lambda,\alpha} + \lambda^2\alpha^2 u_{\lambda,\alpha} = 0$ with initial conditions $u_{\lambda,\alpha}(x_0) = 0$ and $u'_{\lambda,\alpha}(x_0) = \lambda\alpha$.

First apply the SCT to $\phi_1 = u_{\lambda,\beta}$ and $\phi_2 = v_{\lambda,k}$ on the interval $I = [x_0 - \delta, x_0 + \delta]$. Each interval between consecutive zeros of $u_{\lambda,\beta}$ has length $\frac{\pi}{\lambda\beta}$, so the number of intervals between consecutive zeros of $\phi_1$ in $I$ is $\lfloor \frac{2\delta\lambda\beta}{\pi} \rfloor \ge \frac{2\delta\lambda\beta}{\pi} - 1$. Thus

$$\frac{2\delta\lambda\beta}{\pi} - 1 \le N_{\lambda,k,\delta}.$$

Second, apply the SCT to $\phi_2 = u_{\lambda,\alpha}$ and $\phi_1 = v_{\lambda,k}$ on the interval $I$. The number of intervals between consecutive zeros of $\phi_1$ in $I$ is $N_{\lambda,k,\delta} - 1$. The number of zeros of $\phi_2$ in $I$ is $\lfloor \frac{2\delta\lambda\alpha}{\pi} \rfloor + 1 \le \frac{2\delta\lambda\alpha}{\pi} + 1$. Thus

$$N_{\lambda,k,\delta} \le \frac{2\delta\lambda\alpha}{\pi} + 2.$$

□

*Proof of Theorem 8.1.* By Proposition 8.2,

$$\frac{\beta}{\pi} - \frac{1}{2\lambda\delta} \le \frac{N_{\lambda,k,\delta}}{2\lambda\delta} \le \frac{\alpha}{\pi} + \frac{1}{\lambda\delta}.$$

Taking the limit as $\lambda \to \infty$ gives

$$\frac{\beta}{\pi} \le \lim_{\lambda \to \infty} \frac{N_{\lambda,k,\delta}}{2\lambda\delta} \le \frac{\alpha}{\pi}.$$

Since $k(x)$ is continuous, given $\epsilon > 0$ we can choose $\delta > 0$ so that

$$k(x_0) - \epsilon < k(x) < k(x_0) + \epsilon$$

on $[x_0 - \delta, x_0 + \delta]$. Therefore, for any $\epsilon > 0$ and sufficiently small $\delta > 0$, we have

$$\frac{k(x_0) - \epsilon}{\pi} \le \lim_{\lambda \to \infty} \frac{N_{\lambda,k,\delta}}{2\lambda\delta} \le \frac{k(x_0) + \epsilon}{\pi}.$$

Now taking the limit as $\epsilon \to 0$ gives the desired result. □
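Theorem 8.1 can be illustrated numerically with the $k(x) = 1/\sqrt{1-x^2}$ of the bracketed remark above. The parameter values below ($\lambda = 200$, $\delta = 0.05$, $x_0 = 0$) are illustrative choices, and the integrator is a plain fixed-step RK4 sketch:

```python
import math

# Illustration of Theorem 8.1 with k(x) = 1/sqrt(1 - x^2), x0 = 0.
# Integrate v'' = -(lambda k(x))^2 v with v(x0) = 1, v'(x0) = 0 by RK4
# and count sign changes of v on [x0 - delta, x0 + delta].
lam, delta, x0 = 200.0, 0.05, 0.0

def k(x):
    return 1.0 / math.sqrt(1.0 - x * x)

def zeros_one_side(direction, steps=20000):
    """Integrate outward from x0 and count sign changes of v."""
    h = direction * delta / steps
    x, v, w = x0, 1.0, 0.0                      # w = v'
    acc = lambda x, v: -(lam * k(x)) ** 2 * v
    count, prev = 0, v
    for _ in range(steps):
        k1v, k1w = w, acc(x, v)
        k2v, k2w = w + h / 2 * k1w, acc(x + h / 2, v + h / 2 * k1v)
        k3v, k3w = w + h / 2 * k2w, acc(x + h / 2, v + h / 2 * k2v)
        k4v, k4w = w + h * k3w, acc(x + h, v + h * k3v)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        w += h / 6 * (k1w + 2 * k2w + 2 * k3w + k4w)
        x += h
        if v * prev < 0:
            count += 1
        prev = v
    return count

N = zeros_one_side(+1) + zeros_one_side(-1)
density = N / (2 * lam * delta)
print(N, density, k(x0) / math.pi)  # density approaches k(x0)/pi = 0.318...
```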
# 9 Acknowledgments

Thanks to reviewers Sam Lichtenstein, Keith Conrad, Joe Wolf, and Paul Howard.

---PAGE_BREAK---

References

[1] E. Basor, "Toeplitz determinants, Painlevé equations, and special functions. Part I: an operator approach", https://www.cirm-math.fr/RepOrga/2105/Slides/Basor_Slides.pdf.

[2] P. Diaconis, "Patterns in Eigenvalues: The 70th Josiah Willard Gibbs Lecture", Bulletin of the American Mathematical Society, vol. 40, no. 2, pp. 155–178, 2003.

[3] C. Draper Fontanals, "Note on G2: The Lie Algebra and the Lie Group", https://arxiv.org/pdf/1704.07819.pdf.

[4] T. Ehrhardt, "A generalization of Pincus' formula and Toeplitz operator determinants", Archiv der Mathematik, vol. 80, pp. 302–309, 2003.

[5] F. Murnaghan, "Complex Symplectic Lie Algebras", http://www.math.toronto.edu/murnaghan/courses/mat445/sp.pdf.

[6] https://en.wikipedia.org/wiki/Sturm-Picone_comparison_theorem

[7] http://www.math.iitb.ac.in/~siva/ma41707/ode8.pdf

[8] G. Szegő, "Orthogonal Polynomials", Fourth Edition, American Mathematical Society, 1975.

[9] K. Tapp, "Matrix Groups for Undergraduates", AMS Student Mathematical Library, vol. 29, 2005. http://www.ams.org/publications/authors/books/postpub/stml-29-MatrixGroupsChapter10.pdf

[10] P. Woit, "Topics in Representation Theory: The Weyl Integral and Character Formulas", http://www.math.columbia.edu/~woit/notes12.pdf.

[11] https://unapologetic.wordpress.com/2010/03/08/construction-of-the-g2-root-system/

samples/texts_merged/4742797.md ADDED

samples/texts_merged/5573174.md ADDED

samples/texts_merged/5577417.md ADDED
---PAGE_BREAK---
Electro-Thermal Comparison and Performance Optimization of Thin-Body SOI and GOI MOSFETs

Eric Pop, Chi On Chui, Sanjiv Sinha†, Robert Dutton and Kenneth Goodson†

Department of Electrical and †Mechanical Engineering, Stanford University

Contact: epop@alum.mit.edu, Bldg 500 Room 501S, Stanford CA 94305, Tel 650.723.8482, Fax 650.723.7657

## ABSTRACT

This paper examines self-heating trends in ultra-scaled fully depleted SOI and GOI devices. We introduce a self-consistent model for calculating device temperature, saturation current and intrinsic gate delay. We show that the raised device source/drain can be designed to simultaneously lower device temperature and parasitic capacitance, such that the intrinsic gate delay (CV/I) is optimal. We find that a raised source/drain height approximately 3 times the channel thickness would be desirable from both an electrical and a thermal point of view. Optimized GOI devices could provide at least a 30 percent performance advantage over similar SOI devices, despite the lower thermal conductivity of the germanium layer.

Fig. 1. Ultra-thin body MOSFET and the thermal resistances (a) and parasitic capacitances (b) used in our model. The dark gray represents the metallized gate and contacts, and the light gray is the surrounding oxide insulator. The image is not drawn to scale.

## INTRODUCTION

Ultra-thin body, fully depleted silicon-on-insulator (SOI) devices offer great promise for scaling near the end of the roadmap [1] due to better control of short-channel effects and lower parasitic capacitance [2][3]. Very recently, germanium-on-insulator (GOI) structures and devices have been reported [4][5] that could be even more attractive, because germanium offers a mobility enhancement of up to 2× compared to silicon, for both electrons and holes. However, the thermal conductivity of bulk germanium is only 40 percent as large as that of silicon, which, combined with the poor thermal conductivity of the buried oxide, may lead to worse thermal problems for GOI than those already well documented for SOI [6][7]. In this work we analyze self-heating trends in GOI and SOI devices and show that despite the lower thermal conductivity of Ge, the temperature rise in GOI may be comparable to that in similar SOI devices, owing mainly to reduced power dissipation. We also show that ultra-thin body GOI and SOI devices can be designed to provide optimal performance, taking self-heating into account self-consistently.

## SOI AND GOI MODEL ASSUMPTIONS

Thin-body SOI and GOI devices are analyzed with the lumped thermal resistance model shown in Fig. 1(a) and described in [7]. This model correctly reproduces the experimentally observed steady-state temperature rise in 100 nm channel length SOI devices [8]. In this work, the model is applied to end-of-roadmap SOI and GOI devices. The gate length ($L_g$), saturation current ($I_d$), nominal voltage ($V_{dd}$) and gate oxide thickness ($t_{ox}$) used in this study follow the most recent ITRS guidelines [1]. Other assumptions made regarding the device geometry are as follows. The SOI body thickness needed to ensure good electrostatics scales as $t_{si} = L_g/4$ [9]. The GOI body thickness should then scale by a factor of the material permittivity ratio, as $t_{ge} = \epsilon_{si}t_{si}/\epsilon_{ge} = 3t_{si}/4$. The buried oxide thickness scales as $t_{BOX} = 2L_g$ [1]. The thin body is assumed to be essentially undoped to prevent dopant fluctuation effects on the threshold voltage. The threshold voltage is then mainly determined by the choice of gate metal workfunction, which in this work is taken to be a metallic alloy with a thermal conductivity of 40 W/m/K, typical of silicides.

Figure 2 plots the thermal conductivity of undoped ultra-thin silicon and germanium films based on a Matthiessen's rule estimate [7] for the phonon mean free path. The silicon film thermal conductivity is consistent with recent measurements on 20 nm thin films ($k_{si} = 22$ W/m/K) [10]; however, no thermal conductivity data on thin germanium films is yet available. The ratio of the thermal conductivities, $k_{ge}/k_{si}$, is closer to unity (higher) in ultra-thin films than in bulk, where germanium (60 W/m/K) is only 40 percent as thermally conductive as silicon (148 W/m/K). It should be noted that ultra-thin film thermal conductivity is largely independent of temperature, because heat transport is limited by phonon boundary scattering with the film thickness [10].
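The thinning trend described above can be sketched with a simple Matthiessen's-rule estimate; the bulk phonon mean free paths below are rough assumed values for illustration, not the calibrated numbers behind Fig. 2:

```python
# Sketch of the thin-film trend via a Matthiessen's-rule reduction of the
# phonon mean free path: 1/L_eff = 1/L_bulk + 1/t, k_film = k_bulk * L_eff/L_bulk.
# The bulk mean free paths are assumed values, not fitted to the paper's data.

def k_film(k_bulk, mfp_bulk_nm, t_nm):
    mfp_eff = 1.0 / (1.0 / mfp_bulk_nm + 1.0 / t_nm)
    return k_bulk * mfp_eff / mfp_bulk_nm

k_si_bulk, k_ge_bulk = 148.0, 60.0   # W/m/K at room temperature (bulk)
mfp_si, mfp_ge = 100.0, 60.0         # nm, assumed bulk phonon mean free paths

t = 20.0                             # nm, film thickness
ratio_bulk = k_ge_bulk / k_si_bulk
ratio_thin = k_film(k_ge_bulk, mfp_ge, t) / k_film(k_si_bulk, mfp_si, t)
print(ratio_bulk, ratio_thin)        # the thin-film ratio is closer to unity
```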
---PAGE_BREAK---

Fig. 2. Estimated thermal conductivity of thin Si and Ge layers. As the film is thinned, the thermal conductivity decreases due to phonon boundary scattering, but it decreases less (vs. bulk) for Ge films due to the shorter phonon mean free path of this material. In bulk form, the thermal conductivity ratio is $k_{ge}/k_{si} = 60/148 \approx 0.40$, but this fraction is closer to unity for ultra-thin films.

Electron mobility in thin germanium layers is about 2× higher than in thin silicon layers near room temperature [5][11]. Recent devices built by Yu et al. [5] indicate this mobility enhancement means GOI devices can carry the same on-current ($I_d$) at 40 percent lower voltage ($V_{dd}$) than comparable SOI transistors. This is the assumption we use in the current work when comparing otherwise similar SOI and GOI transistors (except in Fig. 8, where this assumption is relaxed). However, since the FETs in Ref. [5] are large ($L_g = 10$ µm), this may be a conservative estimate for very small devices, where velocity saturation is less important and the 2× mobility advantage of germanium could play a stronger role [12][13]. With our assumption, a GOI device dissipates 40 percent less power ($P = I_d \times V_{dd}$) than an equivalent SOI device, while generating the same ITRS-specified [1] drive current.

## TEMPERATURE DEPENDENCE OF ON-CURRENT

To estimate the temperature dependence of the saturation current (per unit width) for devices near the limit of scaling, we use the following simple model [12]:

$$I_d = v_T \frac{\lambda}{2l + \lambda} C_{ox} (V_{gs} - V_t) \quad (1)$$

where $v_T$ is the unidirectional thermal velocity, $\lambda$ the electron mean free path (both at the source), $l$ is the distance of the first $k_B T/q$ potential drop in the channel, $C_{ox}$ the gate oxide capacitance per unit area and $V_t$ is the threshold voltage. The various temperature dependencies are [12][14]:

$$v_T = v_{T_o}(T/T_o)^{1/2} \quad (2)$$

$$\lambda = \lambda_o(T/T_o)^{1/2+\alpha} \quad (3)$$

$$l = l_o(T/T_o) \quad (4)$$

$$V_t = V_{to} + \eta(T - T_o) \quad (5)$$

$$\mu = \mu_o(T/T_o)^{\alpha} \quad (6)$$

where the subscript $o$ denotes the value at room temperature. Electron mobility in ultra-thin silicon layers has recently been reported to vary as $T^{-1.4}$ ($\alpha = -1.4$) near room temperature, and to be largely independent of the layer thickness [15][16]. The temperature coefficient of mobility enters the mean free path from $\lambda = 2\mu(k_B T/q)/v_T$ [14]. No data is yet available on the temperature dependence of electron mobility in ultra-thin germanium layers. However, it is well known that electron mobility in bulk germanium is less temperature sensitive ($T^{-1.7}$) than in bulk silicon ($T^{-2.4}$), due to the lower optical phonon energy in germanium. By extension, in this work we assume the thin-layer germanium mobility to have a $T^{-1}$ dependence.

Finally, the threshold voltage of ultra-thin body fully depleted SOI devices varies linearly with temperature, with a coefficient $\eta$ which can be approximated as [17][18]:

$$\eta \approx \frac{\partial \phi_F}{\partial T} = \frac{k_B}{q} \left[ \ln \left( \frac{N_a}{\sqrt{N_c N_v}} \right) + \frac{1}{2k_B} \frac{\partial E_g}{\partial T} - \frac{3}{2} \right] \quad (7)$$

where all quantities have their usual meanings (see, e.g., [17]). Recent experimental work [18] has found $\eta \approx -0.7$ mV/K for fully depleted thin-body SOI devices. Although such data is not yet available for similar GOI devices, a quick estimate (accounting for the smaller germanium band gap and the different conduction and valence band effective densities of states) yields a value of $\eta$ in the same range.
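Plugging representative silicon constants into Eq. (7) lands close to the measured value; note that $N_a$ below is an assumed illustrative doping, not a parameter quoted in the paper:

```python
import math

# Representative silicon numbers in Eq. (7). Na is an assumed illustrative
# value; Nc, Nv and dEg/dT are standard room-temperature Si constants.
kB = 8.617e-5             # Boltzmann constant, eV/K (kB/q is the same number in V/K)
Na = 1e17                 # cm^-3 (assumption)
Nc, Nv = 2.8e19, 1.04e19  # cm^-3, Si effective densities of states
dEg_dT = -2.7e-4          # eV/K, band-gap temperature coefficient

eta = kB * (math.log(Na / math.sqrt(Nc * Nv)) + dEg_dT / (2 * kB) - 1.5)
print(eta * 1e3)          # mV/K, close to the measured -0.7 mV/K
```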
Taking the above into account, we obtain an expression for the temperature dependence of the saturation current for devices near the limit of scaling:

$$\frac{\Delta I_d}{I_{do}} = \left[ \frac{1}{T_o} \left( \frac{1}{2} + \frac{2\alpha - 1}{2 + \lambda_o/l_o} \right) - \frac{\eta}{V_{gs} - V_{to}} \right] \Delta T, \quad (8)$$

which is a generalization of the expression in [14]. All values with subscript $o$ are taken at room temperature ($T_o = 300$ K), and in the rest of this work we take $I_{do}$ and $V_{to}$ to be the values of saturation current and threshold voltage, respectively, targeted by the ITRS guidelines [1]. The temperature rise due to self-heating, $\Delta T$, is taken to be that at the source end of the channel, since it is this region which affects the injection velocity, mean free path and threshold voltage in Eq. 1 and in the rest of our model.
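Eq. (8) can be evaluated directly; the ratio $\lambda_o/l_o$ and the gate overdrive below are assumed illustrative values, while $\alpha$ and $\eta$ are the coefficients discussed above:

```python
# Direct evaluation of Eq. (8). lambda_o/l_o and the gate overdrive are
# assumed illustrative values, not the paper's exact parameters.
To = 300.0        # K
alpha = -1.4      # thin-film Si mobility exponent
eta = -0.7e-3     # V/K, threshold-voltage temperature coefficient
ratio = 1.0       # lambda_o / l_o (assumption)
overdrive = 0.3   # V_gs - V_to in volts (assumption)

coeff = (1 / To) * (0.5 + (2 * alpha - 1) / (2 + ratio)) - eta / overdrive
dT = 50.0         # K, sample source-side temperature rise
frac = coeff * dT
print(frac)       # fractional change in I_d; small and negative here
```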
## SELF-CONSISTENT CURRENT ESTIMATE

We have implemented a self-consistent iterative solution for the device temperature and current based on the model in Fig. 1 and the discussion above. The total amount of heat ($I_d \times V_{dd}$) is assumed to be generated entirely in the device drain, based on previous Monte Carlo simulation results [7]. This power is input to the thermal resistance model assuming (at first) the current to be the room-temperature value ($I_{do}$) targeted by the ITRS. The model yields a temperature rise at the source end of the channel ($\Delta T$), which is used to adjust the current based on Eq. 8. The new device power is used again to solve for the device temperature, and this loop is repeated until the temperature and current are obtained self-consistently.
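The loop described above can be sketched as a fixed-point calculation; the lumped thermal resistance, nominal current and Eq.-(8) slope below are placeholder values, not the paper's calibrated model:

```python
# Minimal sketch of the self-consistent loop: temperature sets the current
# (Eq. 8), the current sets the dissipated power, the power sets the
# temperature. All numbers are placeholders, not the calibrated model.
Rth = 1e5      # K/W, lumped thermal resistance (assumption)
duty = 0.2     # duty factor
Vdd = 0.8      # V (assumption)
Ido = 1e-3     # A, cold (ITRS-target) current (assumption)
c = -2e-4      # 1/K, Eq.-(8) fractional current slope (assumption)

# Non-self-consistent estimate: power computed once with the cold current.
dT_open = Rth * duty * Ido * Vdd

# Fixed-point iteration to self-consistency.
Id, dT = Ido, 0.0
for _ in range(100):
    dT_new = Rth * duty * Id * Vdd
    if abs(dT_new - dT) < 1e-9:
        break
    dT = dT_new
    Id = Ido * (1 + c * dT)

print(dT_open, dT)  # the self-consistent rise is lower than the open-loop one
```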
---PAGE_BREAK---

Fig. 3. Self-consistently computed average drain- (a) and source-side (b) temperature rise in SOI and GOI devices operated with a duty factor of 20 percent. Two GOI cases are shown, one with the same current (but 40 percent lower $V_{dd}$) as the SOI, and one with the same power as the SOI. The raised SD thickness scales as $t_{sd} = 3t_{si}$ and the channel extension as $L_{ex} = L_g/2$.

Figure 3 shows the calculated average temperature rise at 20 percent duty factor for SOI and GOI devices along the roadmap. The relationship between the maximum (DC) temperature and the average temperature for a given duty factor $f$ can be written as $T_{avg} = fT_{dc}$, since device thermal time constants (tens of nanoseconds) are much longer than device switching times (tens of picoseconds) [8]. Both same-current (but lower $V_{dd}$, hence lower power) and same-power scenarios are compared for GOI and SOI in Fig. 3. The drain temperature rise of GOI is expected to be higher in either case, due to the lower overall thermal conductivities. However, the source temperature rise is generally comparable, and even slightly lower for the same-current GOI vs. SOI case. This is due to the larger GOI channel thermal resistance, along with the lower dissipated power. Self-consistency is important in these calculations, since without it the temperature may be overestimated by close to 100 percent for the smallest devices, as shown in Fig. 4. Owing to their less temperature-sensitive mobility, GOI devices show less current degradation due to self-heating, as shown in Fig. 5.

Fig. 4. Comparison of the SOI source-side temperature estimate obtained from the self-consistent temperature-current calculation (solid line) and a calculation where the current is not iteratively adjusted for changes in temperature (dash-dotted line). The temperature-current consistency is important, especially for the smallest devices, where the error is near 100 percent.

## DESIGN CONSIDERATIONS

It has been previously suggested [7] that a raised source/drain (SD) and a shorter extension $L_{ex}$ are essential not only to reduce electrical series resistance, but also to reduce the thermal resistance of a device, and therefore lower its operating temperature. However, a raised SD and a shorter $L_{ex}$ can increase the gate fringing capacitance. We quantify the performance impact of the modified SD by estimating the intrinsic gate delay, $C_g V_{dd}/I_d$. The gate capacitance components are shown in Fig. 1(b) and modeled as in Ref. [19]. For example, the fringing component $C_{ex}$ can be written as:

$$C_{ex} = \frac{2\beta\epsilon_{sw}}{\pi} \ln \left( 1 + \frac{L_{ex}}{t_{ox}} \right) \quad (9)$$

where $\epsilon_{sw}$ is the dielectric constant of the sidewall material (here assumed to be oxide) and $\beta \approx 0.8$ is a geometrical shape factor [19]. Figure 6 shows the computed intrinsic delay for SOI and GOI devices with the same drive current. An elevated source/drain lowers the device temperature [7] and thus improves $I_d$, but at the same time increases the fringing capacitance. For this reason, it appears that raising the source/drain thickness $t_{sd}$ much beyond $3 \times t_{si}$ does not result in significant additional speed gain. In Fig. 7 we optimize both the SD height and the extension length for an 18 nm gate device. The delay contours again suggest an optimal SD height around $3-4 \times t_{si}$ and an extension length of approximately $L_g/3$ for GOI and closer to $L_g/2$ for SOI devices. Finally, in Fig. 8 we use these "optimized" device geometries and show the impact of $V_{dd}$ scaling on GOI intrinsic gate delay. The figure explores various scenarios of $V_{dd}$(Ge)/$V_{dd}$(Si), since it is not yet known what voltage well-behaved GOI devices may operate at (or, rather, at what fraction of the SOI $V_{dd}$).
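Eq. (9) is straightforward to evaluate; the oxide thickness and extension lengths below are assumed for illustration, not taken from the paper's device parameters:

```python
import math

# Evaluating Eq. (9) for illustrative dimensions (oxide sidewall; tox and
# the extension lengths are assumptions, not the paper's exact values).
eps0 = 8.854e-12        # F/m, vacuum permittivity
eps_sw = 3.9 * eps0     # oxide sidewall permittivity
beta = 0.8              # geometrical shape factor [19]
tox = 1.0e-9            # m, gate oxide thickness (assumption)

def C_ex(L_ex):
    """Fringing capacitance per unit gate width, F/m (Eq. 9)."""
    return (2 * beta * eps_sw / math.pi) * math.log(1 + L_ex / tox)

c_short, c_long = C_ex(6e-9), C_ex(9e-9)
print(c_short, c_long)  # a longer extension gives more fringing capacitance
```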
## SUMMARY

This study compares the electro-thermal behavior of GOI and SOI devices near the limits of scaling. We develop a self-consistent model for calculating device temperature, current and intrinsic gate delay. We show how the device source/drain can be designed to help simultaneously minimize device temperature and parasitic capacitance, such that the intrinsic gate delay is optimal. Finally, we show that optimized GOI devices could provide at least a 30 percent performance advantage over similar SOI devices, even when self-heating is taken into account.

---PAGE_BREAK---

Fig. 5. Self-consistently computed percentage decrease in drain current due to self-heating (vs. the ITRS-targeted current), for the same cases as in Fig. 3.

Fig. 6. Self-consistently computed intrinsic delay for SOI and GOI devices in the same-current scenario. The SD height $t_{sd}$ is varied as a parameter, from $t_{si}$ (no raised SD, top line in each set of curves) to $5t_{si}$. The extension length is assumed constant at each node, $L_{ex} = L_g/2$. The intrinsic delay is not reduced significantly for $t_{sd} > 3t_{si}$.

## ACKNOWLEDGMENTS

This work was supported by the SRC under contract 1043 and by MARCO under the MSD Center.

## REFERENCES

[1] ITRS 2003 [online] http://public.itrs.net

[2] B. Doris et al., Proc. IEDM, p. 631, 2003

[3] T. Ernst, IEEE TED, vol. 50, p. 830, 2003

[4] N. A. Bojarczuk et al., Appl. Phys. Lett., vol. 83, p. 5443, 2003

[5] D. S. Yu et al., IEEE EDL, vol. 25, p. 138, 2004

[6] L. Su et al., IEEE TED, vol. 41, p. 69, 1994

[7] E. Pop et al., Proc. IEDM, p. 883, 2003

[8] S. Polonsky et al., IEEE EDL, vol. 25, p. 208, 2004

[9] H.-S. P. Wong et al., Proc. IEDM, p. 407, 1998

[10] W. Liu et al., J. Heat Transfer, in press, 2004

[11] A. Khakifirooz et al., IEEE EDL, vol. 25, p. 80, 2004

[12] M. Lundstrom, IEEE EDL, vol. 22, p. 293, 2001

[13] S. Takagi, Symp. VLSI, p. 115, 2003

[14] M.-J. Chen et al., Proc. IEDM, p. 39, 2002

[15] D. Esseni et al., Proc. IEDM, p. 671, 2000

[16] F. Gamiz, Semic. Sci. Tech., vol. 19, p. 113, 2004

[17] Y. Taur and T. Ning, Cambridge Univ. Press, 1998

[18] L. Vancaillie, IEEE Int. SOI Conf., p. 78, 2003

[19] N. R. Mohapatra et al., IEEE TED, vol. 50, p. 959, 2003

Fig. 7. Geometry optimization to minimize intrinsic delay for a SOI (top) and GOI device (bottom) with $L_g$ = 18 nm and $t_{si}$ = 4.5 nm, assuming the GOI device provides the same current at 40 percent less $V_{dd}$. The results are expressed as contour plots of the delay (in picoseconds) with the extension length ($L_{ex}$) and SD thickness ($t_{sd}$) as parameters.

Fig. 8. Intrinsic gate delay for SOI and GOI devices with optimized $L_{ex}$ = $L_g/3$ and $t_{sd}$ = $3t_{si}$. The GOI voltage is varied as a parameter from $0.5V_{dd}$ (bottom dashed line) to $0.8V_{dd}$ (top dashed line) in increments of $0.1V_{dd}$, where $V_{dd}$ is the nominal SOI voltage from the ITRS guidelines [1].
samples/texts_merged/5640834.md
ADDED
|
@@ -0,0 +1,79 @@
---PAGE_BREAK---
Colleagues, a new group, "International Professors", has recently been added to our groups. It is therefore a good occasion to create a new category in which we share insights, new methods, interesting class encounters, and new concepts introduced when teaching undergraduate mathematics. This will create a platform for seeing how far apart, or how close, curricula are in a global setting, and it might give education policy makers a hint of what to expect from undergraduate mathematics curricula if they are to be on par with international standards.

I will present my first communication.

It is on enlarging the usual differential operator $D := \frac{d}{dx}$ in the variable $x$ to something else. We know that the usual differentiation makes functions lose their smoothness or regularity by one degree (if they are not infinitely many times continuously differentiable).

The types of questions I pose can therefore be given as extra exercises, or as new insights, to students taking calculus courses on sequences, series, and convergence. They engage students to think not only about single calculus operations but about combinations of them, and to carry out algebraic computations at the same time.

Let us define a new differential operator with infinitely many terms as:
$$ \sum_{j=0}^{\infty} \frac{D^{(j)}}{j!} =: e^D, \quad \text{where for } j = 0 \text{ we take } D^{(0)} \text{ to be the identity operator.} $$
Then, for a real-valued $C^{\infty}$-function defined on some non-degenerate open interval $I$ (or on $\mathbb{R}$, for that matter), we can ask the following:

what is the action of $e^D$ on such functions?

Thus, if $\psi \in C^\infty(I, \mathbb{R})$, what is $\sum_{j=0}^\infty \frac{D^{(j)}\psi(x)}{j!}$?

The immediate question is that of the summability of the indicated series.

Here we consider cases in which it does converge:
**Example 1:** Take $\psi(x) = e^x$ the usual natural exponential function.
We see that $e^D(e^x)$ converges to the sum $e\,\psi(x) = \psi(x + 1)$.
Indeed,
$$
\begin{aligned}
\sum_{j=0}^{\infty} \frac{D^{(j)} \psi(x)}{j!} &= \sum_{j=0}^{\infty} \frac{D^{(j)} (e^x)}{j!} \\
&= \sum_{j=0}^{\infty} \frac{e^x}{j!} = e^x \sum_{j=0}^{\infty} \frac{1}{j!}
\end{aligned}
$$
---PAGE_BREAK---
$$= e^{x+1} = \psi(x + 1)$$
∴ $e^D\psi(x) = \psi(x + 1)$, which is a left translation of $\psi$ by one unit.
One can extend this result further and write a corollary as:
**Corollary:** $(e^D)^k \psi(x) = \psi(x + k)$, the left translate of $\psi$ by $k$ units.
**Example 2.** Let $\phi(x) = x^3 + x^2 + x + 1$. Then
$$
\begin{aligned}
e^D \phi(x) &= \sum_{j=0}^{\infty} \frac{D^{(j)} \phi(x)}{j!} = \sum_{j=0}^{\infty} \frac{D^{(j)} (x^3+x^2+x+1)}{j!} \\
&= (x^3+x^2+x+1) + (3x^2+2x+1) + \frac{6x+2}{2!} + \frac{6}{3!} \\
&= x^3 + 4x^2 + 6x + 4
\end{aligned}
$$
But the expression we have at the end is $\phi(x+1)$.
Therefore once again we have:
$$e^D \phi(x) = \phi(x + 1)$$
**Claim:** For a polynomial function $p(x)$, $e^D p(x) = p(x + 1)$
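The claim is easy to check numerically, since for a polynomial the series is finite (all sufficiently high derivatives vanish). The following sketch is our own illustration, not part of the communication: it represents a polynomial as a coefficient list, applies the operator term by term, and compares the result with $p(x+1)$.

```python
# Numerical check of the claim e^D p(x) = p(x + 1) for polynomials.
# A polynomial is a coefficient list [a0, a1, a2, ...] for a0 + a1*x + a2*x^2 + ...
# (representation and names are ours, chosen for the illustration)

def deriv(coeffs):
    """Coefficients of the derivative polynomial."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

def exp_D(coeffs, x):
    """Sum_{j>=0} p^{(j)}(x)/j! -- a finite sum, since p^{(j)} = 0 for j > deg p."""
    total, fact, j, p = 0.0, 1.0, 0, list(coeffs)
    while p:
        total += evaluate(p, x) / fact
        j += 1
        fact *= j
        p = deriv(p)
    return total

# phi(x) = x^3 + x^2 + x + 1, as in Example 2
phi = [1, 1, 1, 1]
for x in (-2.0, 0.0, 1.5):
    assert abs(exp_D(phi, x) - evaluate(phi, x + 1)) < 1e-9
```

Running the loop for a few sample points confirms $e^D\phi(x) = \phi(x+1)$ for this cubic.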
**Conjecture:** $\forall \psi \in C^\infty(I, \mathbb{R})$, $e^D \psi(x) = \psi(x + 1)$
We can also define a similar operator that results in right translations of $C^\infty$-functions by a given number of units:
$$e^{-D} := \sum_{j=0}^{\infty} \frac{(-1)^j D^{(j)}}{j!}$$
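For $\psi(x) = e^x$ the action of $e^{-D}$ can again be summed explicitly, since $D^{(j)} e^x = e^x$: the series becomes $e^x \sum_j (-1)^j/j! = e^{x-1}$. A quick truncated-series check (our own sketch) confirms the right translation by one unit:

```python
# Truncated-series check (our own sketch) that e^{-D} acts on psi(x) = e^x
# as a right translation: e^{-D} e^x = e^x * sum_j (-1)^j / j! = e^{x-1}.
import math

def exp_minus_D_on_exp(x, terms=30):
    # D^{(j)} e^x = e^x, so the series factors as e^x times sum_j (-1)^j / j!
    return math.exp(x) * sum((-1) ** j / math.factorial(j) for j in range(terms))

x = 0.7
assert abs(exp_minus_D_on_exp(x) - math.exp(x - 1)) < 1e-12
```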
Further communications will be posted on the last operator and combinations of both.
samples/texts_merged/5687555.md
ADDED
@@ -0,0 +1,96 @@
---PAGE_BREAK---
# WINDKESSEL MODEL ANALYSIS IN MATLAB
Ing. Martin HLAVÁČ, Doctoral Degree Programme (3)
Dept. of Biomedical Engineering, FEEC, BUT
E-mail: hlavacm@feec.vutbr.cz

Supervised by: Prof. Jiří Holčík
## ABSTRACT
This paper briefly describes three Windkessel models and demonstrates the application of Matlab® to mathematical modelling and simulation experiments with the models. Windkessel models are usually used to describe the basic properties of a vascular bed and to study the relationships among hemodynamic variables in the great vessels. Analysis of a systemic or pulmonary arterial load, described by parameters such as arterial compliance and peripheral resistance, is important, for example, in quantifying the effects of vasodilator or vasoconstrictor drugs. A mathematical model of the relationship between blood pressure and blood flow in the aorta and pulmonary artery can also be useful, for example, in the design, development and functional analysis of a mechanical heart and/or heart-lung machines. We found that ascending aortic pressure could be predicted better from aortic flow by using the four-element windkessel than by using the three-element or two-element windkessel: the root-mean-square errors were smaller for the four-element windkessel.
## 1 INTRODUCTION
The first description of a Windkessel model was given by the German physiologist Otto Frank in an article published in 1899 [1]. The model has been applied recently in studies of the chick embryo [2] and the rat [3]. It represents the heart and systemic arterial system as a closed hydraulic circuit comprising a water pump connected to a chamber. The circuit is filled with water except for a pocket of air in the chamber. (*Windkessel* is a German word for *air-chamber*.) As water is pumped into the chamber, the water both compresses the air in the pocket and pushes water out of the chamber, back to the pump. The compressibility of the air in the pocket simulates the elasticity and extensibility of the major artery, as blood is pumped into it by the heart ventricle. This effect is commonly referred to as *arterial compliance*. The resistance the water encounters while leaving the Windkessel and flowing back to the pump simulates the resistance to flow encountered by the blood as it flows through the arterial tree, from the major arteries to the minor arteries, arterioles, and capillaries, due to decreasing vessel diameter. This resistance to flow is commonly referred to as *peripheral resistance*.
---PAGE_BREAK---
## 2 METHOD
Fig. 1 shows three basic Windkessel models (WM) that are used for modelling the hemodynamic state. The simplest model (Fig. 1a) can produce only a basic exponential pressure curve, determined solely by the values of diastolic and systolic pressure. The 4-element model (Fig. 1c) generates more accurate curves, but the calculations are more complicated than for the model in Fig. 1a. When deciding which model to use, it is necessary to take several criteria and rules into account, such as the computational complexity of the model, the required shape of the produced curve, etc. We assume that the ratio of blood pressure to blood volume in the chamber is constant, and that the flow of fluid through the pipes connecting the air chamber to the pump follows Poiseuille's law, i.e. is proportional to the fluid pressure.
**Fig. 1:** Three basic Windkessel models
### 2.1 THE TWO-ELEMENT WINDKESSEL MODEL
Figure 1a shows the 2-element WM, which consists only of a parallel connection of a resistor and a capacitor. Resistor R represents the total peripheral resistance and capacitor C stands for the arterial compliance. This simple model of the arterial bed allows only a rough approximation of the real system. The model is described by the following differential equation:
$$i_1(t) = \frac{u(t)}{R} + C \frac{du(t)}{dt} \quad (1).$$
### 2.2 THE THREE-ELEMENT WINDKESSEL MODEL
Another model of the circulatory system is the Broemser model, which was described by the Swiss physiologists Ph. Broemser and Otto F. Ranke in an article published in 1930 [4]. It is also known as the 3-element Windkessel model. In comparison with the 2-element WM, this model adds another resistive element between the pump and the air-chamber to simulate the resistance to blood flow due to the aortic or pulmonary valve. The 3-element model (Fig. 1b) is usually used in studies of the general characteristics of the arterial system. The differential equation defining the properties of the 3-element WM is as follows:
$$i_1(t) = \frac{u_C(t)}{R} + C \frac{du_C(t)}{dt} \quad (2)$$
### 2.3 THE FOUR-ELEMENT WINDKESSEL MODEL
The 4-element model (Fig. 1c) includes an inductance L, which represents the inertia of the blood flow (neglected in the 2- and 3-element WM). This model offers a relatively good approximation of the real system. The model is defined by two differential equations:
---PAGE_BREAK---
$$ \begin{aligned} \frac{du_C(t)}{dt} &= -\frac{1}{RC}u_C(t) + \frac{1}{C}i_L(t) \\ \frac{di_L(t)}{dt} &= -\frac{r}{L}i_L(t) + \frac{r}{L}i_1(t) \end{aligned} \tag{3} $$
## 3 EXPERIMENTS
We have built the three Windkessel models mentioned above in MATLAB® and its supplement SIMULINK. The flow of blood through the aorta was used as the input signal. This curve can be divided into two parts. The first represents the cardiac ejection in systole, and we can approximate this part of the curve by a sine wave. The second part of the input curve is equal to zero; that is, the circuit is disconnected from the current source (closed heart valve). Figure 2 depicts the input flow curve when the blood flow into the aorta and pulmonary artery is
given as:
$$ i(t) = \begin{cases} I_0 \cdot \sin^2\left(\frac{\pi t}{T_S}\right) & t \in (0, T_S] \\ 0 & t \in (T_S, T] \end{cases} $$
**Fig. 2:** Blood flow ($I_0=500\text{ ml/s}$, $T_S=0.3\text{ s}$, $T=0.8\text{ s}$)
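As a rough illustration, the 2-element model of Eq. (1) can be integrated numerically against this inflow. This is a minimal forward-Euler sketch, not the authors' SIMULINK setup; the step size and initial pressure are our own assumptions, while R, C, $I_0$, $T_S$ and $T$ follow the 2 WM row of Tab. 1.

```python
# Sketch: forward-Euler simulation of the 2-element WM of Eq. (1),
# C du/dt = i(t) - u/R, driven by the sin^2 inflow of Fig. 2.
# Parameter values follow Tab. 1 (2 WM row); dt and u(0) are our assumptions.
import math

R, C = 1.0, 1.0               # total peripheral resistance, compliance (Tab. 1)
I0, Ts, T = 500.0, 0.3, 0.8   # peak inflow [ml/s], systole and cycle length [s]

def inflow(t):
    t = t % T                 # periodic heart cycle
    return I0 * math.sin(math.pi * t / Ts) ** 2 if t <= Ts else 0.0

dt = 1e-4                     # integration step [s]
u = 80.0                      # initial pressure guess [mmHg], an assumption
pressures = []
t = 0.0
while t < 5 * T:              # a few cycles, to settle near the periodic state
    u += dt * (inflow(t) - u / R) / C
    pressures.append(u)
    t += dt

last = pressures[-int(T / dt):]
print(f"min/max pressure in last cycle: {min(last):.1f} / {max(last):.1f} mmHg")
```

A smaller step or a stock ODE solver would refine the waveform; the point is only that Eq. (1) plus the inflow of Fig. 2 already yields a pulsatile pressure with an exponential diastolic decay.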
**Tab. 1:** The values of normal human parameters of the Windkessel models
<table><thead><tr><th></th><th>R<br>[mmHg.s.cm<sup>-3</sup>]</th><th>C<br>[cm<sup>3</sup>.mmHg<sup>-1</sup>]</th><th>r<br>[mmHg.s.cm<sup>-3</sup>]</th><th>L<br>[mmHg.s<sup>2</sup>.cm<sup>-3</sup>]</th></tr></thead><tbody><tr><td>2 WM</td><td>1</td><td>1</td><td>-</td><td>-</td></tr><tr><td rowspan="3">3 WM</td><td>1</td><td>1</td><td>0.05</td><td>-</td></tr><tr><td>0.79</td><td>1.75</td><td>0.033</td><td>-</td></tr><tr><td>0.63</td><td>5.16</td><td>0.03</td><td>-</td></tr><tr><td rowspan="3">4 WM</td><td>1</td><td>1</td><td>0.05</td><td>0.005</td></tr><tr><td>0.79</td><td>1.22</td><td>0.056</td><td>0.0051</td></tr><tr><td>0.63</td><td>2.53</td><td>0.045</td><td>0.0054</td></tr></tbody></table>
---PAGE_BREAK---
The parameters of normal blood flow in man and the element values of the WMs used for the calculation (shown in Tab. 1) were taken from Westerhof [5]. The values are $I_0=500$ ml/s, $T_S=0.3$ s, $T=0.8$ s.
**Fig. 3:** Arterial pressure for three WM (a – measured pressure (solid line), b – 4WM (dashed line), c – 3WM (dot line), d – 2WM (dot-and-dash line))
Figure 3 shows the calculated blood pressure waveforms for the different WMs. The two-, three-, and four-element windkessels were fitted in the time domain, with the aortic flow used as input and the model parameters adjusted to minimize the root-mean-square deviation between measured and windkessel-predicted pressure. The residual sum of squares (RSS) between the windkessel-predicted ($P_p$) and measured ($P_m$) pressure was calculated as $RSS = \sum_{i=1}^{N} (P_p(i) - P_m(i))^2$, where $N$ is the number of samples in the heart cycle studied. The root-mean-square error (RMSE) of the pressure deviations was calculated as $RMSE = \sqrt{RSS/(N-1)}$. Curve b), the pressure predicted by the 4-element WM, gives the best approximation of the real system.
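The two fit metrics are straightforward to compute from sampled waveforms. A minimal sketch with invented toy samples (not the data behind Fig. 3 or Tab. 2):

```python
# Minimal sketch of the goodness-of-fit metrics used above: RSS and RMSE
# between a measured and a model-predicted pressure waveform (toy data).
import math

def rss(p_pred, p_meas):
    """Residual sum of squares between predicted and measured samples."""
    return sum((p - m) ** 2 for p, m in zip(p_pred, p_meas))

def rmse(p_pred, p_meas):
    """Root-mean-square error, with the N-1 normalization used in the text."""
    n = len(p_meas)
    return math.sqrt(rss(p_pred, p_meas) / (n - 1))

p_meas = [80.0, 95.0, 120.0, 110.0, 90.0]   # toy samples [mmHg], not Fig. 3 data
p_pred = [82.0, 93.0, 118.0, 112.0, 91.0]
print(rss(p_pred, p_meas), rmse(p_pred, p_meas))
```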
**Tab. 2:** *The values of RSS and RMSE*
<table><thead><tr><td></td><td>RSS [-]</td><td>RMSE [-]</td></tr></thead><tbody><tr><td>2 WM</td><td>3.7956e+005</td><td>20.8037</td></tr><tr><td>3 WM</td><td>2.6436e+005</td><td>17.3619</td></tr><tr><td>4 WM</td><td>9.4692e+004</td><td>10.3910</td></tr></tbody></table>
When we need information only about the values of systolic and diastolic pressure, we can use the 2-element WM. On the other hand, if we need to know the time dependency of the blood pressure, we have to use the 4-element WM. The arterial resistive properties are determined mainly by the small arteries and arterioles, while C is determined mainly by the elastic properties of the large arteries, in particular of the aorta. The Windkessel models thus give insight into the contribution of different arterial properties to the heart load.
---PAGE_BREAK---
## 4 CONCLUSION
The described models demonstrate how pressure curves can be obtained for the 2-, 3-, and 4-element Windkessel models; it is a simple matter to obtain pressure curves for any other differential model. Characteristics of the arterial system can usually be obtained from the differential equations and from the input impedance of the arterial tree. The input impedance is useful because it requires only simultaneous measurements of the pressure and flow waveforms at the ascending aorta to provide information about the interaction between the proximal aorta and the peripheral vascular beds. Although distributed models can more accurately predict the propagating pressure and flow waveforms and the input impedance, they are generally more complex, and the time consumed by individual parameter identification often outweighs any additional information that can be obtained. We have shown that the four-element windkessel model fits ascending aortic pressure from flow well, and that the fit is better than with the other models. We base this conclusion on the RSS and RMSE values (see Tab. 2).
## ACKNOWLEDGEMENTS
The research was partially supported by the research program of the Brno University of Technology No. 262200011 "Research of Electronic Communication Systems" and by the research program of the CTU in Prague No. MSM210000012 "Transdisciplinary Research in Biomedical Engineering".
## REFERENCES
[1] Frank, O.: Die Grundform des arteriellen Pulses, Zeitschrift für Biologie 37 (1899) 483-586.

[2] Yoshigi, M., et al.: Characterization of embryonic aortic impedance with lumped parameter models, Am. J. Physiol. 273 (1997) H19-H27.

[3] Molino, P., et al.: Beat-to-beat estimation of windkessel model parameters in conscious rats, Am. J. Physiol. 274 (1998) H171-H177.

[4] Broemser, Ph., et al.: Ueber die Messung des Schlagvolumens des Herzens auf unblutigem Weg, Zeitschrift für Biologie 90 (1930) 467-507.

[5] Westerhof, N., et al.: An artificial arterial system for pumping hearts, Journal of Applied Physiology 31 (1971) 776-781.

[6] Karamanoglu, M.: A System for Analysis of Arterial Blood Pressure Waveforms in Humans, Computers and Biomedical Research 30 (1997) 244-255.

[7] Lambermont, B., et al.: Comparison between Three- and Four-Element Windkessel Models to Characterize Vascular Properties of Pulmonary Circulation, Arch. Physiol. and Biochem. 105 (1997) 625-632.
samples/texts_merged/5963949.md
ADDED
@@ -0,0 +1,509 @@
---PAGE_BREAK---
# Stochastic Privacy
*Extended Version*
**Adish Singla***
ETH Zurich
adish.singla@inf.ethz.ch

**Eric Horvitz**
Microsoft Research
horvitz@microsoft.com

**Ece Kamar**
Microsoft Research
eckamar@microsoft.com

**Ryen White**
Microsoft Research
ryen.white@microsoft.com
## Abstract
Online services such as web search and e-commerce applications typically rely on the collection of data about users, including details of their activities on the web. Such personal data is used to maximize revenues via targeting of advertisements and longer engagements of users, and to enhance the quality of service via personalization of content. To date, service providers have largely followed the approach of either requiring or requesting consent for collecting user data. Users may be willing to share private information in return for incentives, enhanced services, or assurances about the nature and extent of the logged data. We introduce stochastic privacy, an approach to privacy centering on the simple concept of providing people with a guarantee that the probability that their personal data will be shared does not exceed a given bound. Such a probability, which we refer to as the *privacy risk*, can be given by users as a preference or communicated as a policy by a service provider. Service providers can work to personalize and to optimize revenues in accordance with preferences about privacy risk. We present procedures, proofs, and an overall system for maximizing the quality of services, while respecting bounds on privacy risk. We demonstrate the methodology with a case study and evaluation of the procedures applied to web search personalization. We show how we can achieve near-optimal utility of accessing information with provable guarantees on the probability of sharing data.
## Introduction
Online services such as web search, recommendation engines, social networks, and e-commerce applications typically rely on the collection of data about activities (e.g., click logs, queries, and browsing information) and personal information (e.g., location and demographics) of users. The availability of such data enables providers to personalize services to individuals and also to learn how to enhance the service for all users (e.g., improved search results relevance). User data is also important to providers for optimizing revenues via better targeted advertising, extended user engagement and popularity, and even the selling of
user data to third party companies. Permissions are typically obtained via broad consent agreements that request user permission to share their data through system dialogs or via complex *Terms of Service*. Such notices are typically difficult to understand and are often ignored (Technet 2012). In other cases, a plethora of requests for information, such as attempts to gain access to users' locations, may be shown in system dialogs at run time or installation time. Beyond the normal channels for sharing data, potential breaches of information are possible via attacks by malicious third parties and malware, and through surprising situations such as the AOL data release (Arrington 2006; Adar 2007) and de-anonymization of released Netflix logs (Narayanan and Shmatikov 2008). The charges by the Federal Trade Commission against Facebook (FTC 2011) and Google (FTC 2012) highlight increasing concerns by privacy advocates and government institutions about the large-scale recording of personal data.
Ideal approaches to privacy in online services would enable users to benefit from machine learning over data from populations of users, yet consider users' preferences as a top priority. Prior research in this realm has focused on designing privacy-preserving methodologies that can provide for control of a privacy-utility tradeoff (Adar 2007; Krause and Horvitz 2008). Research has also explored the feasibility of incorporating user preferences over what type of data can be logged (Xu et al. 2007; Cooper 2008; Olson, Grudin, and Horvitz 2005; Krause and Horvitz 2008).
We introduce a new approach to privacy that we refer to as *stochastic privacy*. Stochastic privacy centers on providing a guarantee to users about the likelihood that their data will be accessed and used by a service provider. We refer to this measure as the assessed or communicated *privacy risk*, which may be increased in return for increases in the quality of service or other incentives. Very small probabilities of sharing data may be tolerated by individuals (just as lightning strikes are tolerated as a rare event), yet offer service providers sufficient information to optimize over a large population of users. Stochastic privacy depends critically on harnessing inference and decision making to make choices about data collection within the constraints of a guaranteed privacy risk.
We explore procedures that can be employed by service providers when preferences or constraints about the sharing of data are represented as privacy risk.

*Adish Singla performed this research during an internship at Microsoft Research.

Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

---PAGE_BREAK---

**Figure 1:** Overview of stochastic privacy.

The goal is to maximize the utility of service using data extracted from a population of users, while abiding by the agreement reached with users on privacy risk. We show that optimal selection of users under these constraints is NP-hard and thus intractable, given the massive size of online systems. As solutions, we propose two procedures, RANDGREEDY and SPGREEDY, that combine greedy value-of-information analysis with obfuscation to offer mechanisms for tractable optimization while satisfying stochastic privacy guarantees. We present performance bounds for the expected utility achievable by these procedures compared to the optimal solution. Our contributions can be summarized as follows:
* Introduction of stochastic privacy, an approach that represents preferences about the probability that data will be shared, and methods for trading off privacy risk, incentives, and quality of service.
* A tractable end-to-end system for implementing a version of stochastic privacy in online services.
* RANDGREEDY and SPGREEDY procedures for sampling users under the constraints of stochastic privacy, with theoretical guarantees on the acquired utility.
* Evaluation to demonstrate the effectiveness of the proposed procedures on a case study of user selection for personalization in web search.
## Stochastic Privacy Overview
Figure 1 provides an overview of stochastic privacy in the context of a particular design of a system that implements the methodology. The design is composed of three main components: (i) a user preference component, (ii) a system preference component, and (iii) an optimization component for guiding the system's data collection. We now provide details about each of the components and then formally specify the optimization problem for the *selective sampling* module.
### User Preference Component
The user preference component interacts with users (e.g., during sign-up) and establishes an agreement between a user and service provider on a tolerated probability that the user's data will be shared in return for better quality of service or incentives. Representing users' tolerated privacy risk allows for the design of controls that provide options for sharing data. The incentives offered to users can be personalized based on general information available about a user (e.g., general location information inferred from a previously shared IP address) and can vary from guarantees of improved service (Krause and Horvitz 2010) to complementary software and entries in a lottery to win cash prizes (as done by the comScore service (Wikipedia-comScore 2006)).
Formally, let $W$ be the population of users signed up for a service. Each user $w \in W$ is represented with the tuple $\{r_w, c_w, o_w\}$, where $o_w$ is the metadata information (e.g., IP address) available for user $w$ prior to selecting and logging finer-grained data about the user. $r_w$ is the privacy risk assessed by the user, and $c_w$ is the corresponding incentive provided in return for the user assuming the risk. The elements of this tuple can be updated through interactions between the system and the user. For simplicity of analysis, we shall assume that the pool $W$ and user preferences are static.
### System Preference Component
The goal of the service provider is to optimize the quality of service. For example, a provider may wish to personalize web search and to improve the targeting of advertising for maximization of revenue. The provider may record the activities of a subset of users (e.g., sets of queries issued, sites browsed, etc.) and use this data to provide better service globally or to a specific cohort of users. We model the private data of activity logs of user $w$ by variable $l_w \in 2^L$, where $L$ represents the web-scale space of activities (e.g., set of queries issued, sites browsed, etc.). However, $l_w$ is observed by the system only after $w$ is selected and the data from $w$ is logged. We model the system's uncertain belief of $l_w$ by a random variable $Y_w$, with $l_w$ being its realization distributed according to conditional probability distribution $P(Y_w = l_w | o_w)$. In order to make an informed decision about user selection, the distribution $P(Y_w = l_w | o_w)$ is learned by the system using data available from the user and recorded logs of other users. We quantify the utility of application by logging activities $L_S$ from selected users $S$ through function $g: 2^L \to \mathbb{R}$, given by $g(\bigcup_{s \in S} l_s)$.
The expected value of the utility that the system can expect to gain by selecting users $S$ with observed attributes $O_S$ is characterized by distribution $P$ and utility function $g$ as: $\tilde{g}(S) = E_{Y_S}[g(\bigcup_{s \in S} l_s)] = \sum_{L_S \in 2^L \times S} (P(Y_S = L_S | O_S) \cdot g(\bigcup_{s \in S} l_s))$. However, the application itself may be using the logs $L_S$ in a complex manner (such as training a ranker (Bennett et al. 2011)) and evaluating this on complex user metrics (Hassan and White 2013). Hence, the system uses a surrogate utility function $f(S) \approx \tilde{g}(S)$ to capture the utility through a simple metric, for example, coverage
---PAGE_BREAK---
of query-clicks obtained from the sampled users (Singla and White 2010) or reduction in uncertainty of click phenomena (Krause and Horvitz 2008).
We require the set function $f$ to be non-negative, monotone (i.e., whenever $A \subseteq A' \subseteq W$, it holds that $f(A) \le f(A')$) and submodular. Submodularity is an intuitive notion of diminishing returns, stating that, for any sets $A \subseteq A' \subseteq W$, and any given user $a \notin A'$, it holds that $f(A \cup \{a\}) - f(A) \ge f(A' \cup \{a\}) - f(A')$. These conditions are general, and are satisfied by many realistic, as well as complex utility functions (Krause and Guestrin 2007), such as reduction in click entropy (Krause and Horvitz 2008). As a concrete example, consider the setting where attributes $O$ represent geo-coordinates of the users and $D: O \times O \rightarrow \mathbb{R}$ computes the geographical distance between any two users. The goal of the service is to provide location-based personalization of web search. For such an application, click information from local users provides valuable signals for personalizing search (Bennett et al. 2011). The system's goal is to select a set of users $S$, and to leverage data from these users to enhance the service for the larger population of users. For search queries originating from any other user $w$, it uses the click data from the nearest user in $S$, given by $\arg\min_{s \in S} D(o_s, o_w)$. One approach for finding such a set $S$ is solving the *k-medoid* problem which aims to minimize the sum of pairwise distances between selected set and the remaining population (Mirzasoleiman et al. 2013; Kaufman and Rousseeuw 2009). Concretely, this can be captured by the following submodular utility function:

$$f(S) = \frac{1}{|W|} \sum_{w \in W} \left( \min_{x \in X} D(o_x, o_w) - \min_{s \in S \cup X} D(o_s, o_w) \right) \quad (1)$$

Here, $X$ is any one (or a set of) fixed reference location(s), for example, simply representing the origin coordinates, and is used to ensure that the function $f$ is non-negative and monotone. Lemma 1 formally states the properties of this function.
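As a concrete illustration of the k-medoid-style utility in Equation 1, here is a minimal Python sketch; the coordinates, the reference point `x0`, and the toy population are hypothetical placeholders.

```python
import math

def dist(a, b):
    # Euclidean distance between two coordinate pairs
    return math.hypot(a[0] - b[0], a[1] - b[1])

def utility(S, W, X, coords):
    """Surrogate utility f(S) from Equation 1: average reduction in each
    user's distance to the nearest selected user, measured against the
    fixed reference location(s) X (which make f non-negative and monotone)."""
    total = 0.0
    for w in W:
        base = min(dist(coords[x], coords[w]) for x in X)
        with_s = min(dist(coords[u], coords[w]) for u in set(S) | set(X))
        total += base - with_s
    return total / len(W)

# toy instance: two population users, two candidate users, one reference point
coords = {"x0": (0.0, 0.0), "a": (1.0, 0.0), "b": (5.0, 0.0),
          "w1": (1.0, 1.0), "w2": (5.0, 1.0)}
W = ["w1", "w2"]
print(utility(["a", "b"], W, ["x0"], coords))
```

Selecting no one yields zero utility, and adding users can only reduce the average distance, matching the monotonicity claimed in Lemma 1.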

## Optimization Component

To make informed decisions about data access, the system computes the expected value of information (VOI) of logging the activities of a particular user, i.e., the marginal utility that the application can expect by logging the activity of this user (Krause and Horvitz 2008). In the absence of sufficient information about user attributes, the VOI may be small, and hence needs to be learned from the data. The system can randomly sample a small set of users from the population that can be used to learn and improve the models of VOI computation (explorative sampling in Figure 1). For example, for optimizing the service for a user cohort speaking a specific language, the system may choose to collect logs from a subset of users to learn how languages spoken by users map to geography. If preferences about privacy risk are overlooked, VOI can be used to select users to log with a goal of maximizing the utility for the service provider (selective sampling in Figure 1). Given that the utility function of the system is submodular, a greedy selection rule makes near-optimal decisions about data access (Krause and Guestrin 2007). However, this simple approach could violate guarantees on privacy risk. To act in accordance with
the guarantee, we design selective sampling procedures that couple obfuscation with VOI analysis to select the set of users to provide data.
The system needs to ensure that both the explorative and selective sampling approaches respect the privacy guarantees, i.e., the likelihood of sampling any user $w$ throughout the execution of the system must not exceed the privacy risk factor $r_w$. The system tracks the sampling risk (likelihood of sampling) that user $w$ faces during the phases of explorative sampling, denoted $r_w^{ES}$, and selective sampling, denoted $r_w^{SS}$. The privacy guarantee for a user is preserved as long as: $r_w - (1 - (1 - r_w^{ES}) \cdot (1 - r_w^{SS})) \ge 0$.
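This bookkeeping is straightforward to check in code; a sketch with hypothetical per-phase risk values:

```python
def privacy_preserved(r_w, r_es, r_ss):
    """True iff the combined likelihood of a user being sampled in the
    explorative (r_es) or selective (r_ss) phase stays within the
    promised risk r_w, i.e. r_w - (1 - (1 - r_es)(1 - r_ss)) >= 0."""
    combined = 1 - (1 - r_es) * (1 - r_ss)
    return r_w - combined >= 0

# e.g., a promised risk of 1/1000 split evenly across the two phases
print(privacy_preserved(0.001, 0.0004, 0.0004))  # True: combined risk is ~0.0008
```

Because the two phases compose multiplicatively rather than additively, the combined risk is slightly below $r_w^{ES} + r_w^{SS}$.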

## Optimization Problem for Selective Sampling

We now focus primarily on the selective sampling module and formally introduce the optimization problem. The goal is to design a sampling procedure $M$ that abides by guarantees of stochastic privacy, yet optimizes the utility of the application in decisions about accessing user data. Given a budget constraint $B$, the goal is to select users $S^M$:

$$
\begin{align}
S^M &= \arg\max_{S \subseteq W} f(S) \tag{2} \\
\text{subject to} \quad & \sum_{s \in S} c_s \le B \text{ and } r_w - r_w^M \ge 0 \quad \forall w \in W.
\end{align}
$$

Here, $r_w^M$ is the likelihood of selecting $w \in W$ by procedure $M$, and hence $r_w - r_w^M \ge 0$ captures the constraint of the stochastic privacy guarantee for $w$. Note that we interchangeably write the utility acquired by a procedure $M$ as $f(M)$ to denote $f(S^M)$, where $S^M$ is the set of users selected by running $M$. We shall now consider a simpler setting with a constant privacy risk rate $r$ for all users and unit cost per user (thus reducing the budget constraint to a simpler cardinality constraint, given by $|S| \le B$). These assumptions lead to defining $B \le |W| \cdot r$, as that is the maximum possible set size that can be sampled by any procedure for Problem 2.

## Selective Sampling with Stochastic Privacy

We now present desiderata of the selection procedures, discuss the hardness of the problem, and review several different tractable approaches, as summarized in Table 1.

### Desirable Properties of Sampling Procedures

The problem defined by Equation 2 requires solving an NP-hard discrete optimization problem, even when the stochastic privacy constraint is removed. The algorithm for finding the optimal solution of this problem without the privacy constraint, referred to as OPT, is intractable (Feige 1998). We address this intractability by exploiting the submodular structure of the utility function $f$ and offer procedures providing provably near-optimal solutions in polynomial time. We aim at designing procedures that satisfy the following desirable properties: (i) provide competitive utility w.r.t. OPT with provable guarantees, (ii) preserve stochastic privacy guarantees, and (iii) run in polynomial time.

### Random Sampling: RANDOM

RANDOM samples the users at random, without any consideration of cost and utility. The likelihood of any user $w$ to be selected by the algorithm is $r_w^{RANDOM} = B/|W|$, and hence privacy risk guarantees are trivially satisfied since $B \le |W| \cdot r$ as defined in Problem 2. In general, RANDOM can perform arbitrarily poorly in terms of acquired utility, specifically for applications targeting particular user cohorts.

<table><thead><tr><th>Procedure</th><th>Competitive utility</th><th>Privacy guarantees</th><th>Polynomial runtime</th></tr></thead><tbody><tr><td>OPT</td><td>✓</td><td>✗</td><td>✗ O(|W|<sup>B</sup>)</td></tr><tr><td>GREEDY</td><td>✓</td><td>✗</td><td>✓ O(B · |W|)</td></tr><tr><td>RANDOM</td><td>✗</td><td>✓</td><td>✓ O(B)</td></tr><tr><td>RANDGREEDY</td><td>✓</td><td>✓</td><td>✓ O(B · |W| · r)</td></tr><tr><td>SPGREEDY</td><td>✓</td><td>✓</td><td>✓ O(B · |W| · log(1/r))</td></tr></tbody></table>

**Table 1:** Properties of different procedures. RANDGREEDY and SPGREEDY satisfy all of the desired properties.

### Greedy Selection: GREEDY

Next, we explore a greedy sampling strategy that maximizes the expected marginal utility at each iteration to guide decisions about selecting the next user to log. Formally, GREEDY starts with the empty set $S = \emptyset$. At iteration $i$, it greedily selects a user $s_i^* = \arg\max_{w \in W \setminus S} (f(S \cup \{w\}) - f(S))$ and adds the user to the current selection of users: $S = S \cup \{s_i^*\}$. The procedure halts when $|S| = B$.
|
| 136 |
+
|
| 137 |
+
A fundamental result by Nemhauser, Wolsey, and Fisher (1978) states that the utility obtained by this greedy selection strategy is guaranteed to be at least $(1 - 1/e) \approx 0.63$ times that obtained by OPT. This result is tight under reasonable complexity assumptions ($P \neq NP$) (Feige 1998). However, such a greedy selection clearly violates the stochastic privacy constraint in Problem 2. Consider the user $w^*$ with the highest marginal value: $w^* = \arg\max_{w \in W} f(\{w\})$. The likelihood that this user will be selected by the algorithm is $r_{w^*}^{GREEDY} = 1$, regardless of the promised privacy risk $r_{w^*}$.
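The greedy rule above can be sketched generically for any monotone submodular set function; the coverage-style `clicks`/`cover` demo data below is a hypothetical stand-in for a real utility.

```python
def greedy(W, B, f):
    """GREEDY: repeatedly add the user with the largest marginal gain
    f(S + w) - f(S) until B users are selected."""
    S = set()
    while len(S) < B:
        best = max((w for w in W if w not in S),
                   key=lambda w: f(S | {w}) - f(S))
        S.add(best)
    return S

# demo: coverage utility over query-click topics (toy data)
clicks = {"u1": {1, 2}, "u2": {2, 3}, "u3": {4}}
cover = lambda S: len(set().union(*(clicks[u] for u in S))) if S else 0
print(greedy(list(clicks), 2, cover))
```

For monotone submodular `f`, this simple loop inherits the $(1 - 1/e)$ approximation guarantee of Nemhauser, Wolsey, and Fisher (1978), but, as noted above, it offers no privacy guarantee on its own.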

### Sampling and Greedy Selection: RANDGREEDY

We combine the ideas of RANDOM and GREEDY to design the procedure RANDGREEDY, which provides guarantees on stochastic privacy and competitive utility. RANDGREEDY is an iterative procedure that samples a small batch of users $\psi(s)$ at each iteration, then greedily selects $s^* \in \psi(s)$ and removes the entire set $\psi(s)$ from further consideration. By keeping the batch size $|\psi(s)| \le |W| \cdot r/B$, the procedure ensures that the privacy guarantees are satisfied. As our user pool $W$ is static, to reduce complexity, we consider a simpler version of RANDGREEDY that defers the greedy selection. Formally, this is equivalent to first sampling the users from $W$ at rate $r$ to create a subset $\tilde{W}$ such that $|\tilde{W}| = |W| \cdot r$, and then running the GREEDY algorithm on $\tilde{W}$ to greedily select a set of users of size $B$.
The initial random sampling ensures a guarantee on the privacy risk for users during the execution of the procedure. In fact, for any user $w \in W$, the likelihood of $w$ being sampled and included in subset $\tilde{W}$ is $r_w^{RANDGREEDY} \le r$. We further analyze the utility obtained by this procedure in the next section and show that, under reasonable assumptions, the approach can provide competitive utility compared to OPT.
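The deferred variant admits a short sketch. As a simplifying assumption here, each user is sampled independently at rate $r$, so $|\tilde{W}|$ equals $|W| \cdot r$ only in expectation rather than exactly.

```python
import random

def rand_greedy(W, B, r, f, seed=0):
    """RANDGREEDY (deferred variant): pre-sample users at rate r into a
    pool W_tilde, then run plain greedy selection on W_tilde."""
    rng = random.Random(seed)
    W_tilde = [w for w in W if rng.random() < r]  # each user kept w.p. <= r
    S = set()
    while len(S) < B and len(S) < len(W_tilde):
        best = max((w for w in W_tilde if w not in S),
                   key=lambda w: f(S | {w}) - f(S))
        S.add(best)
    return S
```

The privacy guarantee holds because a user who does not survive the initial sampling step can never be selected, regardless of how valuable the greedy phase would find them.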

### Greedy Selection with Obfuscation: SPGREEDY

SPGREEDY uses an inverse approach of mixing RANDOM and GREEDY: it performs greedy selection, followed by obfuscation, as illustrated in Procedure 1. It assumes an underlying distance metric $D: W \times W \rightarrow \mathbb{R}$ which captures the notion of distance or dissimilarity among users. As in GREEDY, it operates in iterations and selects the user $s^*$ with maximum marginal utility at each iteration. However, to ensure stochastic privacy, it obfuscates $s^*$ with the nearest $1/r$ users according to the distance metric $D$ to create a set $\psi(s^*)$. Then, it samples one user randomly from $\psi(s^*)$ and removes the entire set $\psi(s^*)$ from further consideration.
The guarantees on privacy risk hold by the following arguments: During the execution of the algorithm, any user $w$ becomes a possible candidate for selection if the user is part of $\psi(s^*)$ in some iteration (e.g., iteration $i$). Given that $|\psi(s^*)| \ge 1/r$ and the algorithm randomly samples $\tilde{s}^* \in \psi(s^*)$, the likelihood of $w$ being selected in iteration $i$ is at most $r$. The fact that the set $\psi(s^*)$ is removed from the available pool $W'$ at the end of the iteration ensures that $w$ can become a candidate for selection only once.
**Procedure 1:** SPGREEDY

1. **Input:** users $W$; cardinality constraint $B$; privacy risk $r$; distance metric $D: W \times W \rightarrow \mathbb{R}$;
2. **Initialize:** selected users $S \leftarrow \emptyset$; remaining users $W' \leftarrow W$;
3. **while** $|S| < B$ **do**
4. &nbsp;&nbsp;&nbsp;&nbsp;$s^* \leftarrow \arg\max_{w \in W'} f(S \cup \{w\}) - f(S)$;
5. &nbsp;&nbsp;&nbsp;&nbsp;$\psi(s^*) \leftarrow \{s^*\}$;
6. &nbsp;&nbsp;&nbsp;&nbsp;**while** $|\psi(s^*)| < 1/r$ **do**
7. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$v \leftarrow \arg\min_{w \in W' \setminus \psi(s^*)} D(w, s^*)$;
8. &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\psi(s^*) \leftarrow \psi(s^*) \cup \{v\}$;
9. &nbsp;&nbsp;&nbsp;&nbsp;Randomly select $\tilde{s}^* \in \psi(s^*)$;
10. &nbsp;&nbsp;&nbsp;&nbsp;$S \leftarrow S \cup \{\tilde{s}^*\}$;
11. &nbsp;&nbsp;&nbsp;&nbsp;$W' \leftarrow W' \setminus \psi(s^*)$;
12. **Output:** $S$
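Procedure 1 can be transcribed almost line-for-line. In the sketch below, the utility and distance functions in the demo are hypothetical toy choices, and users are assumed sortable so the random draw is reproducible.

```python
import random

def sp_greedy(W, B, r, f, D, seed=0):
    """SPGREEDY: greedily pick s*, obfuscate it within its 1/r nearest
    remaining users psi(s*), sample one member of psi(s*), and retire
    the whole obfuscation set from the pool."""
    rng = random.Random(seed)
    S, remaining = set(), set(W)
    k = int(1 / r)  # required obfuscation-set size
    while len(S) < B and remaining:
        s_star = max(remaining, key=lambda w: f(S | {w}) - f(S))
        psi = {s_star}
        while len(psi) < k and remaining - psi:
            v = min(remaining - psi, key=lambda w: D(w, s_star))
            psi.add(v)
        S.add(rng.choice(sorted(psi)))  # each member selected w.p. <= r
        remaining -= psi
    return S

# demo: users on a line, modular utility, absolute-difference distance
picked = sp_greedy(list(range(100)), 3, 0.1, lambda S: sum(S),
                   lambda a, b: abs(a - b), seed=0)
print(picked)  # one user from each of the 90s, 80s, and 70s bands
```

In the demo, the greedy pick is always the largest remaining user, so each iteration retires a contiguous band of ten users and samples one of them, which mirrors the privacy argument in the text.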

## Performance Analysis

We now analyze the performance of the proposed procedures in terms of the utility acquired compared to that of OPT as a baseline. We first analyze the problem in a general setting and then under a set of practical assumptions on the structure of the underlying utility function $f$ and the population of users $W$. The proofs of all the results are available in the appendix.

### General Case

In the general setting, we show that one cannot do better than $r \cdot f(\text{OPT})$ in the worst case. Consider a population of users $W$ where only one user $w^*$ has a utility value of 1, and the rest of
the users $W \setminus \{w^*\}$ have utility of 0. OPT achieves a utility of 1 by selecting $S^{\text{OPT}} = \{w^*\}$. Consider any procedure $M$ that has to respect the guarantees on privacy risk. If the privacy rate of $w^*$ is $r$, then $M$ can select $w^*$ with probability at most $r$. Hence, the maximum expected utility that any procedure $M$ for Problem 2 can achieve is $r$.
On a positive note, a trivial algorithm can always achieve a utility of $(1 - 1/e) \cdot r \cdot f(\text{OPT})$ in expectation. This result can be achieved by running GREEDY to select a set $S^{\text{GREEDY}}$, then outputting $S^{\text{GREEDY}}$ as the final solution with probability $r$ and the empty set otherwise. Theorem 1 formally states these results for the general problem setting.
**Theorem 1.** Consider the Problem 2 of optimizing a submodular function $f$ under cardinality constraint $B$ and privacy risk rate $r$. For any distribution of marginal utilities of population $W$, a trivial procedure can achieve an expected utility of at least $(1 - 1/e) \cdot r \cdot f(\text{OPT})$. In contrast, there exists an underlying distribution for which no procedure can have expected utility of more than $r \cdot f(\text{OPT})$.
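The trivial procedure from Theorem 1 is easy to state in code; this is a sketch, with `f` and the coin-flip source as placeholders.

```python
import random

def thinned_greedy(W, B, r, f, rng=None):
    """Theorem 1's trivial procedure: compute the greedy set, then release
    it with probability r and the empty set otherwise. Every user is
    selected with probability at most r, and the expected utility is at
    least (1 - 1/e) * r * f(OPT)."""
    rng = rng or random
    S = set()
    while len(S) < B:
        S.add(max((w for w in W if w not in S),
                  key=lambda w: f(S | {w}) - f(S)))
    return S if rng.random() < r else set()
```

The single coin flip is what makes the guarantee hold: the greedy set itself is deterministic, so each of its members is released with probability exactly $r$.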

### Smoothness and Diversification Assumptions

In practice, we can hope to do much better than the worst-case results described in Theorem 1 by exploiting the underlying structure of users' attributes and the utility function. We start with the assumption that there exists a distance metric $D: W \times W \rightarrow \mathbb{R}$ which captures the notion of distance or dissimilarity among users. For any given $w \in W$, let us define its $\alpha$-neighborhood to be the set of users within a distance $\alpha$ from $w$ (i.e., $\alpha$-close to $w$): $N_{\alpha}(w) = \{v : D(v, w) \le \alpha\}$. We assume that the population of users is large and that the number of users in $N_{\alpha}(w)$ is large. We capture these requirements formally in Theorems 2 and 3.
First, we consider utility functions that change gracefully with changes in inputs, similar to the notion of $\lambda$-Lipschitz set functions used in Mirzasoleiman et al. (2013). We formalize the notion of smoothness in the utility function $f$ w.r.t. metric $D$ as follows:
**Definition 1.** For any given set of users $S$, let us consider a set $\tilde{S}_{\alpha}$ obtained by replacing every $s \in S$ with any $w \in N_{\alpha}(s)$. Then, $|f(S) - f(\tilde{S}_{\alpha})| \le \lambda_f \cdot \alpha |S|$, where parameter $\lambda_f$ captures the notion of smoothness of function $f$.
Secondly, we consider utility functions that favor diversity or dissimilarity of users in the subset selection w.r.t. the distance metric $D$. We formalize this notion of diversification in the utility function as follows:
**Definition 2.** Let us consider any given set of users $S \subseteq W$ and a user $w \in W$. Let $\alpha = \min_{s \in S} D(s, w)$. Then, $f(S \cup w) - f(S) \le \Upsilon_f \cdot \alpha$, where parameter $\Upsilon_f$ captures the notion of diversification of function $f$.
The utility function $f$ introduced in Equation 1 satisfies both of the above assumptions as formally stated below.
**Lemma 1.** Consider the utility function $f$ in Equation 1. $f$ is submodular, and satisfies the properties of smoothness and diversification, i.e., it has bounded $\lambda_f$ and $\Upsilon_f$.
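Lemma 1's monotonicity and submodularity claims can be sanity-checked numerically on a small random 1-D instance of Equation 1; the positions and user indices below are arbitrary toy data.

```python
import itertools
import random

def f(S, W, X, pos):
    # Equation 1 on 1-D positions: average reduction in distance to the nearest selected user
    return sum(min(abs(pos[x] - pos[w]) for x in X)
               - min(abs(pos[u] - pos[w]) for u in set(S) | set(X))
               for w in W) / len(W)

rng = random.Random(0)
pos = {i: rng.uniform(0, 10) for i in range(8)}
W, X = list(range(1, 8)), [0]

for A in map(set, itertools.combinations(W, 2)):
    for a in set(W) - A:
        # monotonicity: adding a user never decreases f
        assert f(A | {a}, W, X, pos) >= f(A, W, X, pos) - 1e-9
        # submodularity: gains diminish on the larger set A' = A + one extra user
        extra = next(u for u in W if u not in A | {a})
        A_prime = A | {extra}
        gain_small = f(A | {a}, W, X, pos) - f(A, W, X, pos)
        gain_large = f(A_prime | {a}, W, X, pos) - f(A_prime, W, X, pos)
        assert gain_large <= gain_small + 1e-9
print("Lemma 1 checks passed")
```

Such a brute-force check is no substitute for the proof in the appendix, but it is a quick way to catch implementation errors in a concrete instantiation of $f$.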
We note that functions with unbounded $\lambda_f$ and $\Upsilon_f$ (i.e., $\lambda_f \to \infty$ and $\Upsilon_f \to \infty$) reduce to the general problem setting (equivalent to making no assumptions), and hence the results of Theorem 1 apply.

### Performance Bounds

Under the assumption of smoothness (i.e., bounded $\lambda_f$), we can show the following bound on utility of RANDGREEDY:
**Theorem 2.** Consider the Problem 2 for function $f$ with bounded $\lambda_f$. Let $S^{\text{OPT}}$ be the set returned by OPT for Problem 2 without the privacy constraints. For a desired $\epsilon < 1$, let $\alpha_{rg} = \arg\min_{\alpha}\{\alpha: |N_{\alpha}(s)| \ge 1/r \cdot \log(B/\epsilon) \; \forall s \in S^{\text{OPT}}$ and $N_{\alpha}(s) \cap N_{\alpha}(s') = \emptyset \; \forall s, s' \in S^{\text{OPT}}\}$. Then, with probability at least $(1-\epsilon)$,
$$ E[f(\text{RANDGREEDY})] \ge (1 - 1/e) \cdot (f(\text{OPT}) - \alpha_{rg} \cdot \lambda_f \cdot B) $$
Under the assumption of smoothness and diversification (i.e., bounded $\lambda_f$ and $\Upsilon_f$), we can show the following bound on utility of SPGREEDY:
**Theorem 3.** Consider the Problem 2 for function $f$ with bounded $\lambda_f$ and $\Upsilon_f$. Let $S^{\text{GREEDY}}$ be the set returned by GREEDY for Problem 2 without the privacy constraints. Let $\alpha_{spg} = \arg\min_{\alpha}\{\alpha: |N_{\alpha}(s)| \ge 1/r \; \forall s \in S^{\text{GREEDY}}\}$. Then,
$$ E[f(\text{SPGREEDY})] \ge (1 - 1/e) \cdot f(\text{OPT}) - 2(\lambda_f + \Upsilon_f) \cdot \alpha_{spg} \cdot B $$
Intuitively, these results imply that both RANDGREEDY and SPGREEDY achieve competitive utility w.r.t. OPT, and that the performance degrades smoothly as the privacy risk $r$ is decreased or as the bounds on smoothness and diversification of the function $f$ increase.

## Experimental Evaluation

We now report on experiments aimed at providing insights on the performance of the stochastic privacy procedures with a case study of the selective collection of user data in support of the personalization of web search.

### Benchmarks and Metrics

We compare the performance of the RANDGREEDY and SPGREEDY procedures against the baselines of RANDOM and GREEDY. While RANDOM provides a trivial lower benchmark for any procedure, GREEDY is a natural upper bound on the utility, given that OPT itself is intractable. To analyze the robustness of the procedures, we vary the level of privacy risk $r$. We further carried out experiments to understand the loss incurred from the obfuscation phase during the execution of SPGREEDY.

### Experimental Setup

We consider the application of providing location-based personalization for queries issued for the business domain (e.g., real-estate, financial services, etc.). The goal is to select a set of users $S$ who are experts at web search in this domain. We seek to leverage click data from these users to improve the relevance of search results shown to the broader population of users searching for local businesses. The experiments are based on using a surrogate utility function as introduced in Equation 1. As we study the domain of business-related queries, we modify the utility function in Equation 1 by restricting $S$ to users who are experts in the domain, as further described below. The acquired utility can be interpreted as the average reduction in the distance for any user $w$ in the population to the nearest expert $s \in S$.

**Figure 2:** Fig. 2(a) shows increases in the average utility of proposed procedures and GREEDY with increases in the budget *B* on the number of selected users, at a constant privacy risk of *r* = 1/10000. Fig. 2(b) displays smooth decreases in utility as the level of privacy risk *r* for the population is reduced for applying RANDGREEDY and SPGREEDY with a fixed budget *B* = 50. Fig. 2(c) shows small losses at each step incurred by SPGREEDY via obfuscation.

The primary source of data for the study is obtained from interaction logs on a major web search engine. We consider a fraction of users who issued at least one query in the month of October 2013, restricted to queries coming from IP addresses located within ten neighboring states in the western region of the United States. This results in a pool *W* of seven million users. We consider a setting where the system has access to metadata information of geo-coordinates of the users, as well as a probe of the last 20 search-result clicks for each user, which together constitute the observed attributes of a user, denoted as *o*<sub>w</sub>. Each of these clicks is then classified into a topical hierarchy from a popular web directory, the Open Directory Project (ODP) (dmoz.org), using automated techniques (Bennett, Svore, and Dumais 2010). With a similar objective to White, Dumais, and Teevan (2009), the system then uses this classification to identify users who are experts in the business domain. We used the simple rule of classifying a user as an expert if at least one click was issued in the domain of interest. With this, the system marks a set of users $W' \subseteq W$ as experts, and the set *S* in Equation 1 is restricted to $W'$. We note that the specific thresholds or variable choices do not influence the overall results below.

### Results

We now review results from the experiments.
**Varying the budget B:** In the first set of experiments, we vary the budget *B* of the number of users selected, and measure the utility acquired by different procedures. We fix the privacy risk *r* = 1/10000. Figure 2(a) illustrates that both RANDGREEDY and SPGREEDY are competitive w.r.t GREEDY and outperform the naive RANDOM baseline.
**Varying the privacy risk r:** We then vary the level of privacy risk, for a fixed budget *B* = 50, to measure the robustness of RANDGREEDY and SPGREEDY. The results in Figure 2(b) demonstrate that the performance of RANDGREEDY and SPGREEDY degrades smoothly, as per the performance analysis in Theorems 2 and 3.
**Analyzing performance of SPGREEDY:** Last, we perform experiments to understand the execution of SPGREEDY and the loss incurred from the obfuscation step. SPGREEDY removes $1/r$ users from the pool at every iteration. As a result, for a small privacy risk *r*, the relative loss from obfuscation (i.e., the relative % difference in marginal utility between the user chosen by greedy selection and the user picked following obfuscation) can increase over the execution of the procedure. Such an increase is illustrated in Figure 2(c), which displays results computed using a moving average of window size 10. However, the diminishing-returns property ensures that SPGREEDY incurs a low absolute loss in marginal utility from obfuscation at each step.

## Summary and Future Directions

We introduced stochastic privacy, a new approach to managing privacy that centers on service providers abiding by guarantees about not exceeding a specified likelihood of accessing users' data, and maximizing information collection in accordance with these guarantees. We presented procedures and an overall system design for maximizing the quality of services while respecting an assessed or communicated privacy risk. We showed bounds on the performance of the RANDGREEDY and SPGREEDY procedures, as compared to the optimal solution of the NP-hard problem, and evaluated the algorithms on a web personalization application.
Research directions ahead on stochastic privacy include studies of user preferences about the probability of sharing data. We are interested in understanding how people in different settings may trade increases in privacy risk for enhanced service and monetary incentives. We seek an understanding of preferences, policies, and corresponding analyses that consider the sharing of data as a privacy risk rate over time. We are also interested in exploring different overall designs for the operation of a large-scale system, spanning study of different ways that users might be engaged. In one design, a provider might simply publish a universal policy on privacy risk or privacy risk rate. In another approach, users might additionally be notified when they are selected to share data and can decide at that time whether to accept and receive a gratuity or to decline the request for data. Inferences about the preferences of subpopulations about privacy risk and incentives could be folded into the selection procedures, and systems could learn to recognize and counter informational biases that might be associated with data accessed from these subgroups. We are excited about the promise of stochastic privacy to provide understandable approaches to enhancing privacy while enabling rich, personalized online services.

## References

[2007] Adar, E. 2007. User 4xxxxx9: Anonymizing query logs. In *Workshop on Query Log Analysis at WWW'07*.

[2006] Arrington, M. 2006. AOL proudly releases massive amounts of private data. http://techcrunch.com/2006/08/06/aol-proudly-releases-massive-amounts-of-user-search-data/.
[2011] Bennett, P. N.; Radlinski, F.; White, R. W.; and Yilmaz, E. 2011. Inferring and using location metadata to personalize web search. In *Proc. of SIGIR*, 135–144.
[2010] Bennett, P. N.; Svore, K.; and Dumais, S. T. 2010. Classification-enhanced ranking. In *Proc. of WWW*, 111–120.
[2008] Cooper, A. 2008. A survey of query log privacy-enhancing techniques from a policy perspective. *ACM Trans. Web* 2(4):19:1–19:27.

[1998] Feige, U. 1998. A threshold of ln *n* for approximating set cover. *Journal of the ACM* 45(4):634–652.
[2011] FTC. 2011. FTC charges against Facebook. http://www.ftc.gov/opa/2011/11/privacysettlement.shtm.
[2012] FTC. 2012. FTC charges against Google. http://www.ftc.gov/opa/2012/08/google.shtm.
[2013] Hassan, A., and White, R. W. 2013. Personalized models of search satisfaction. In *Proc. of CIKM*, 2009–2018.

[2009] Kaufman, L., and Rousseeuw, P. J. 2009. *Finding groups in data: an introduction to cluster analysis*, volume 344. Wiley.
[2005] Krause, A., and Guestrin, C. 2005. A note on the budgeted maximization of submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University.
[2007] Krause, A., and Guestrin, C. 2007. Near-optimal observation selection using submodular functions. In *Proc. of AAAI*, *Nectar track*.
[2008] Krause, A., and Horvitz, E. 2008. A utility-theoretic approach to privacy and personalization. In *Proc. of AAAI*.
[2010] Krause, A., and Horvitz, E. 2010. A utility-theoretic approach to privacy in online services. *Journal of Artificial Intelligence Research (JAIR)* 39:633–662.
[2013] Mirzasoleiman, B.; Karbasi, A.; Sarkar, R.; and Krause, A. 2013. Distributed submodular maximization: Identifying representative elements in massive data. In *Proc. of NIPS*.
[2008] Narayanan, A., and Shmatikov, V. 2008. Robust de-anonymization of large sparse datasets. In *Proc. of the IEEE Symposium on Security and Privacy*, 111–125.
[1978] Nemhauser, G.; Wolsey, L.; and Fisher, M. 1978. An analysis of the approximations for maximizing submodular set functions. *Math. Prog.* 14:265–294.
[2005] Olson, J.; Grudin, J.; and Horvitz, E. 2005. A study of preferences for sharing and privacy. In *Proc. of CHI*.
[2010] Singla, A., and White, R. W. 2010. Sampling high-quality clicks from noisy click data. In *Proc. of WWW*, 1187–1188.
[2012] Technet. 2012. Privacy and technology in balance. http://blogs.technet.com/b/microsoft_on_the_issues/archive/2012/10/26/privacy-and-technology-in-balance.aspx.
[2009] White, R. W.; Dumais, S. T.; and Teevan, J. 2009. Characterizing the influence of domain expertise on web search behavior. In *Proc. of WSDM*, 132–141.
[2006] Wikipedia-comScore. 2006. ComScore#Data_collection_and_reporting. http://en.wikipedia.org/wiki/ComScore#Data_collection_and_reporting.
[2007] Xu, Y.; Wang, K.; Zhang, B.; and Chen, Z. 2007. Privacy-enhancing personalized web search. In *Proc. of WWW*, 591–600. ACM.

**Proof of Lemma 1**

We prove Lemma 1 by proving three auxiliary lemmas (Lemmas 2, 3, and 4) that are not in the main paper. In Lemma 2, using the decomposability of the function $f$ from Equation 1, we prove that $f$ is non-negative, monotone (non-decreasing), and submodular. We then show that the function satisfies the properties of smoothness (Lemma 3) and diversification (Lemma 4) by giving upper bounds on the values of the parameters $\lambda_f$ and $\Upsilon_f$.
**Lemma 2.** Utility function *f* in Equation 1 is non-negative, monotone (non-decreasing) and submodular.
**Proof.** We begin by noting that *f* is decomposable, i.e., it can be written as a sum of simpler functions $f_w$ as:

$$
f(S) = \sum_{w \in W} f_w(S) \tag{3}
$$

where $f_w(S)$ is given by:

$$
f_w(S) = \frac{1}{|W|} \left( \min_{x \in X} D(o_x, o_w) - \min_{s \in S \cup X} D(o_s, o_w) \right) \tag{4}
$$

Next, we prove that each of these functions $f_w$ is non-negative, non-decreasing and submodular. To prove that the function is non-decreasing, consider any two sets $S \subseteq S' \subseteq W$. Then,

$$
\begin{split}
f_w(S') - f_w(S) &= \frac{1}{|W|} \left( \min_{s \in S \cup X} D(o_s, o_w) - \min_{s \in S' \cup X} D(o_s, o_w) \right) \\
&\ge 0
\end{split}
\tag{5}
$$

In step 5, the inequality holds as the distance to the nearest user for $w$ in $S'$ cannot be more than that in $S$, hence proving that $f_w$ is non-decreasing. Also, it is easy to see that $f_w(\emptyset) = 0$, which along with the non-decreasing property, ensures that the function $f_w$ is non-negative.
To prove that the function is submodular, consider any two sets $S \subseteq S' \subseteq W$, and any given user $v \in W \setminus S'$. When $f_w(S' \cup \{v\}) - f_w(S') = 0$, submodularity holds trivially as we have $f_w(S \cup \{v\}) - f_w(S) \ge 0$ by the non-decreasing property. Let us consider the case when $f_w(S' \cup \{v\}) - f_w(S') > 0$, i.e., $v$ is assigned as the nearest user to $w$ in the set $S' \cup \{v\}$, given by $v = \arg\min_{s \in S' \cup \{v\} \cup X} D(o_s, o_w)$. In this case, $v$ is also the nearest user to $w$ in the set $S \cup \{v\}$. Then, we can write the difference of marginal gains as follows:

$$
\begin{align*}
& \left(f_w(S' \cup \{v\}) - f_w(S')\right) - \left(f_w(S \cup \{v\}) - f_w(S)\right) \\
&= \frac{1}{|W|}\left(D(o_v, o_w) - \min_{s \in S' \cup X} D(o_s, o_w)\right) - \frac{1}{|W|}\left(D(o_v, o_w) - \min_{s \in S \cup X} D(o_s, o_w)\right) \\
&= \frac{1}{|W|}\left(\min_{s \in S \cup X} D(o_s, o_w) - \min_{s \in S' \cup X} D(o_s, o_w)\right) \\
&\le 0 \tag{6}
\end{align*}
$$

In step 6, the inequality holds as the function is non-decreasing, thus showing that the marginal gains diminish and hence proving the submodularity of the function $f_w$.
|
| 399 |
+
|
| 400 |
+
By using the fact that these properties are preserved under linear combination with non-negative weights (all equal to 1 from Equation 3), *f* is non-negative, non-decreasing and submodular.
|
| 401 |
+
□
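The decomposition above lends itself to a quick numerical check. The following is a minimal sketch (not from the paper): it instantiates $f_w$ with one-dimensional Euclidean distances, treats $X$ as a single pre-existing location, and brute-force checks non-negativity, monotonicity and diminishing returns on a small random instance. All names and parameters are illustrative.

```python
import itertools
import random

random.seed(0)

# Illustrative 1-D locations: X holds fixed existing locations,
# W is the ground set of candidate users.
X = [0.0]
W = [random.uniform(0, 10) for _ in range(6)]

def D(a, b):
    return abs(a - b)

def f(S):
    # f(S) = sum_w (1/|W|) * (min_{x in X} D(x,w) - min_{s in S ∪ X} D(s,w))
    total = 0.0
    for w in W:
        base = min(D(x, w) for x in X)
        cur = min(D(s, w) for s in list(S) + X)
        total += (base - cur) / len(W)
    return total

# Check non-negativity, monotonicity and submodularity on small subsets.
for S in map(set, itertools.combinations(W, 2)):
    assert f(S) >= 0.0                                 # non-negative
    for v in set(W) - S:
        assert f(S | {v}) >= f(S)                      # non-decreasing
        for u in set(W) - S - {v}:
            Sp = S | {u}                               # S ⊆ S' = S ∪ {u}
            lhs = f(S | {v}) - f(S)
            rhs = f(Sp | {v}) - f(Sp)
            assert lhs >= rhs - 1e-12                  # diminishing returns
print("f is non-negative, monotone and submodular on this instance")
```

The check exhausts all pairs $S \subseteq S'$ with $|S| = 2$, $|S'| = 3$, which is enough to exercise every case in the proof on a toy instance.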

**Lemma 3.** Utility function *f* in Equation 1 satisfies the smoothness property, i.e., it has bounded $\lambda_f$.

**Proof.** For any given set of users $S$, consider a set $\tilde{S}_\alpha$ obtained by replacing every $s \in S$ with some $w \in N_\alpha(s)$. The goal is to show that $|f(S) - f(\tilde{S}_\alpha)| \le \lambda_f \cdot \alpha \cdot |S|$ always holds for a fixed and bounded $\lambda_f$.

Let us again use the simpler functions $f_w$ from the decomposition of $f$ in Equation 3 and consider the difference $|f_w(S) - f_w(\tilde{S}_\alpha)|$. Then,

$$
|f_w(S) - f_w(\tilde{S}_\alpha)| \leq \frac{\alpha}{|W|} \tag{7}
$$

In step 7, the inequality holds because the deviation in the distance to the nearest user for $w$ in $\tilde{S}_{\alpha}$ cannot exceed $\alpha$. Using this result, we have

$$
\begin{align}
|f(S) - f(\tilde{S}_{\alpha})| &= \left| \sum_{w \in W} f_w(S) - \sum_{w \in W} f_w(\tilde{S}_{\alpha}) \right| \notag \\
& \leq \sum_{w \in W} |f_w(S) - f_w(\tilde{S}_\alpha)| \notag \\
& \leq \sum_{w \in W} \frac{\alpha}{|W|} \tag{8} \\
& = \alpha \leq \alpha \cdot |S| \tag{9}
\end{align}
$$

The inequality in step 8 holds by the result of step 7, and the inequality in step 9 holds trivially since $|S| \ge 1$. Hence, the smoothness parameter $\lambda_f$ of the function is bounded by 1. $\square$

**Lemma 4.** Utility function *f* in Equation 1 satisfies the diversification property, i.e., it has bounded $\Upsilon_f$.

**Proof.** For any given set of users $S$ and any new user $v \in W \setminus S$, define $\alpha = \min_{s \in S} D(s, v)$. The goal is to show that $f(S \cup \{v\}) - f(S) \le \Upsilon_f \cdot \alpha$ always holds for a fixed and bounded $\Upsilon_f$.

Again, consider the function $f_w$ and the marginal gain of adding $v$ to $S$, given by $f_w(S \cup \{v\}) - f_w(S)$. When $v$ is not the nearest user to $w$ in the set $S \cup \{v\}$, we have $f_w(S \cup \{v\}) - f_w(S) = 0$. Consider the case where $f_w(S \cup \{v\}) - f_w(S) > 0$, i.e., $v$ is assigned as the nearest user to $w$ from the set $S \cup \{v\}$, given by $v = \arg\min_{s \in S \cup \{v\} \cup X} D(o_s, o_w)$. Let $s^* = \arg\min_{s \in S} D(s, v)$ be the user in $S$ nearest to $v$, so that $D(v, s^*) = \alpha$. Then, we have:

$$
\begin{align}
f_w(S \cup \{v\}) - f_w(S) &= \frac{1}{|W|} \left( \min_{s \in S \cup X} D(o_s, o_w) - D(o_v, o_w) \right) \nonumber \\
&\le \frac{1}{|W|} \left( D(o_{s^*}, o_w) - D(o_v, o_w) \right) \le \frac{D(v, s^*)}{|W|} \tag{10} \\
&= \frac{\alpha}{|W|} \tag{11}
\end{align}
$$

In step 10, the first inequality replaces the minimum by the particular user $s^* \in S$, and the second uses the triangle inequality of the underlying metric space. Step 11 holds by the definition of $\alpha$. Then, we have

$$
\begin{align}
f(S \cup \{v\}) - f(S) &= \sum_{w \in W} \left( f_w(S \cup \{v\}) - f_w(S) \right) \nonumber \\
&\leq \sum_{w \in W} \frac{\alpha}{|W|} \tag{12} \\
&= \alpha \nonumber
\end{align}
$$

The inequality in step 12 holds by the result of step 11. Hence, the diversification parameter $\Upsilon_f$ of the function is bounded by 1. $\square$

**Proof of Lemma 1.** The proof follows directly from the results in Lemmas 2, 3 and 4. $\square$
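The bounds $\lambda_f \le 1$ and $\Upsilon_f \le 1$ from Lemmas 3 and 4 can also be sanity-checked numerically. A minimal sketch, again assuming one-dimensional Euclidean distances and an illustrative random instance (none of these numbers come from the paper):

```python
import random

random.seed(1)
X = [0.0]                                   # fixed existing locations
W = [random.uniform(0, 10) for _ in range(8)]

def D(a, b):
    return abs(a - b)

def f(S):
    # toy facility-location utility from Equation 3/4
    return sum(min(D(x, w) for x in X) - min(D(s, w) for s in list(S) + X)
               for w in W) / len(W)

# Smoothness (Lemma 3): perturb each member of S by at most alpha.
S = set(random.sample(W, 3))
alpha = 0.05
S_tilde = {s + random.uniform(-alpha, alpha) for s in S}
assert abs(f(S) - f(S_tilde)) <= alpha * len(S) + 1e-12   # lambda_f <= 1

# Diversification (Lemma 4): the marginal gain of a new user v is
# at most its distance to the nearest member of S.
v = 4.321
alpha_v = min(D(s, v) for s in S)
assert f(S | {v}) - f(S) <= alpha_v + 1e-12               # Upsilon_f <= 1
print("smoothness and diversification bounds hold on this instance")
```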

**Proof of Theorem 2**

**Proof of Theorem 2.** Let $S^{\text{OPT}}$ be the set returned by OPT for Problem 2 without the privacy constraints. By the hypothesis of the theorem, for each element $s \in S^{\text{OPT}}$, the $\alpha_{rg}$-neighborhood of $s$ contains a set of at least $1/r \cdot \log(B/\epsilon)$ users. Furthermore, by hypothesis, these sets of size at least $1/r \cdot \log(B/\epsilon)$ can be constructed to be mutually disjoint; let us denote them by $\tilde{N}_{\alpha_{rg}}(s)$. Formally, this means that for every $s \in S^{\text{OPT}}$ we have $|\tilde{N}_{\alpha_{rg}}(s)| \geq 1/r \cdot \log(B/\epsilon)$, and for any pair $s, s' \in S^{\text{OPT}}$ with $s \neq s'$ we have $\tilde{N}_{\alpha_{rg}}(s) \cap \tilde{N}_{\alpha_{rg}}(s') = \emptyset$.

Recall that the simpler version of RANDGREEDY first samples the users from $W$ at rate $r$ to create a subset $\tilde{W}$ such that $|\tilde{W}| = |W| \cdot r$. We first show that sampling at rate $r$ ensures that, with high probability (at least $1 - \epsilon$), at least one user is sampled from $\tilde{N}_{\alpha_{rg}}(s)$ for each $s \in S^{\text{OPT}}$. Consider the sampling process for a fixed $s$ and $\tilde{N}_{\alpha_{rg}}(s)$. Each user in $\tilde{N}_{\alpha_{rg}}(s)$ is sampled with probability $r$. Hence, the probability that none of the users in $\tilde{N}_{\alpha_{rg}}(s)$ is included in $\tilde{W}$ for the given $s$ is:

$$
\begin{align*}
P(\tilde{N}_{\alpha_{rg}}(s) \cap \tilde{W} = \emptyset) &= (1-r)^{1/r \cdot \log(B/\epsilon)} \\
&\le e^{-\log(B/\epsilon)} \\
&= \epsilon/B
\end{align*}
$$

By the union bound, the probability that, for some $s \in S^{\text{OPT}}$, none of the users in $\tilde{N}_{\alpha_{rg}}(s)$ is included in $\tilde{W}$ is bounded by $\epsilon$ (given by $B \cdot \epsilon/B$). Hence, with probability at least $1 - \epsilon$, the sampled set $\tilde{W}$ contains at least one user from $\tilde{N}_{\alpha_{rg}}(s)$ for every $s \in S^{\text{OPT}}$.

This is equivalent to saying that, with probability at least $1 - \epsilon$, $\tilde{W}$ contains a set $\tilde{S}_{\alpha_{rg}}^{\text{OPT}}$ that can be obtained by replacing every $s \in S^{\text{OPT}}$ with some $w \in N_{\alpha_{rg}}(s)$, and hence $f(\tilde{S}_{\alpha_{rg}}^{\text{OPT}}) \ge f(S^{\text{OPT}}) - \alpha_{rg} \cdot \lambda_f \cdot B$ (by the definition of the smoothness property). Running GREEDY on $\tilde{W}$ then ensures that the utility obtained is at least $(1-1/e) \cdot f(\tilde{S}_{\alpha_{rg}}^{\text{OPT}})$. Hence, with probability at least $1-\epsilon$,

$$
\begin{aligned}
\mathbb{E}[f(\text{RANDGREEDY})] &\ge (1 - 1/e) \cdot f(\tilde{S}_{\alpha_{rg}}^{\text{OPT}}) \\
&\ge (1 - 1/e) \cdot (f(\text{OPT}) - \alpha_{rg} \cdot \lambda_f \cdot B)
\end{aligned}
\tag*{$\square$}
$$
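The failure-probability calculation above is easy to verify concretely. A short sketch with illustrative parameters (r, B and ε below are not from the paper):

```python
import math

# Failure probability for one neighborhood: (1-r)^{(1/r)·log(B/eps)} <= eps/B,
# via the standard bound (1-r)^{1/r} <= e^{-1}.
r, B, eps = 0.1, 20, 0.05                      # illustrative parameters
n_users = (1 / r) * math.log(B / eps)          # neighborhood size hypothesis
p_miss_one = (1 - r) ** n_users                # no user sampled for one s
assert p_miss_one <= eps / B

p_miss_any = B * p_miss_one                    # union bound over S^OPT
assert p_miss_any <= eps
print(f"miss one neighborhood: {p_miss_one:.4g}; union bound: {p_miss_any:.4g}")
```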

**Proof of Theorem 3**

**Proof of Theorem 3.** Let $S^{\text{GREEDY}}$ be the set returned by GREEDY for Problem 2 without the privacy constraints. By the hypothesis of the theorem, for each element $s \in S^{\text{GREEDY}}$, the $\alpha_{spg}$-neighborhood of $s$ contains a set of at least $1/r$ users. The loss of utility of the procedure SPGREEDY relative to GREEDY at iteration $i$ can be attributed to the two following reasons: (1) obfuscation of $s_i^*$ with the set $\psi(s_i^*)$ of size $1/r$ to select $\tilde{s}_i^*$, and (2) removal of the entire set $\psi(s_i^*)$ from further consideration. We analyze these two factors separately to obtain the desired bounds on the utility of SPGREEDY.

We begin by stating a more general result on the approximation guarantees of GREEDY from (Krause and Guestrin 2005) for the case when the submodular objective function can only be evaluated approximately, within an absolute error of $\epsilon$. The result from (Krause and Guestrin 2005) states that the utility obtained by this noisy greedy selection is at least $(1-1/e) \cdot \text{OPT} - 2 \cdot \epsilon \cdot B$, where $B$ is the budget.

Now, consider an alternate procedure that operates similarly to SPGREEDY, obfuscating $s_i^*$ with the set $\psi(s_i^*)$ to pick $\tilde{s}_i^*$ at each iteration $i$. However, this alternate procedure does not eliminate the entire set of users $\psi(s_i^*)$ from the pool, but only removes $\tilde{s}_i^*$. Instead, it tags the users of $\psi(s_i^*) \setminus \{\tilde{s}_i^*\}$ as $\langle \text{invalid}, i \rangle$, i.e., these users are marked as invalid and tagged with the iteration $i$ at which they became invalid (if a user was already marked as invalid, the iteration tag is not updated). Let us denote this alternate procedure by $\overline{\text{SPGREEDY}}$. It can be viewed as a variant of GREEDY that picks the user at every iteration only approximately, because of the noise added by obfuscation. We now bound the absolute value of this approximation error at every iteration. As $s_i^*$ is obfuscated with the set of $1/r$ users nearest to $s_i^*$, by the hypothesis of the theorem the set $\psi(s_i^*)$ is contained within the $\alpha_{spg}$-neighborhood of $s_i^*$. By the smoothness assumption, the maximum absolute error introduced by the obfuscation compared to greedy selection (i.e., the difference in marginal utilities of $s_i^*$ and $\tilde{s}_i^*$) at a given iteration $i$ is bounded by $\lambda_f \cdot \alpha_{spg}$. Hence, the utility obtained by $\overline{\text{SPGREEDY}}$ can be lower-bounded as:

$$ f(\overline{\text{SPGREEDY}}) \ge (1 - 1/e) \cdot \text{OPT} - 2 \cdot \lambda_f \cdot \alpha_{spg} \cdot B \tag{13} $$

Next, we consider the loss associated with the removal of the entire set $\psi(s_i^*)$ at iteration $i$. Consider the execution of $\overline{\text{SPGREEDY}}$ and let $l+1$ be the first iteration at which the obfuscation set $\psi(s_{l+1}^*)$ created by the procedure contains at least one element marked as invalid, with associated iteration of invalidity $k$. Note that when $l+1 > B$, there is no loss associated with removing $\psi(s_i^*)$, so we only consider the case $l+1 \le B$. As the users are embedded in Euclidean space, this means that the $\alpha_{spg}$-neighborhoods centered around $s_{l+1}^*$ and $s_k^*$ overlap, and hence $D(s_{l+1}^*, s_k^*) \le 2 \cdot \alpha_{spg}$. By the diversification assumption, the marginal utility of $s_{l+1}^*$ cannot exceed $2 \cdot \Upsilon_f \cdot \alpha_{spg}$. Furthermore, submodularity ensures that for all $j > l+1$, the marginal utility of the selected users can only be smaller than the marginal utility of $s_{l+1}^*$.

Let us consider a truncated version of $\overline{\text{SPGREEDY}}$ that stops after $l$ steps, denoted by $\overline{\text{SPGREEDY}}_V$, where $V$ denotes the fact that this procedure is always valid, as it never touches users marked invalid. The utility of the truncated version can be lower-bounded as follows:

$$
\begin{align}
f(\overline{\text{SPGREEDY}}_V) &\ge f(\overline{\text{SPGREEDY}}) - (B-l) \cdot (2 \cdot \Upsilon_f \cdot \alpha_{spg}) \nonumber \\
&\ge f(\overline{\text{SPGREEDY}}) - 2 \cdot \Upsilon_f \cdot \alpha_{spg} \cdot B \nonumber \\
&\ge (1 - 1/e) \cdot \text{OPT} - (2 \cdot \lambda_f + 2 \cdot \Upsilon_f) \cdot \alpha_{spg} \cdot B \tag{14}
\end{align}
$$

Step 14 follows by using the result in step 13. For the first $l$ iterations, the execution of the mechanism SPGREEDY is exactly the same as that of $\overline{\text{SPGREEDY}}_V$. Hence, SPGREEDY acquires utility at least that of $\overline{\text{SPGREEDY}}_V$, which completes the proof. □
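The mechanics analyzed above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: `k` stands in for the neighborhood size $1/r$, the utility is the toy facility-location objective, and the selection inside $\psi(s_i^*)$ is uniform at random.

```python
import random

random.seed(2)

def sp_greedy(W, f, B, k, D):
    """Simplified sketch of SPGREEDY: obfuscate each greedy pick s* with the
    set psi of its k nearest remaining candidates, select a random member of
    psi, and drop all of psi from further consideration."""
    pool, S = set(W), set()
    while len(S) < B and pool:
        # greedy choice by marginal gain
        s_star = max(pool, key=lambda s: f(S | {s}) - f(S))
        # psi(s*): the k candidates nearest to s* (including s* itself)
        psi = set(sorted(pool, key=lambda s: D(s, s_star))[:k])
        S.add(random.choice(list(psi)))      # obfuscated selection
        pool -= psi                          # remove the entire set psi
    return S

# toy instance reusing the facility-location utility
X = [0.0]
W = [random.uniform(0, 10) for _ in range(12)]
D = lambda a, b: abs(a - b)
f = lambda S: sum(min(D(x, w) for x in X) - min(D(s, w) for s in list(S) + X)
                  for w in W) / len(W)
S = sp_greedy(W, f, B=3, k=2, D=D)
assert len(S) <= 3 and S <= set(W)
```

Removing all of $\psi(s_i^*)$, rather than only the selected user, is what prevents two selected users from being traced back to overlapping neighborhoods; the proof above charges the resulting utility loss to the diversification parameter.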

samples/texts_merged/6274397.md

1.

(a) Rate = k [H₂]²[Br₂]⁰; k = 1.92 × 10⁻³ M⁻¹ s⁻¹

(b) 2<sup>nd</sup> order overall (2 + 0 = 2)

2.

(a) Rate = k [O₂][NO]²

(b) k = 500 M⁻² s⁻¹

3.

(a) (mol L⁻¹)² s⁻¹

(b) M⁻¹ min⁻¹

(c) g s⁻¹

There are, of course, other ways to express these units correctly.

4. Start with known concentrations of the two reactants and carry out the reaction. Measure the rate of reaction by collecting the gas, recording the volume produced per unit time. This is the first rate.

Repeat the experiment, this time changing the concentration of one of the reactants by a specific, known amount while leaving the other concentration unchanged. Once again measure the rate of reaction by collecting the gas and recording the volume produced per unit time. This is the second rate.

Repeat the experiment once more, this time changing the concentration of the reactant that remained constant in the first repetition by a specific, known amount, while reverting the reactant that was changed in the first repetition to its original concentration. Once again measure the rate of reaction by collecting the gas and recording the volume produced per unit time. This is the third rate.

A comparison of the rate changes with the associated concentration changes leads to determination of the orders with respect to each reactant.
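The comparison described in answer 4 (the method of initial rates) can be carried out mechanically. A short sketch with hypothetical initial-rate data (the numbers below are invented for illustration, not taken from the problems above):

```python
import math

# Hypothetical initial-rate data for A + B -> products:
# run:   [A] (M)  [B] (M)   rate (M/s)
runs = [(0.10, 0.10, 2.0e-4),
        (0.20, 0.10, 8.0e-4),    # [A] doubled -> rate x4 => order 2 in A
        (0.10, 0.20, 2.0e-4)]    # [B] doubled -> rate x1 => order 0 in B

def order(c1, c2, r1, r2):
    # rate2/rate1 = (c2/c1)^n  =>  n = log(r2/r1) / log(c2/c1)
    return round(math.log(r2 / r1) / math.log(c2 / c1))

n_A = order(runs[0][0], runs[1][0], runs[0][2], runs[1][2])
n_B = order(runs[0][1], runs[2][1], runs[0][2], runs[2][2])
k = runs[0][2] / (runs[0][0] ** n_A * runs[0][1] ** n_B)
print(f"order in A = {n_A}, order in B = {n_B}, k = {k:.3g}")
# -> order in A = 2, order in B = 0, k = 0.02
```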

samples/texts_merged/6422547.md

The diff for this file is too large to render. See raw diff.

samples/texts_merged/6708780.md

UDK 517.5

M. Ye. Tkachenko*, V. M. Traktynska**

* Oles Honchar Dnipro National University, Dnipro, 49050. E-mail: mtkachenko2009@ukr.net

** Oles Honchar Dnipro National University, Dnipro, 49050. E-mail: traktynskaviktoriia@gmail.com

# The uniqueness of the best non-symmetric $L_1$-approximant for continuous functions with values in $\mathbb{R}_p^m$

**Abstract.** The article considers the question of the uniqueness of the best non-symmetric $L_1$-approximant of continuous functions with values in $\mathbb{R}_p^m$, $p \in (1; +\infty)$, by elements of the two-dimensional subspace $H_2 = \text{span}\{1, g_{a,b}\}$, where

$$g_{a,b}(x) = \begin{cases} -b \cdot (x-1)^2, & x \in [0; 1), \\ 0, & x \in [1; a-1), \\ (x-a+1)^2, & x \in [a-1; a], \end{cases} \quad (a \ge 2,\ b > 0).$$

It is shown that for $b \in (0; 1) \cup (1; +\infty)$ and $a \ge 2$, the subspace $H_2$ is a unicity space of the best $(\alpha, \beta)$-approximation for functions continuous on $[0; a]$ with values in the space $\mathbb{R}_p^m$, $p \in (1; +\infty)$. In the case $b = 1$, $a \ge 4$, it is proved that the subspace $H_2$ is not a unicity subspace of the best non-symmetric approximation for these functions. The results obtained generalize the earlier results of Strauss for real functions in the case $\alpha = \beta = 1$, as well as the results of Babenko and Glushko, to the case of the best $(\alpha, \beta)$-approximation of continuous functions on a segment with values in the space $\mathbb{R}_p^m$, $p \in (1; +\infty)$.

**Key words:** non-symmetric approximation, unicity space of the best non-symmetric approximation, vector-valued functions, integral metric.

MSC2020: PRI 41A52, SEC 41A65, 46B40

Let $X$ be a partially ordered vector space whose order is consistent with the algebraic operations.

The following definitions are given in [4].

Let $E \subset X$ be a non-empty set. The element $y \in X$ is called the supremum (infimum) of the set $E$ and is denoted by $\sup E$ ($\inf E$) if the following conditions hold:

1) $x \le y$ ($x \ge y$) $\forall x \in E$;

2) for any element $z \in X$ such that $x \le z$ ($x \ge z$) $\forall x \in E$, it follows that $y \le z$ ($y \ge z$).

If the set $E$ consists of elements $x_1, x_2, \dots, x_n$, the supremum of $E$ is denoted by $x_1 \lor x_2 \lor \dots \lor x_n$ and the infimum of $E$ by $x_1 \land x_2 \land \dots \land x_n$.

Suppose that in the space $X$, for any two elements $x, y \in X$, there exists their supremum $x \vee y$; then the element $x_+ = x \vee 0$ is called the positive part of the element $x \in X$, the element $x_- = (-x) \vee 0$ its negative part, and the element $|x| = x_+ + x_-$ the modulus of the element $x$.

Let the order of a partially ordered vector space $X$ be consistent with the algebraic operations, and let the supremum $x \vee y$ exist for any two elements $x, y \in X$. Then $X$ is called a KN-lineal if a monotone norm is defined on $X$, i.e., $|x| \le |y| \Rightarrow \|x\|_X \le \|y\|_X$.

A KN-lineal is called a KN-space (or a K<sub>σ</sub>N-space) if for any (or any countable) non-empty set bounded above or below there exists its upper or lower bound, respectively.

A $K_{\sigma}N$-space is called a KB-space if its norm satisfies two conditions:

1) $\|x_n\|_X \to 0$ if $x_n \downarrow 0$;

2) $\|x_n\|_X \to +\infty$ if $x_n \uparrow +\infty$ ($x_n \ge 0$).

Let $Q$ be a metric compact set with metric $\rho$, $\Sigma$ a $\sigma$-field of Borel subsets of $Q$, and $\mu$ a non-atomic, non-negative, finite measure. Furthermore, assume that $\mu$ is positive on each non-empty open subset of $\Sigma$.

Let $X$ be a KB-space with the norm $\|\cdot\|_X$.

By $C(Q, X)$ denote the space of continuous functions $f : Q \rightarrow X$.

For any $x \in Q$ and positive numbers $\alpha, \beta$ put

$$
|f(x)|_{\alpha, \beta} = \alpha \cdot f_{+}(x) + \beta \cdot f_{-}(x),
$$

$$
\|f(x)\|_{X;\alpha,\beta} = \|\alpha \cdot f_{+}(x) + \beta \cdot f_{-}(x)\|_{X},
$$

where $f_{\pm}(x) = (\pm f(x)) \vee 0$.

Suppose the space $C(Q, X)$ is supplied with the non-symmetric $L_1$-norm:

$$
\|f\|_{1; \alpha, \beta} = \int_Q \|f(x)\|_{X; \alpha, \beta}\, d\mu(x).
$$

For $f \in C(Q, X)$ and $H \subset C(Q, X)$, the quantity

$$E(f, H)_{1;\alpha,\beta} = \inf_{g \in H} \|f - g\|_{1;\alpha,\beta} \tag{1}$$

is called the best $(\alpha, \beta)$-approximation of a function $f$ by the set $H$ in the metric $L_1$. The function $g^* \in H$ is the best $(\alpha, \beta)$-approximant of a function $f$ by elements of the set $H$ in the metric $L_1$ if $g^*$ realizes the greatest lower bound in equality (1). By $Z_f$ denote the set of zeros of a function $f$, and $N_f = Q \setminus Z_f$.

For $f, g \in C(Q, X)$ and $x \in Q$ put

$$\tau_{-}^{(\alpha,\beta)}(f(x), g(x))_X = \lim_{t \to 0^-} \frac{\|(f+tg)(x)\|_{X;\alpha,\beta} - \|f(x)\|_{X;\alpha,\beta}}{t}.$$

For $\alpha = \beta = 1$ such a functional was considered in [5] and [1].

The following theorem was proved in [3].

**Theorem 1.** ([3]) Let $H$ be a subspace of $C(Q, X)$. An element $g^*$ is the best $(\alpha, \beta)$-approximant of a function $f \in C(Q, X)$ by elements from $H$ in the metric $L_1$ iff $\forall g \in H$

$$\int_{N_{f-g^*}} \tau_-^{(\alpha,\beta)} (f-g^*, g)_X\, d\mu(x) \leq \int_{Z_{f-g^*}} \|g(x)\|_{X;\beta,\alpha}\, d\mu(x). \tag{2}$$

A normed space $X$ is called strictly convex if for any $x, y \in X$ such that $\|x+y\| = \|x\| + \|y\|$, there is $\lambda \in \mathbb{R}$ such that $y = \lambda \cdot x$.

Let $X$ be a strictly convex KB-space with a strictly monotone norm, i.e., $|x| < |y| \Rightarrow \|x\|_X < \|y\|_X$.

Let $H$ be a subspace of the space $C(Q, X)$. We set

$$H' = \{h \in C(Q, X) : \exists g_h \in H \ \forall x \in Q \ h(x) = \pm g_h(x)\}.$$

Originally, such sets were introduced by Hans Strauss [2] for $X = \mathbb{R}$, $Q = [a, b]$.

Using the methods of Strauss [2], the following theorem was proved in [3]; it generalizes results of [2], [5], [6], [8].

**Theorem 2.** ([3]) Let $X$ be a strictly convex KB-space with a strictly monotone norm, and let $H$ be a subspace of $C(Q, X)$. Each function $f \in C(Q, X)$ has at most one best $(\alpha, \beta)$-approximant by elements from $H$ iff each function $h \in H'$ has at most one best $(\alpha, \beta)$-approximant by the subspace $H$.

Corollary 1 follows from Theorem 1 and Theorem 2.

**Corollary 1.** Let $X$ be a strictly convex KB-space with a strictly monotone norm, and let $H$ be a subspace of the space $C(Q, X)$. Each function $f \in C(Q, X)$ has at most one best $(\alpha, \beta)$-approximant by elements from $H$ iff for any function $h \in H' \setminus \{0\}$ there exists a function $g_0 \in H$ such that

$$\int_{N_h} \tau_-^{(\alpha,\beta)} (h(x), g_0(x))_X\, d\mu(x) > \int_{Z_h} \|g_0(x)\|_{X;\beta,\alpha}\, d\mu(x).$$
The results stated above extend the known results of Strauss (see [2]) to the case of non-symmetric approximation of functions from the space $C(Q, X)$. In 1994, Babenko and Glushko (see [6]) indicated another class of functions that has the same characteristic property, but is constructed independently of the form of the functions of the approximating subspace. Their results were also generalized to the case of non-symmetric approximation of functions from the space $C(Q, X)$ in [3]. Namely, the following results have been proved.

Subspaces of the following type are considered as approximating subspaces. Let $\{u_i(t)\}_{i=1}^n$ be a system of linearly independent functions from $C(Q, \mathbb{R})$. We set

$$H_n = \left\{p(x) = \sum_{i=1}^{n} a_i u_i(x) : a_i \in X,\ i = 1, \dots, n\right\}.$$

Note that $H_n$ is a subspace of weak dimension $n$. The weak dimension was introduced in [7].

The following theorem was proved in [3].

**Theorem 3.** ([3]) Let $X$ be a KB-space. Then the subspace $H_n$ of the space $C(Q, X)$ is a set of existence of the best $(\alpha, \beta)$-approximant for any function $g \in C(Q, X)$.

By $\omega(u, x)$ we denote the modulus of continuity of the function $u \in C(Q, X)$.

Let $Q$ be a metrically convex compact set, i.e., for any $x_0, x_1 \in Q$ and any $\lambda \in (0; 1)$ there exists a point $x_\lambda \in Q$ such that $\rho(x_0, x_\lambda) = \lambda \cdot \rho(x_0, x_1)$ and $\rho(x_\lambda, x_1) = (1-\lambda) \cdot \rho(x_0, x_1)$.

For $g \in C(Q, X)$ put $\bar{g}(t) = \frac{g(t)}{\|g(t)\|_X}$ if $t \in Q \setminus Z_g$, and $\bar{g}(t) = 0$ if $t \in Z_g$.

Let also $\omega(x) = \max_{i=1,\dots,n} \omega(u_i, x)$ and, for a non-empty set $M \subset Q$,

$$E(x, M) = \inf_{y \in M} \rho(x, y).$$

Put

$$H'' = \{h \in C(Q, X) : \exists p_h \in H_n \quad \forall x \in Q \ h(x) = \pm \bar{p}_h(x) \cdot \omega(E(x, Z_{p_h}))\}.$$

The following theorem is a generalization of Theorem 2 from [6] and Theorem 5 from [8] to the case of non-symmetric approximation of functions from $C(Q, X)$ by elements of the subspace $H_n$.

**Theorem 4.** ([3]) Let $X$ be a strictly convex KB-space with a strictly monotone norm, and let $Q$ be a metrically convex compact set. Each function $f \in C(Q, X)$ has a unique best $(\alpha, \beta)$-approximant in $H_n$ iff each function $h \in H''$ has a unique best $(\alpha, \beta)$-approximant in $H_n$.
**Corollary 2.** ([3]) Let $X$ be a strictly convex KB-space with a strictly monotone norm, and let $Q$ be a metrically convex compact set. Each function $f \in C(Q, X)$ has a unique best $(\alpha, \beta)$-approximant by elements from $H_n$ iff for any function $h \in H'' \setminus \{0\}$ there exists a function $p \in H_n$ such that

$$ \int_{N_h} \tau_{-}^{(\alpha,\beta)}(h(x), p(x))_X\, d\mu(x) > \int_{Z_h} \|p(x)\|_{X;\beta,\alpha}\, d\mu(x). $$

Now let $X = \mathbb{R}_p^m$ be the space of vectors $\mathbf{f} = (f^1, f^2, \dots, f^m)$ with the norm

$$ \|\mathbf{f}\|_{\mathbb{R}_p^m} = \left( \sum_{j=1}^{m} |f^j|^p \right)^{\frac{1}{p}}, \quad (1 < p < +\infty). $$

Let $\|\mathbf{f}\|_{\mathbb{R}_p^m; \alpha, \beta} = \left( \sum_{j=1}^{m} |f^j|_{\alpha, \beta}^p \right)^{\frac{1}{p}}$ be the $(\alpha, \beta)$-norm.

The derivative $\tau_-^{(\alpha,\beta)}(f,g)$ in the space $\mathbb{R}_p^m$ has the form

$$ \tau_{-}^{(\alpha,\beta)}(\mathbf{f},\mathbf{g})(x) = \frac{\sum_{j=1}^{m} g^j(x)\, |f^j(x)|_{\alpha,\beta}^{p-1} \cdot \operatorname{sgn}_{\alpha,\beta} f^j(x)}{\left( \sum_{j=1}^{m} |f^j(x)|_{\alpha,\beta}^p \right)^{1-1/p}}, \quad x \in Q \setminus Z_{\mathbf{f}}, $$

where $\operatorname{sgn}_{\alpha,\beta} f^j(x) = \alpha \cdot \operatorname{sgn} f_+^j(x) - \beta \cdot \operatorname{sgn} f_-^j(x)$, $\mathbf{f} = (f^1, f^2, \dots, f^m)$, $\mathbf{g} = (g^1, g^2, \dots, g^m) \in C(Q, \mathbb{R}_p^m)$.
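The non-symmetric modulus $|t|_{\alpha,\beta} = \alpha t_+ + \beta t_-$ and the resulting $(\alpha, \beta)$-norm on $\mathbb{R}_p^m$ are straightforward to compute. A small numerical sketch (the vector and weights below are illustrative, not from the article):

```python
# (alpha, beta)-norm in R_p^m:
# |t|_{a,b} = a*t_+ + b*t_-, and ||f||_{p;a,b} = (sum_j |f^j|_{a,b}^p)^(1/p).
def nonsym_abs(t, a, b):
    # positive part weighted by a, negative part weighted by b
    return a * max(t, 0.0) + b * max(-t, 0.0)

def nonsym_norm(f, p, a, b):
    return sum(nonsym_abs(t, a, b) ** p for t in f) ** (1.0 / p)

f = [3.0, -4.0]
# With alpha = beta = 1 this reduces to the usual l_p norm: ||(3,-4)||_2 = 5.
assert abs(nonsym_norm(f, 2, 1.0, 1.0) - 5.0) < 1e-12
# Asymmetric weights penalize positive and negative parts differently:
# |3|_{2,1} = 6, |-4|_{2,1} = 4, so the norm becomes sqrt(36 + 16).
assert abs(nonsym_norm(f, 2, 2.0, 1.0) - (36 + 16) ** 0.5) < 1e-12
print("nonsymmetric norm checks passed")
```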
|
| 177 |
+
In what follows, we consider the linear span of set of functions
|
| 178 |
+
|
| 179 |
+
$$ g(x) = 1, \forall x \in [0, a], $$
|
| 180 |
+
|
| 181 |
+
$$ g_{a,b}(x) = \begin{cases} -b \cdot (x-1)^2, & x \in [0; 1), \\ 0, & x \in [1; a-1), \\ (x-a+1)^2, & x \in [a-1, a], \end{cases} \quad (a \ge 2, b > 0), $$
|
| 182 |
+
|
| 183 |
+
as an approximating subspace.
|
| 184 |
+
|
| 185 |
+
Such a subspace was considered by Strauss in [2]. He proved that $\operatorname{span}\{g, g_{a,b}\}$ is a weakly Chebyshev subspace.
|
| 186 |
+
|
| 187 |
+
Then the subspace $H_n$ can be written in the form
|
| 188 |
+
|
| 189 |
+
$$ H_2 = \{ \mathbf{p} = (p^1, p^2, ..., p^m) : p^j(x) = c_1^j + c_2^j \cdot g_{a,b}(x) \}. $$
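The function $g_{a,b}$ glues continuously at the knots $x = 1$ and $x = a - 1$; a quick numerical sketch of the basis pair $\{g, g_{a,b}\}$ and of one component of an element of $H_2$ (the values of $a$, $b$, $c_1$, $c_2$ are illustrative):

```python
def g(x, a):
    # the constant basis function on [0, a]
    return 1.0

def g_ab(x, a, b):
    # piecewise definition of g_{a,b} (a >= 2, b > 0)
    if x < 1:
        return -b * (x - 1) ** 2
    if x < a - 1:
        return 0.0
    return (x - a + 1) ** 2

a, b = 3.0, 0.5
# continuity at the knots x = 1 and x = a - 1
eps = 1e-8
assert abs(g_ab(1 - eps, a, b) - g_ab(1, a, b)) < 1e-6
assert abs(g_ab(a - 1 - eps, a, b) - g_ab(a - 1, a, b)) < 1e-6

# one component of p in H_2: p(x) = c1*g(x) + c2*g_{a,b}(x)
c1, c2 = 0.3, -1.2
p = lambda x: c1 * g(x, a) + c2 * g_ab(x, a, b)
print(p(0.0), p(a / 2), p(a))  # endpoint, flat middle part, endpoint
```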
Let us consider the following cases.

1) $b \neq 1$, $a \geq 2$. Let us show that in this case every function $f \in C([0;a], \mathbb{R}_p^m)$ has a unique best $(\alpha, \beta)$-approximant in $H_2$ in the metric $L_1$, i.e. that $\forall h \in H'' \setminus \{0\}$ $\exists \mathbf{p}_0 \in H_2$:

$$ \int_{N_h} \tau_{-}^{(\alpha,\beta)}(\mathbf{h}(x), \mathbf{p}_0(x))_X \, dx > \int_{Z_h} \|\mathbf{p}_0(x)\|_{\mathbb{R}_p^m; \beta,\alpha} \, dx. $$

---PAGE_BREAK---
Let us introduce the notation:

$$I = \{j : h^j(x) \ge 0 \quad \forall x \in [0; a]\},$$

$$J = \{j : h^j(x) \le 0 \quad \forall x \in [0; a]\},$$

$$M = \{j : h^j(x) \text{ changes sign on } [0; a]\}.$$

Let $M \neq \emptyset$. Then for every $j \in M$ there exists $p_h^j$ such that $\operatorname{sgn} h^j(x) = \operatorname{sgn} p_h^j(x)$ for all $x \in [0; a]$. Take $\mathbf{p}_0 = (p_0^1, \dots, p_0^m)$, where

$$p_0^j = \begin{cases} p_h^j, & j \in M, \\ 0, & j \notin M. \end{cases}$$

Since $M \neq \emptyset$, we have $\mathbf{p}_0 \neq \mathbf{0}$ and

$$\int_{N_h} \tau_{-}^{(\alpha,\beta)}(\mathbf{h}(x), \mathbf{p}_0(x))_X dx = \int_{[0;a] \setminus Z_h} \frac{\sum_{j=1}^{m} p_0^j(x) |h^j(x)|_{\alpha,\beta}^{p-1} \cdot \operatorname{sgn}_{\alpha,\beta} h^j(x)}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha,\beta}^{p} \right)^{1-1/p}} dx =$$

$$= \int_{[0;a] \setminus Z_h} \frac{\sum_{j \in M} p_h^j(x) |h^j(x)|_{\alpha,\beta}^{p-1} \cdot \operatorname{sgn}_{\alpha,\beta} h^j(x)}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha,\beta}^{p} \right)^{1-1/p}} dx = \int_{[0;a] \setminus Z_h} \frac{\sum_{j \in M} |p_h^j(x)|_{\alpha,\beta} \cdot |h^j(x)|_{\alpha,\beta}^{p-1}}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha,\beta}^{p} \right)^{1-1/p}} dx > 0.$$

On the other hand, $Z_h = Z_{p_h^1} \cap Z_{p_h^2} \cap \dots \cap Z_{p_h^m}$. If there exists $j_0$ such that $p_h^{j_0} = c_1^{j_0} + c_2^{j_0} g_{a,b}(x)$ with $c_1^{j_0} \neq 0$, then $\operatorname{card} Z_h \le 1$. If $p_h^j(x) = c_2^j g_{a,b}(x)$ for all $j = 1, 2, \dots, m$, then $Z_{p_h^j} = [1; a-1]$ for each $j$, and $Z_h = [1; a-1]$. In either case

$$\int_{Z_h} \left( \sum_{j=1}^{m} |p_0^j(x)|_{\beta,\alpha}^p \right)^{1/p} dx = \int_{Z_h} \left( \sum_{j \in M} |p_h^j(x)|_{\beta,\alpha}^p \right)^{1/p} dx = 0.$$
Now let $M = \emptyset$. Consider an arbitrary function $h \in H'' \setminus \{0\}$. By the definition of the set $H''$ there exists $\mathbf{p}_h = (p_h^1, p_h^2, \dots, p_h^m)$ such that

$$h^j(x) = \begin{cases} \pm \frac{p_h^j(x)}{\|\mathbf{p}_h(x)\|_{\mathbb{R}_p^m}} \cdot \omega(E(x, Z_{\mathbf{p}_h})), & x \notin Z_{\mathbf{p}_h}, \\ 0, & x \in Z_{\mathbf{p}_h}. \end{cases}$$

Let us consider two cases.

(a) If among the indices $j = 1, 2, \dots, m$ there exists $j_0$ such that $p_h^{j_0} = c_1^{j_0} + c_2^{j_0} g_{a,b}(x)$ with $c_1^{j_0} \neq 0$, then $\operatorname{card} Z_h \le 1$. In this case, as $\mathbf{p}_0 = (p_0^1, p_0^2, \dots, p_0^m)$ we take: for $j \in I$, $p_0^j(x) = p_1(x)$, where $p_1(x)$ is any function from $H_2$ that is positive on $[0; a]$; for $j \in J$, $p_0^j(x) = p_2(x)$, where $p_2(x)$ is any function from $H_2$ that is negative on $[0; a]$.

Then for all $j \in \{1, 2, \dots, m\}$ we have $p_0^j(x) \cdot \operatorname{sgn}_{\alpha,\beta} h^j(x) = |p_0^j(x)|_{\alpha,\beta}$ a.e. on $[0; a] \setminus Z_h$.

---PAGE_BREAK---
Then for $h \in H'' \setminus \{0\}$ we have

$$
\begin{align*}
\int_{N_h} \tau_-^{(\alpha, \beta)} (\mathbf{h}(x), \mathbf{p}_0(x))_X dx &= \int_{[0;a] \setminus Z_h} \frac{\sum_{j=1}^m p_0^j(x) |h^j(x)|_{\alpha,\beta}^{p-1} \cdot \operatorname{sgn}_{\alpha,\beta} h^j(x)}{\left( \sum_{j=1}^m |h^j(x)|_{\alpha,\beta}^p \right)^{1-1/p}} dx \\
&= \int_{[0;a] \setminus Z_h} \frac{\sum_{j=1}^m |p_0^j(x)|_{\alpha,\beta} \cdot |h^j(x)|_{\alpha,\beta}^{p-1}}{\left( \sum_{j=1}^m |h^j(x)|_{\alpha,\beta}^p \right)^{1-1/p}} dx > 0.
\end{align*}
$$

On the other hand, since $\operatorname{card} Z_h \le 1$, we have

$$
\int_{Z_h} \left( \sum_{j=1}^{m} |p_0^j(x)|_{\beta, \alpha}^{p} \right)^{1/p} dx = 0.
$$

By Corollary 2, in this case $H_2$ is a uniqueness subspace of the best $(\alpha, \beta)$-approximant for $C([0; a], \mathbb{R}_p^m)$.
(b) If $p_h^j(x) = c_2^j \cdot g_{a,b}(x)$ for all indices $j = 1, 2, \dots, m$, then

$$
h^j(x) = \begin{cases} \pm \frac{c_2^j}{\|\mathbf{c}_2\|_{\mathbb{R}_p^m}} \cdot \omega(E(x, Z_{g_{a,b}})), & x \notin Z_{g_{a,b}}, \\ 0, & x \in Z_{g_{a,b}}, \end{cases}
$$

where $\mathbf{c}_2 = (c_2^1, \dots, c_2^m)$. In this case, for $b \in (0; 1)$ we choose the function $\mathbf{p}_0 = (p_0^1, \dots, p_0^m)$ such that

$$
p_0^j(x) = \begin{cases} g_{a,b}(x), & j \in I, \\ -g_{a,b}(x), & j \in J, \end{cases}
$$

and for $b \in (1; +\infty)$ such that

$$
p_0^j(x) = \begin{cases} -g_{a,b}(x), & j \in I, \\ g_{a,b}(x), & j \in J. \end{cases}
$$

Now for $b \in (0; 1)$

$$
\int_{[0;a] \setminus Z_h} \frac{\sum_{j=1}^{m} p_0^j(x) |h^j(x)|_{\alpha, \beta}^{p-1} \cdot \operatorname{sgn}_{\alpha, \beta} h^j(x)}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha, \beta}^{p} \right)^{1-1/p}} dx =
$$

---PAGE_BREAK---
$$
\begin{align*}
&= \int_{[0;a] \setminus Z_h} \frac{\alpha g_{a,b}(x) \sum_{j \in I} |h^j(x)|_{\alpha, \beta}^{p-1} + \beta g_{a,b}(x) \sum_{j \in J} |h^j(x)|_{\alpha, \beta}^{p-1}}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha, \beta}^{p} \right)^{1-1/p}} dx \\
&= \int_0^1 \frac{-\alpha \cdot b \cdot (x-1)^2 \sum_{j \in I} (\alpha \cdot |c_2^j|)^{p-1} + \beta \cdot (-b \cdot (x-1)^2) \cdot \sum_{j \in J} (\beta \cdot |c_2^j|)^{p-1}}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} dx + \\
&\quad + \int_{a-1}^a \frac{\alpha \cdot (x-a+1)^2 \sum_{j \in I} (\alpha \cdot |c_2^j|)^{p-1} + \beta \cdot (x-a+1)^2 \cdot \sum_{j \in J} (\beta \cdot |c_2^j|)^{p-1}}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} dx = \\
&= \frac{-b \cdot \left( \alpha^p \cdot \sum_{j \in I} |c_2^j|^{p-1} + \beta^p \cdot \sum_{j \in J} |c_2^j|^{p-1} \right)}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} \cdot \int_0^1 (x-1)^2 dx + \\
&\quad + \frac{\alpha^p \cdot \sum_{j \in I} |c_2^j|^{p-1} + \beta^p \cdot \sum_{j \in J} |c_2^j|^{p-1}}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} \cdot \int_{a-1}^a (x-a+1)^2 dx = \\
&= \frac{1}{3} \cdot \frac{\alpha^p \cdot \sum_{j \in I} |c_2^j|^{p-1} + \beta^p \cdot \sum_{j \in J} |c_2^j|^{p-1}}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} \cdot (1-b) > 0.
\end{align*}
$$

For $b > 1$ we similarly obtain

$$
\begin{align*}
&= -\int_{[0;a] \setminus Z_h} \frac{\alpha g_{a,b}(x) \sum_{j \in I} |h^j(x)|_{\alpha, \beta}^{p-1} + \beta g_{a,b}(x) \sum_{j \in J} |h^j(x)|_{\alpha, \beta}^{p-1}}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha, \beta}^{p} \right)^{1-1/p}} dx =
\end{align*}
$$
---PAGE_BREAK---

THE UNIQUENESS OF THE BEST NON-SYMMETRIC $L_1$-APPROXIMANT

$$ = \frac{1}{3} \cdot \frac{\alpha^p \cdot \sum_{j \in I} |c_2^j|^{p-1} + \beta^p \cdot \sum_{j \in J} |c_2^j|^{p-1}}{\left( \sum_{j \in I} (\alpha \cdot |c_2^j|)^p + \sum_{j \in J} (\beta \cdot |c_2^j|)^p \right)^{1-1/p}} \cdot (b-1) > 0. $$
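The boundary integrals that produce the factor $(1-b)/3$ here (and $(b-1)/3$ after the sign flip used for $b > 1$) can be checked numerically. A minimal sketch with illustrative values of $a$ and $b$: since $\int_0^1 g_{a,b}\,dx = -b/3$ and $\int_{a-1}^a g_{a,b}\,dx = 1/3$, the two contributions combine to $(1-b)/3$.

```python
def g_ab(x, a, b):
    # g_{a,b} as defined earlier in the text
    if x < 1:      return -b * (x - 1) ** 2
    if x < a - 1:  return 0.0
    return (x - a + 1) ** 2

def integrate(f, lo, hi, steps=100_000):
    # simple midpoint rule
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) for i in range(steps))

a, b = 3.0, 0.5
val = integrate(lambda x: g_ab(x, a, b), 0.0, a)
print(round(val, 4))  # (1 - b)/3, here about 0.1667
```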
On the other hand, for $b \in (0; 1) \cup (1; +\infty)$ we have $p_0^j(x) = 0$ for all $x \in [1; a-1]$, $j = 1, 2, \dots, m$, while $Z_h = [1; a-1]$, so that $\int_{Z_h} \left( \sum_{j=1}^m |p_0^j(x)|_{\beta, \alpha}^p \right)^{1/p} dx = 0.$

Therefore, for all $b \in (0; 1) \cup (1; +\infty)$ the subspace $H_2$ is a uniqueness space of the best $(\alpha, \beta)$-approximants for functions from the space $C([0; a], \mathbb{R}_p^m)$.

2) Now let $b = 1$, $a \ge 4$.
In this case the subspace $H_2$ is not a uniqueness set of the best non-symmetric approximants for functions from the space $C([0; a], \mathbb{R}_p^m)$ in the metric $L_1$. Let us show this using Corollary 2, i.e. let us show that there exists a function $\mathbf{h} = (h^1, h^2, \dots, h^m) \in H''$ such that for any $\mathbf{p} = (p^1, p^2, \dots, p^m) \in H_2$ the condition

$$ \int_{N_h} \tau_-^{(\alpha,\beta)}(\mathbf{h}(x), \mathbf{p}(x))_X \, dx \leq \int_{Z_h} \|\mathbf{p}(x)\|_{X;\beta,\alpha} \, dx $$

is satisfied.

Take $\mathbf{h} = (h^1, h^2, \dots, h^m)$ with

$$ h^j(x) = \frac{1}{m^{1/p}} \cdot \omega(E(x, Z_{g_{a,1}})), \quad j = 1, 2, \dots, m. $$
It is clear that $\mathbf{h} \in H''$, since as $\mathbf{p}_h$ we can take the vector function

$$ \mathbf{p}_h = (g_{a,1}, g_{a,1}, \dots, g_{a,1}) \in H_2. $$

Then $\| \mathbf{p}_h(x) \|_{\mathbb{R}_p^m} = \left( \sum_{j=1}^m |g_{a,1}(x)|^p \right)^{\frac{1}{p}} = m^{1/p} \cdot |g_{a,1}(x)|$ and

$$ \frac{p_h^j(x)}{\|\mathbf{p}_h(x)\|_{\mathbb{R}_p^m}} = \frac{g_{a,1}(x)}{m^{1/p}|g_{a,1}(x)|} = m^{-1/p} \cdot \operatorname{sgn} g_{a,1}(x). $$

Note that $Z_h = Z_{\omega(E(x,Z_{g_{a,1}}))} = Z_{g_{a,1}} = [1; a-1]$, and also that $h^j(x) \ge 0$ for all $x \in [0; a]$, $j = 1, 2, \dots, m$.

Now, for the indicated function $\mathbf{h}$ and an arbitrary function $\mathbf{p} = (p^1, p^2, \dots, p^m) \in H_2$, where $p^j(x) = c_1^j + c_2^j \cdot g_{a,1}(x)$, we have
$$
\begin{align*}
\int_{[0;a] \setminus Z_h} \tau_-^{(\alpha,\beta)}(\mathbf{h}(x), \mathbf{p}(x))_X dx &= \int_{[0;a] \setminus Z_h} \frac{\sum_{j=1}^{m} p^j(x) |h^j(x)|_{\alpha,\beta}^{p-1} \cdot \operatorname{sgn}_{\alpha,\beta} h^j(x)}{\left( \sum_{j=1}^{m} |h^j(x)|_{\alpha,\beta}^p \right)^{1-1/p}} dx \\
&= \int_0^1 \frac{\sum_{j=1}^{m} (c_1^j - c_2^j (x-1)^2) \cdot (\alpha \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^{p-1} \cdot \alpha}{\left( \sum_{j=1}^{m} (\alpha \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^p \right)^{1-1/p}} dx +
\end{align*}
$$

---PAGE_BREAK---

$$
\begin{align*}
&\quad + \int_{a-1}^{a} \frac{\sum_{j=1}^{m} (c_1^j + c_2^j (x-a+1)^2) \cdot (\alpha \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^{p-1} \cdot \alpha}{\left( \sum_{j=1}^{m} (\alpha \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^p \right)^{1-1/p}} dx = \\
&= \frac{\alpha}{m^{1-1/p}} \cdot \int_{0}^{1} \sum_{j=1}^{m} (c_1^j - c_2^j (x-1)^2) dx + \frac{\alpha}{m^{1-1/p}} \cdot \int_{a-1}^{a} \sum_{j=1}^{m} (c_1^j + c_2^j (x-a+1)^2) dx = \\
&= \frac{\alpha}{m^{1-1/p}} \cdot \sum_{j=1}^{m} 2c_1^j \le \frac{2\alpha}{m^{1-1/p}} \cdot \sum_{j=1}^{m} |c_1^j|.
\end{align*}
$$
Let $M_+ = \{j : c_1^j \ge 0\}$, $M_- = \{j : c_1^j < 0\}$, and put $b^j = 1$ for $j \in M_+$ and $b^j = -1$ for $j \in M_-$.

Then the equality

$$
\sum_{j=1}^{m} |c_{1}^{j}| = \sum_{j=1}^{m} |c_{1}^{j}|_{\beta,\alpha} \cdot |b^{j}|_{\beta^{-1},\alpha^{-1}}
$$

holds.
Next, we apply Hölder's inequality, which for non-symmetric norms takes the form

$$
\sum_{j=1}^{m} |x_j|_{\alpha, \beta} \cdot |y_j|_{\alpha^{-1}, \beta^{-1}} \leq \left( \sum_{j=1}^{m} |x_j|_{\alpha, \beta}^{p} \right)^{\frac{1}{p}} \cdot \left( \sum_{j=1}^{m} |y_j|_{\alpha^{-1}, \beta^{-1}}^{q} \right)^{\frac{1}{q}},
$$

where $\frac{1}{p} + \frac{1}{q} = 1$. Then

$$
\begin{align*}
\sum_{j=1}^{m} |c_1^j|_{\beta, \alpha} \cdot |b^j|_{\beta^{-1}, \alpha^{-1}} &\le \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^p \right)^{1/p} \cdot \left( \sum_{j=1}^{m} |b^j|_{\beta^{-1}, \alpha^{-1}}^q \right)^{1/q} \\
&= \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^p \right)^{1/p} \cdot \left( \sum_{j \in M_+} (\beta^{-1})^q + \sum_{j \in M_-} (\alpha^{-1})^q \right)^{1/q} \\
&= \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^p \right)^{1/p} \cdot \left( \frac{1}{\beta^q} \operatorname{card} M_+ + \frac{1}{\alpha^q} \operatorname{card} M_- \right)^{1/q} \\
&\le \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^p \right)^{1/p} \cdot \left( \max \left( \frac{1}{\beta^q}; \frac{1}{\alpha^q} \right) \right)^{1/q} \cdot m^{1/q} \\
&= \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^p \right)^{1/p} \cdot \max \left( \frac{1}{\beta}; \frac{1}{\alpha} \right) \cdot m^{1/q}.
\end{align*}
$$
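The equality for $\sum_j |c_1^j|$ and the Hölder chain can be sanity-checked numerically on random data. A sketch, assuming the non-symmetric modulus $|t|_{\alpha,\beta} = \alpha t_+ + \beta t_-$; all parameter values are illustrative:

```python
import random

def asym_abs(t, alpha, beta):
    # |t|_{alpha,beta} = alpha*t_+ + beta*t_-
    return alpha * max(t, 0.0) + beta * max(-t, 0.0)

random.seed(0)
alpha, beta, p, m = 1.5, 0.5, 2.5, 7
q = p / (p - 1)                      # conjugate exponent: 1/p + 1/q = 1
c1 = [random.uniform(-5, 5) for _ in range(m)]
b = [1.0 if c >= 0 else -1.0 for c in c1]

# identity: sum |c1^j| = sum |c1^j|_{beta,alpha} * |b^j|_{1/beta,1/alpha}
lhs = sum(asym_abs(c, beta, alpha) * asym_abs(bj, 1 / beta, 1 / alpha)
          for c, bj in zip(c1, b))
assert abs(lhs - sum(abs(c) for c in c1)) < 1e-9

# Hölder bound, then the coarser bound with max(1/beta, 1/alpha)*m^(1/q)
holder = (sum(asym_abs(c, beta, alpha) ** p for c in c1) ** (1 / p)
          * sum(asym_abs(bj, 1 / beta, 1 / alpha) ** q for bj in b) ** (1 / q))
final = (sum(asym_abs(c, beta, alpha) ** p for c in c1) ** (1 / p)
         * max(1 / beta, 1 / alpha) * m ** (1 / q))
assert lhs <= holder + 1e-9 and holder <= final + 1e-9
print("checks passed")
```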
---PAGE_BREAK---

Then we get

$$
\begin{align*}
\int_{[0;a] \setminus Z_h} \tau_-^{(\alpha,\beta)} (\mathbf{h}(x), \mathbf{p}(x))_X dx &\le \frac{2\alpha}{m^{1-1/p}} \cdot \sum_{j=1}^m |c_1^j| \\
&\le 2\alpha \cdot \left( \sum_{j=1}^m |c_1^j|_{\beta,\alpha}^p \right)^{1/p} \cdot \max \left( \frac{1}{\beta}; \frac{1}{\alpha} \right).
\end{align*}
$$

On the other hand,

$$
\int_{Z_h} \|\mathbf{p}(x)\|_{\mathbb{R}_p^m; \beta, \alpha} dx = \int_1^{a-1} \left( \sum_{j=1}^m |c_1^j|_{\beta, \alpha}^p \right)^{1/p} dx = (a-2) \left( \sum_{j=1}^m |c_1^j|_{\beta, \alpha}^p \right)^{1/p}.
$$
Comparing the values of the last two integrals, we find that for $a \ge 2 + 2\alpha \cdot \max\left(\frac{1}{\beta}; \frac{1}{\alpha}\right)$ there are functions from the space $C([0;a], \mathbb{R}_p^m)$ that have at least two best $(\alpha, \beta)$-approximants in the subspace $H_2$ in the metric $L_1$.

Further, let us take $\mathbf{h} = (h^1, h^2, \dots, h^m)$ with

$$
h^j(x) = - \frac{1}{m^{1/p}} \cdot \omega(E(x, Z_{g_{a,1}})), \quad j = 1, 2, \dots, m,
$$
and carry out similar reasoning. We get

$$
\begin{align*}
& \int_{[0;a] \setminus Z_h} \tau_-^{(\alpha,\beta)} (\mathbf{h}(x), \mathbf{p}(x))_X dx \\
&= \int_0^1 \frac{\sum_{j=1}^m (c_1^j - c_2^j (x-1)^2) (\beta \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^{p-1} (-\beta)}{\left( \sum_{j=1}^m (\beta \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^p \right)^{1-1/p}} dx + \\
&\quad + \int_{a-1}^a \frac{\sum_{j=1}^m (c_1^j + c_2^j (x-a+1)^2) (\beta \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^{p-1} (-\beta)}{\left( \sum_{j=1}^m (\beta \cdot m^{-1/p} \omega(E(x, Z_{g_{a,1}})))^p \right)^{1-1/p}} dx = \\
&= -\frac{\beta}{m^{1-1/p}} \cdot \sum_{j=1}^m 2c_1^j \le \frac{2\beta}{m^{1-1/p}} \cdot \sum_{j=1}^m |c_1^j| \le 2\beta \cdot \left( \sum_{j=1}^m |c_1^j|_{\beta,\alpha}^p \right)^{1/p} \cdot \max\left(\frac{1}{\beta}; \frac{1}{\alpha}\right).
\end{align*}
$$

On the other hand, as before,

$$
\int_{Z_h} \|\mathbf{p}(x)\|_{\mathbb{R}_p^m; \beta, \alpha} dx = (a-2) \left( \sum_{j=1}^{m} |c_1^j|_{\beta, \alpha}^{p} \right)^{1/p}.
$$
---PAGE_BREAK---

Now, comparing the values of the last two integrals, we get that for $a \ge 2 + 2\beta \cdot \max\left(\frac{1}{\beta}; \frac{1}{\alpha}\right)$ there also exist functions from the space $C([0; a], \mathbb{R}_p^m)$ that have at least two best $(\alpha, \beta)$-approximants in the subspace $H_2$ in the metric $L_1$.

Combining the obtained intervals, we conclude that the subspace $H_2$ is not a uniqueness set of the best $(\alpha, \beta)$-approximants for the space $C([0; a], \mathbb{R}_p^m)$ when $a \ge 2 + 2 \cdot \min\{\alpha; \beta\} \cdot \max\left(\frac{1}{\beta}; \frac{1}{\alpha}\right)$ or, which is the same, $a \ge 4$.

Thus we have obtained the following statement.
**Theorem 5.** The subspace $H_2$:

1) is a uniqueness subspace of the best $(\alpha, \beta)$-approximants for functions from the space $C([0; a], \mathbb{R}_p^m)$ in the metric $L_1$ for all $b \in (0; 1) \cup (1; +\infty)$ and $a \in [2; +\infty)$;

2) is not a uniqueness subspace of the best $(\alpha, \beta)$-approximants for functions from the space $C([0; a], \mathbb{R}_p^m)$ in the metric $L_1$ for $b = 1$, $a \ge 4$.

A similar result was obtained by H. Strauss in [2] for real functions with $\alpha = \beta = 1$, and by V. F. Babenko and V. N. Glushko [6] for the $(\alpha, \beta)$-approximation of real functions. Here these results are extended to the non-symmetric approximation of vector functions with values in the space $\mathbb{R}_p^m$ ($p \in (1; +\infty)$).

For $b = 1$ the result is not obtained for all values $a \ge 2$: in the case $b = 1$, $2 \le a < 4$, the question of the uniqueness of the best non-symmetric $L_1$-approximant of continuous functions by elements of $H_2$ has not yet been studied.
References

1. Kroó A.: A general approach to the study of Chebyshev subspaces in $L_1$-approximation of continuous functions. J. Approx. Theory **51** (1987), 98–111. doi:10.1016/0021-9045(87)90024-4

2. Strauß H.: Eindeutigkeit in der $L_1$-Approximation. Math. Z. **176** (1981), 63–74.

3. Babenko V. F., Tkachenko M. Ye.: Questions of uniqueness of the best non-symmetric $L_1$-approximant of continuous functions with values in a KB-space. Ukr. Mat. J. **60** (2008), 867–878.

4. Vulikh B. Z.: Introduction to the Theory of Semi-Ordered Spaces. Moscow, Fizmatgiz (1961).

5. Pinkus A.: On $L^1$-Approximation. Cambridge, Cambridge Univ. Press (1989).

6. Babenko V. F., Glushko V. N.: On the uniqueness of the best approximant in the metric of the space $L_1$. Ukr. Mat. J. **5** (1994), 475–483.

7. Babenko V. F., Pichugov S. A.: Approximation of continuous vector-functions. Ukr. Mat. J. **46** (1994), 1435–1448.

8. Babenko V. F., Gorbenko M. Ye.: On the uniqueness of the best $L_1$-approximant for functions with values in a Banach space. Ukr. Mat. J. **52** (2000), 30–34.

Received: 20.05.2021. Accepted: 27.06.2021
samples/texts_merged/7113096.md
ADDED
|
@@ -0,0 +1,36 @@
---PAGE_BREAK---

26th April

Foundation Plus 5-a-day

Corbettmaths

Solve $5(x + 3) = 31$

Rectangles ABCD and EFGH are similar (not drawn accurately). Work out the length of FG.

$$u = v - at$$

$$v = 9 \qquad a = -5 \qquad t = \frac{1}{4}$$

Work out the value of $u$.

$$\xi = \{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13\}$$

$M$ = Multiples of 3

$F$ = Factors of 30

Complete the Venn diagram.

A number is chosen at random. Find P(M ∪ F).

A number is chosen at random. Find P(M ∩ F).
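The numerical answers on this worksheet can be checked with a short script (a sketch; the similar-rectangles question depends on dimensions given only in the figure, so it is omitted):

```python
from fractions import Fraction

# Solve 5(x + 3) = 31  =>  x = 31/5 - 3
x = Fraction(31, 5) - 3
print(x)  # -> 16/5

# u = v - a*t with v = 9, a = -5, t = 1/4
v, a, t = 9, Fraction(-5), Fraction(1, 4)
u = v - a * t
print(u)  # -> 41/4

# Venn diagram probabilities over xi = {1, ..., 13}
xi = set(range(1, 14))
M = {n for n in xi if n % 3 == 0}     # multiples of 3
F = {n for n in xi if 30 % n == 0}    # factors of 30
print(Fraction(len(M | F), len(xi)))  # P(M u F) -> 8/13
print(Fraction(len(M & F), len(xi)))  # P(M n F) -> 2/13
```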
samples/texts_merged/7346654.md
ADDED
|
@@ -0,0 +1,46 @@
---PAGE_BREAK---

Several early researchers in optical oceanography built instruments to measure the volume scattering function (VSF) of oceanic waters. [See Jerlov (1976) for data and references for early measurements.] The most carefully made and widely cited scattering measurements are found in the classic report of Petzold (1972) (summarized in Petzold, 1977). He combined two instruments, one for VSF measurements at very small angles ($\psi = 0.172, 0.344$, and $0.688$ deg) and one for angles between 10 and 170 degrees, to obtain VSF measurements over almost the whole range of scattering angles. Petzold's report describes his instruments, their calibration and validation, and tabulates data from very clear (Bahamas), productive coastal (California), and turbid harbor (San Diego, California) waters. The Petzold VSFs and phase functions plotted on this page can be downloaded.

Figure 1 shows three of Petzold's VSF curves displayed on a log-log plot to emphasize the forward scattering angles. The same data are displayed on log-linear axes in Fig. 2, which emphasizes large scattering angles. The instruments he used had a spectral response centered at $\lambda = 514$ nm with a bandwidth of 75 nm (full width at half maximum). In these figures the top (red) curve was obtained in the very turbid water of San Diego Harbor, California; the center (green) curve comes from near-shore coastal water in San Pedro Channel, California; and the bottom (blue) curve is from very clear water in the Tongue of the Ocean, Bahama Islands. The striking feature of these volume scattering functions from very different waters is the similarity of their shapes.

Although the scattering coefficients $b$ of the curves in Figs. 1 and 2 vary by a factor of 50, the uniform shapes suggest that it may be reasonable to define a “typical” particle phase function $\tilde{\beta}_p(\psi)$. This has been done with three sets of Petzold’s data from waters with a high particulate load (one set being the top curve of Figs. 1 and 2), as follows (Mobley et al., 1993):

1. Subtract the pure sea water VSF at 514 nm from each curve to get three particle volume scattering functions $\beta_p^i(\psi)$, $i = 1, 2, 3$;

2. Obtain the corresponding particle scattering coefficients from $b_p^i = b^i - b_{water}$;

3. Compute three particle phase functions via $\tilde{\beta}_p^i(\psi) = \beta_p^i(\psi)/b_p^i$;

4. Average the three particle phase functions at each scattering angle to define the “average particle” phase function.

The three phase functions so obtained and the average-particle phase function are shown in Figs. 3 and 4. This average-particle phase function satisfies the normalization condition $2\pi \int_0^\pi \tilde{\beta}_p(\psi) \sin\psi \, d\psi = 1$ if a behavior of $\tilde{\beta}_p \propto \psi^{-m}$ is assumed for $\psi < 0.1$ deg and a trapezoidal-rule integration is used for $\psi \ge 0.1$ deg, with linear interpolation in $\log \tilde{\beta}_p(\psi)$ versus $\log \psi$ used between the tabulated values. Here $m = 1.346$ is the negative of the slope of $\log \tilde{\beta}_p$ versus $\log \psi$, as determined from the two smallest measured scattering angles.
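The normalization check just described can be illustrated with a short numerical sketch. An assumed power-law shape stands in for the tabulated Petzold values; only the quadrature scheme (trapezoidal rule on a log-spaced angle grid) follows the text:

```python
import math

m_slope = 1.346   # forward-scattering log-log slope from the text
n = 4000
# log-spaced angles from 0.1 deg to 180 deg, converted to radians
lo, hi = math.log10(0.1), math.log10(180.0)
psi = [math.radians(10 ** (lo + (hi - lo) * i / (n - 1))) for i in range(n)]

def raw_phase(p):
    # illustrative forward-peaked shape ~ psi^(-m); NOT Petzold's data
    return p ** (-m_slope)

def trapz(f, xs):
    # trapezoidal rule on an arbitrary (here log-spaced) grid
    return sum(0.5 * (f(xs[i]) + f(xs[i + 1])) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# normalize so that 2*pi * int beta(psi) sin(psi) dpsi = 1
norm = 2 * math.pi * trapz(lambda p: raw_phase(p) * math.sin(p), psi)
phase = lambda p: raw_phase(p) / norm
total = 2 * math.pi * trapz(lambda p: phase(p) * math.sin(p), psi)
print(round(total, 6))  # -> 1.0
```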
This “Petzold average particle” phase function has been widely used in radiative transfer calculations and is one of the standard phase functions available in HydroLight. However, it must be remembered that this phase function is based on very limited data from turbid harbor waters at one wavelength and likely corresponds to a mixture of phytoplankton and

---PAGE_BREAK---

Figure 1: Log-log plots of Petzold’s measured volume scattering functions from three different waters, as labeled.

---PAGE_BREAK---

Figure 2: Log-linear plots of Petzold’s measured volume scattering functions from three different waters, as labeled.

---PAGE_BREAK---

Figure 3: Log-log plots of the phase functions for the VSFs of Figs. 1 and 2, along with the average particle phase function.

---PAGE_BREAK---

Figure 4: Log-linear plots of the phase functions for the VSFs of Figs. 1 and 2, along with the average particle phase function.

---PAGE_BREAK---

mineral particles as might be found in harbor waters. This phase function thus corresponds closely to the turbid harbor phase function seen in Figs. 3 and 4.

Figure 5 compares the Petzold average phase function with 62 phase functions measured in coastal waters at 530 nm using the more recently developed VSM (volume scattering meter) instrument of Lee and Lewis (2003). The Petzold average particle phase function does indeed give a good *average* for these phase functions. However, it is important to note that there is an order-of-magnitude variability in the VSM phase functions at backscattering angles. The large variability in the measured phase functions of Fig. 5 will give corresponding variability in the remote-sensing reflectance, for example.

Figure 5: 62 phase functions measured in coastal waters at 530 nm (green curves). The red curve is the Petzold average particle phase function of Figs. 3 and 4.

Thus, as is always the case with a simple model, the average-particle phase function may be satisfactory on average, but may be very wrong in a simulation of a particular water body. When attempting to model a particular water body, it is always best to use a VSF or phase

---PAGE_BREAK---

function measured at the particular time and place being modeled, rather than relying on a “generic” phase function or analytic model. Examples of this are given in Mobley et al. (2002).
samples/texts_merged/7421586.md
ADDED
|
@@ -0,0 +1,117 @@
---PAGE_BREAK---

Supplementary Material: Strategyproof Mean Estimation from Multiple-Choice Questions

**1. Proof of Theorem 3**
To prove the theorem, we reduce from the following problem. Given a rational $x \in [0, 1]$ and nonnegative integer weights $\alpha_1, \dots, \alpha_n$, WEIGHTED-BINOMIAL-MEDIAN (WBM) asks for a median of the random variable

$$Z := \sum_{i=1}^{n} \alpha_i \operatorname{Bernoulli}(x),$$

where the Bernoulli($x$) random variables are independent (and identically distributed).

This weighted binomial distribution (WBD) is comparable to the Poisson binomial distribution (PBD) in that both generalize the binomial distribution. However, the PBD is an unweighted sum of Bernoulli random variables with distinct probabilities $x_i$, while the WBD is a sum of Bernoulli random variables with a common $x$ but distinct integer weights.

**Lemma 1.** WBM is \#P-hard.

*Proof.* In order to show that WBM is \#P-hard, we will reduce from the counting version of the knapsack problem, which is known to be \#P-complete (?): given a list of nonnegative integer weights $w_1, \dots, w_n$ and an integer capacity $W$, \#Knapsack asks how many sets $S \subseteq [n]$ exist such that $\sum_{i \in S} w_i \le W$. We will also make use of a slight variant of counting knapsack: given an integer $k$, a list of nonnegative integer weights $w_1, \dots, w_n$, an integer capacity $W$, and an integer threshold $N$, $k$\#Knapsack finds $|\mathcal{S}|$, where

$$\mathcal{S} := \{S \subseteq [n] : \sum_{i \in S} w_i \le W \text{ and } |S| = k\}.$$

It can be seen that $k$\#Knapsack is \#P-complete via an easy reduction from \#Knapsack: given an instance of \#Knapsack, simply query $k$\#Knapsack for all values of $k$ and return the sum of the answers.

Turning to the hardness of WBM, we begin by arguing that WBM may be assumed to return the largest possible median. This is because, for an instance of WBM given by $(x, \alpha_1, \dots, \alpha_n)$, we may instead take a perturbed probability $\bar{x} = x + \gamma$. By choosing $\gamma$ small enough, we can ensure that the median $\bar{m}$ of $\bar{Z} := \sum_i \alpha_i \operatorname{Bernoulli}(\bar{x})$ is a median of $Z$, but that it is the largest possible such median. Informally, we may tweak $x$ gently enough that we preserve the median but break any median ties.
Formally, let $F_Z$ be the cumulative distribution function (CDF) of $Z$. Since $Z$ is a distribution comprised solely of atoms of weight $x^k(1-x)^{n-k}$ for $k \in [n]$, it suffices to find some perturbation $\gamma$ for which

$$F_Z(m) - F_{\bar{Z}}(m) < a,$$

where $m$ is a median of $Z$ and $a$ is a lower bound on the size of an atom in both $Z$ and $\bar{Z}$. To show that we may choose such an $a$, note that we may assume that $(1-x)^n \le 1/2$, since otherwise the largest possible $m$ is 0, and similarly that $\bar{x}^n \le 1/2$, since otherwise we may easily check whether the largest possible $m$ is $\sum_{i \in [n]} \alpha_i$. Among all $Z$ for which $x^n \le 1/2$ and $(1-x)^n \le 1/2$, the smallest possible atom is of size $\frac{1}{2}(2^{1/n} - 1)^n$, and so $a := 1/n^n$ is a lower bound on the atom size in $Z$ for any value of $x$ that concerns us.

Since $Z$ is atomic, we then have that

$$
\begin{align}
F_Z(y) &= \sum_{z \le y} \Pr[Z = z] \tag{1} \\
&= \sum_{S \subseteq [n]} x^{|S|} (1-x)^{n-|S|} \mathbb{1}_{\{\sum_{i \in S} \alpha_i \le y\}} \tag{2}
\end{align}
$$
|
| 41 |
+
---PAGE_BREAK---
|
| 42 |
+
|
| 43 |
+
and so
|
| 44 |
+
|
| 45 |
+
$$ \frac{\partial F_Z(y)}{\partial x} \leq \sum_{S \subseteq [n]} \frac{\partial}{\partial x} x^{|S|} (1-x)^{n-|S|} \leq n 2^n. \quad (3) $$
Therefore taking $\gamma = \frac{a}{n2^n}$ will suffice, and $\bar{x} = x + \gamma$ will have a binary representation which is polynomial in the number of input bits.
We now reduce from $k\#\text{Knapsack}$. Given an instance of $k\#\text{Knapsack}$ described by $(k, w_1, \dots, w_n, W)$, let $\Gamma := \langle k \rangle + \sum_i \langle w_i \rangle + \langle W \rangle$ be the length of the binary representation of these integers. For each $i$, let
$$ \alpha_i := G + w_i, $$

where $G := (n+1) \sum_i w_i$. If $Z = \sum_i \alpha_i \text{Bernoulli}(x)$ for some rational $x \in [0, 1]$, then since the $w_i$ are positive, the support of $Z$ is clustered just to the right of the integers $0, G, \dots, nG$. Specifically, we have by Equation (2) that

$$
\begin{aligned}
F_Z(Gk) &= \sum_{S \subseteq [n]} x^{|S|} (1-x)^{n-|S|} \mathbb{1}_{\{\sum_{i \in S} \alpha_i \leq Gk\}} \\
&= \sum_{j=0}^{k-1} \binom{n}{j} x^j (1-x)^{n-j},
\end{aligned}
$$
and so $F_Z(Gk)$ can be computed in time polynomial in $\Gamma + \langle x \rangle$.
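
As a sanity check, the closed form for $F_Z(Gk)$ can be compared against direct enumeration of the atoms of $Z$ on a toy instance; a minimal sketch (the instance values below are illustrative, not taken from the text):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Hypothetical small k#Knapsack instance (values are illustrative).
w = [1, 2, 3, 4]
n, k = len(w), 2
G = (n + 1) * sum(w)          # G := (n+1) * sum_i w_i
alpha = [G + wi for wi in w]  # alpha_i := G + w_i
x = Fraction(1, 2)

# F_Z(Gk) by the closed form: sum_{j=0}^{k-1} C(n,j) x^j (1-x)^(n-j)
closed = sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(k))

# F_Z(Gk) by direct enumeration of the 2^n atoms of Z
brute = sum(
    x**len(S) * (1 - x)**(n - len(S))
    for r in range(n + 1)
    for S in combinations(range(n), r)
    if sum(alpha[i] for i in S) <= G * k
)
assert closed == brute
```

Only subsets of size at most $k-1$ pass the indicator, since any size-$k$ subset already contributes more than $Gk$.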
Next, with $k$ given, consider a binary search over (rational) $x$ which searches for the largest possible $x$ for which $m \le Gk+W$. Once the binary search is far enough along and the change in $x$ is sufficiently small, $F_Z(m)$ approaches $1/2$ and the remaining change possible in $F_Z(m)$ will be small with respect to the atomic lower bound $a$. We may terminate our search, say, when $F_Z(m) \in [1/2, 1/2+a/10]$. At this point $m$ is the largest value of size at most $Gk+W$ in the support of $Z$, and so by this maximality of $m \le Gk+W$,

$$
\begin{aligned}
F_Z(m) &= \sum_{S \subseteq [n]} x^{|S|} (1-x)^{n-|S|} \mathbb{1}_{\{\sum_{i \in S} \alpha_i \le m\}} \\
&= \sum_{j=0}^{k-1} \binom{n}{j} x^j (1-x)^{n-j} + |\mathcal{S}_k| x^k (1-x)^{n-k}.
\end{aligned}
$$

At this point the uncertainty $a/10$ is much smaller than the atom size $x^k (1-x)^{n-k} \ge a$, so we may solve for $|\mathcal{S}_k|$, round to the nearest integer, and solve $k\#\text{Knapsack}$:
$$ |\mathcal{S}_k| \in \frac{1/2 \pm a/10 - \sum_{j=0}^{k-1} \binom{n}{j} x^j (1-x)^{n-j}}{x^k (1-x)^{n-k}} $$

It remains only to justify that this binary search for $x$ terminates sufficiently quickly. By Equation (3), in order to guarantee that $F_Z(m)$ is within $a/10$ of $1/2$, it suffices for the binary search step for $x$ to have size at most $\frac{a}{10n2^n}$. This requires $\log(10n^{n+1}2^n)$ steps, which is polynomial in $n$. $\square$
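
The recovery step at the end of the reduction can be illustrated end to end on a small instance, computing $F_Z(m)$ exactly by enumeration in place of the binary search; all instance values below are illustrative:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Toy k#Knapsack instance (illustrative values only).
w, W, k = [1, 2, 3, 4], 5, 2
n = len(w)
G = (n + 1) * sum(w)
alpha = [G + wi for wi in w]
x = Fraction(1, 2)  # stand-in for the x located by the binary search

subsets = [S for r in range(n + 1) for S in combinations(range(n), r)]
support = sorted(sum(alpha[i] for i in S) for S in subsets)
m = max(v for v in support if v <= G * k + W)  # largest support value <= Gk + W

# F_Z(m), computed exactly by enumerating atoms
F_m = sum(x**len(S) * (1 - x)**(n - len(S))
          for S in subsets if sum(alpha[i] for i in S) <= m)

# Solve for |S_k| and round, as in the text
head = sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(k))
S_k = round((F_m - head) / (x**k * (1 - x)**(n - k)))

# |S_k| equals the number of size-k subsets with weight at most W
brute = sum(1 for S in combinations(range(n), k) if sum(w[i] for i in S) <= W)
assert S_k == brute
```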
*Proof of Theorem 3.* We reduce from WBM. If $k=1$ then the reduction is immediate: if each of the $P_i$ is a scaled down copy of $\alpha_i$Bernoulli($x$), then finding the optimal report for the random variable $\sum_i P_i$ amounts to finding the (scaled down) median of $\sum_i \alpha_i$Bernoulli($x$).

More generally, given an instance of WBM described by $(x, \alpha_1, \dots, \alpha_n)$, we will construct, for any $k \ge 2$, an instance of our problem, MAE-ESTIMATOR, for which determining the optimal partitions and reporting scheme solves our instance of WBM.
---PAGE_BREAK---
Our $P_i$ will be discrete distributions given by
$$ \mathrm{Pr}\left[p_i = \frac{1}{2k}\right] = \frac{1-x}{k} \qquad (4) $$
$$ \mathrm{Pr}\left[p_i = \frac{1 + \delta \frac{\alpha_i}{\sum_t \alpha_t}}{2k}\right] = \frac{x}{k} \qquad (5) $$
$$ \mathrm{Pr}\left[p_i = \frac{2j-1}{2k}\right] = \frac{1}{k} \quad \text{for } j = 2, \dots, k. \qquad (6) $$
We will choose $\delta$ small enough such that the optimal partition of each of the $P_i$ necessarily groups the atoms described in Equation (4) and Equation (5) together, and gives each of the atoms of Equation (6) its own interval in the partition. To find such a $\delta$, first consider the “good” case when the partitions are of this form. In this case, there are $k^n$ total boxes, each with weight $1/k^n$. Within each box $C$, the distribution of $\ell_1$ norms has range upper bounded by $\delta/(2k)$. Within each $C$, the range of this distribution is an upper bound on the $\ell_1$ distance between any atom in $C$ and the optimal report for $C$. Therefore, a loose upper bound on total MAE is
$$ \sum_{c \in [k]^n} P(C_c) \frac{\delta}{2k} = \frac{\delta}{2k}. \qquad (7) $$

On the other hand, consider the “bad” case when at least one of the partitions groups either two of the Equation (6) atoms together, or the Equation (5) atom together with at least one of the Equation (6) atoms. Assume without loss of generality that the $i=1$ partitioning is “bad”. We will focus on the case when the Equation (5) atom and at least one Equation (6) atom are grouped together (because the group is an interval, necessarily the $j=2$ atom is included), since it is the least costly of the bad scenarios. Because of the product structure of the boxes induced by the partitions, for every pair of vectors $u$ and $u'$ in the support of $P$ of the form

$$ u = \left( \frac{1 + \delta \frac{\alpha_1}{\sum_j \alpha_j}}{2k}, u^{-} \right) \qquad u' = \left( \frac{3}{2k}, u^{-} \right), $$
where $u^{-} \sim \prod_{j=2}^{n} P_j$, necessarily $u$ and $u'$ are contained in the same box. Therefore among each pair of $u$ and $u'$, at least $M_{u^{-}} = \frac{\min\{x,1-x\}}{k} \prod_{j=2}^{n} P_j(u_j^{-})$ mass must travel $\|u'\|_1 - \|u\|_1$ to the estimate for their shared box, which yields a lower bound on the error of
$$ \sum_{u^{-}} \left( \frac{1 - \delta}{k} M_{u^{-}} \right) = \frac{(1 - \delta) \min\{x, 1-x\}}{k^2}. \qquad (8) $$
By Equations (7) and (8), choosing a $\delta < \frac{\min\{x,1-x\}}{k}$ guarantees that the optimal partitioning for our instance is the “good” partitioning, and so all of the Equation (4) and Equation (5) atoms appear in the same box $C^* := \prod_i B_{i,1}$.
Recall that by ??, the MAE-minimizing estimate for a fixed box $C$ is a median of the distribution of $\ell_1$ norms of the vectors $u \in C$ according to $P$. Therefore MAE-ESTIMATOR finds some MAE-optimal report $r^*$ for the box $C^*$, which by Equation (4) and Equation (5) implies that $\frac{r^*-n/2k}{\delta}$ is a median of $\sum_i \alpha_i$Bernoulli($x$), solving the given instance of WBM. $\square$
## 2. Additional Experiments
Here we give a more thorough account of the experimental performance of the MSE-optimal estimation scheme as compared to the uniform estimation scheme, benchmarked against the families of distributions described in ??.
Figure 1 shows their relative performance for a fixed value of $n = 50$ and a range of possible $k$, with constant sample distribution support of size 100. On the other hand, Figure 2 shows their relative performance for a fixed value of $k = 3$ and a range of possible $n$, again with constant sample distribution support of size 100.
---PAGE_BREAK---
Figure 1: MSE of the uniform and optimal algorithms for fixed $n = 50$ and a range of $k$, averaged over 100 distributions sampled from the various families. Bars are standard error of the mean.
Figure 2: MSE of the uniform and optimal algorithms for fixed $k = 3$ and a range of $n$, averaged over 100 distributions sampled from various families. Bars are standard error of the mean.
samples/texts_merged/7548747.md
ADDED
The diff for this file is too large to render.
samples/texts_merged/7621530.md
ADDED
@@ -0,0 +1,456 @@
---PAGE_BREAK---

On the influence of sample length and measurement noise on the stochastic subspace damage detection technique
Saeid Allahdadian, Michael Döhler, Carlos Ventura, Laurent Mevel
► To cite this version:
Saeid Allahdadian, Michael Döhler, Carlos Ventura, Laurent Mevel. On the influence of sample length and measurement noise on the stochastic subspace damage detection technique. IMAC – 34th International Modal Analysis Conference, Jan 2016, Orlando, FL, United States. 10.1007/978-3-319-29956-3_4. hal-01262256
HAL Id: hal-01262256
https://hal.inria.fr/hal-01262256
Submitted on 26 Jan 2016

**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
---PAGE_BREAK---
# On the influence of sample length and measurement noise on the stochastic subspace damage detection technique
Saeid Allahdadian<sup>†,a</sup>, Michael Döhler<sup>ψ,b</sup>, Carlos E. Ventura<sup>†,c</sup> and Laurent Mevel<sup>ψ,d</sup>
†University of British Columbia, Vancouver, Canada
<sup>ψ</sup> Inria/IFSTTAR, I4S, Campus de Beaulieu, 35042 Rennes, France
<sup>a</sup> saeid@civil.ubc.ca, <sup>b</sup> michael.doehler@inria.fr, <sup>c</sup> ventura@civil.ubc.ca, <sup>d</sup> laurent.mevel@inria.fr
## ABSTRACT

In this paper the effects of measurement noise and of the number of samples on the stochastic subspace damage detection (SSDD) technique are studied. In the SSDD technique, the need to evaluate the eigenstructure of the system is circumvented, making the approach capable of dealing with real-time measurements of structures. In previous studies, the effect of these practical parameters was examined on simulated measurements from a model of a real structure. In this study, these effects are formulated for the expected damage index, which is evaluated from a Chi-square distributed value. Several theorems are proposed and proved, and they are used to develop a guideline that helps users of the SSDD method account for these effects.
**Keywords:** damage detection, subspace method, health monitoring, signal noise, sampling, stochastic subspace method
## 1. Introduction

Structural health monitoring is regarded as the main tool for assessing the functionality of existing structures. The importance of these techniques becomes obvious when considering that failure of a structure can result in catastrophic loss. Existing civil structures deteriorate through aging and under the loading conditions imposed by natural phenomena such as earthquakes, typhoons and floods. It is therefore imperative to investigate whether continued use of these structures is safe, especially after such phenomena place major demands on a structure.

Numerous studies can be found in the literature, and different approaches have been proposed to detect possible damage in a structure. Some of these tests involve taking samples from the structure, which may affect its functionality; these are called destructive tests. The other type, non-destructive tests, involves no action that can damage the structure or affect its functionality. Due to the need to keep structures in service, researchers have increasingly focused on the latter approach.

Nondestructive damage detection techniques can be categorized into two groups based on their requirements [1-2]: (I) local techniques, which need access to all parts of the structure, or to the location of damage if it is known, and (II) global techniques, which use vibration data to evaluate global dynamic characteristics of the structure. Employing local techniques may interfere with the operation of the structure and is not suitable for major structures. With global techniques, by contrast, there is no need to know or have access to the location of damage a priori.

The global techniques can in turn be categorized into two groups based on their approach to the problem. In the first category, structural properties are identified and employed to assess the condition of the structure; these include stiffness, damping, mass, load paths and boundary conditions (supports, connections, etc.). In the second category, the eigenstructure of the system is employed to evaluate the safety condition of the structure. In these methods, modal properties such as natural frequencies, modal damping values and mode shapes are used to identify any changes in the structure. Any change in the structural properties leads to a change in the modal parameters of the structure, and identifying the modal parameters is generally more practical and accurate than identifying the structural properties directly.

In order to keep the structure in operation, shaking the structure artificially or using impact loads is not promising; by employing ambient vibration testing, the operation of the structure is not interfered with. In this case, since the input excitation to the structure (e.g. wind, traffic, ground vibration) cannot be measured in practice, output-only damage detection
---PAGE_BREAK---

techniques are of interest. Moreover, the process of evaluating and matching the modal parameters of a structure is time consuming [3] and usually cannot be employed for real-time monitoring of structures that are not well instrumented. In addition, local damage in a structure typically affects higher-frequency modes ([2,4]), which are usually not identifiable well enough to be used in damage detection due to their high modal density and low participation factors [5]. Evaluation of these modal characteristics can be avoided by using output-only statistical approaches, e.g. the Kalman filter technique [6], the outlier analysis method [7] or the stochastic subspace damage detection (SSDD) technique ([8-9]).

The SSDD technique evaluates the global condition of a structure by identifying changes in the eigenstructure of the system. Damage can be detected by comparing a statistical model from the possibly damaged structure to thresholds obtained from a reference state. A subspace-based residual function between these states is defined and tested using a $\chi^2$ test, whose results can be displayed and monitored in a control chart [10]. There is therefore no need to estimate natural frequencies and mode shapes, making the approach suitable for real-time monitoring of structures. In this way, the whole eigensystem of the measurements is included in the damage detection, and the focus is not only on dominant frequencies. Including higher modes makes the damage detection more robust, considering that local damage mainly affects higher mode shapes.
Two main challenges in health monitoring of real structures are the low number of sensors and the presence of noise in the measurements. Statistical damage detection methods, including SSDD, have a robust architecture that can deal with sparsely instrumented structures, at least for level one of damage detection, namely detecting the existence of damage. Moreover, these methods can also deal with noisy data, thanks to their statistical approach and the fact that the mean of the noise is usually zero in the time domain. However, this effect needs to be studied in detail for these damage detection techniques.

In this study we focus on the effect of measurement noise and of the number of samples, i.e. the measurement length, on the SSDD technique. Noise in experimental data is inevitable, with different sources in structural measurements [11], such as changes in the excitation sources [12], noise of the measuring instruments and human error. Moreover, the data quality (noise ratio) can significantly affect the damage detection output (e.g. [13]). Investigating the effect of this inherent characteristic of the measurements on the SSDD technique is therefore an important factor in assessing its functionality. It has been demonstrated that the SSDD technique can perform robustly under ambient excitations with changing statistics [11,14]. In our previous studies, the effects of the measurement noise ratio [15-16], of the type of element damaged [17] and of the number of samples [16] were studied briefly on a model of a real bridge, the S101 Bridge. It was shown that this technique can deal with even very high noise ratios in the data.

In this paper the objective is to analyze the theory associated with the effects of measurement noise and the number of samples on this technique. This study helps in better understanding the results of the SSDD approach and in creating guidelines for its use.

This paper is organized as follows. In Section 2, the SSDD approach is recalled. In Section 3, a theoretical analysis of its properties regarding measurement noise and sample length is carried out. Section 4 contains a numerical validation of the theoretical results, and concluding remarks are presented in Section 5.
## 2. Stochastic subspace damage detection technique

The theory and formulations of stochastic subspace damage detection (SSDD) stem from subspace-based system identification. In this section, the models, parameters and formulations needed to derive the residual used in assessing the condition of the system are presented, based on the studies in [8-9].
### 2.1. Dynamic equilibrium equation in discrete time domain

The state-space representation of a dynamic system is well known. Herein, the governing equation for the dynamic behaviour of a structural system is presented and then recast into state-space form. The dynamic behaviour of a structure can be modeled as
$$ \begin{cases} M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = p(t) \\ y(t) = L\ddot{u}(t) + e(t) \end{cases} \tag{1} $$

where $M$, $C$ and $K$ are the mass, damping and stiffness matrices, respectively, and $u$ represents the displacement vector in all degrees of freedom of the system. The vector $p$ contains the external forces and $t$ denotes continuous time. The external force $p$ is unknown and is assumed to be non-stationary white noise. The vector $y$ contains the output responses measured from the structure. Depending on whether the sensors record acceleration, velocity or displacement, the
---PAGE_BREAK---

second part of the equation changes; herein, accelerometers are assumed. The matrix $L$ encodes the locations of the sensors with respect to the degrees of freedom, and $e$ represents the measurement noise.
The discrete time state-space form of model (1) can be written by performing sampling with time step $\tau$ as

$$
\begin{cases}
x_{k+1} = Fx_k + w_k \\
y_k = Hx_k + v_k
\end{cases}
\quad (2)
$$

where the state is represented by $x \in \mathbb{R}^n$ and the measured output by $y \in \mathbb{R}^r$; $r$ is the number of sensors and $n$ is the system order. $F$ is the state transition matrix and $H$ the observation matrix, with dimensions $n \times n$ and $r \times n$, respectively. The state noise $w_k$ and output noise $v_k$ are assumed to be white noise. The state $x$ and the measured output $y$ are related to the displacement vector by equation (3).
$$ x_k = \begin{bmatrix} u(k\tau) \\ \dot{u}(k\tau) \end{bmatrix}, \quad y_k = y(k\tau) \quad (3) $$
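
The sampling step from (1) to (2) can be sketched numerically. A minimal example, assuming SciPy is available and using illustrative $M$, $C$, $K$ values, also checks the eigenvalue relation $\lambda = e^{\mu\tau}$ of equation (4):

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-DOF system; the M, C, K values are illustrative only.
M = np.diag([2.0, 1.0])
K = np.array([[400.0, -200.0],
              [-200.0, 200.0]])
C = 0.01 * K  # light stiffness-proportional damping

# Continuous-time state matrix for the state x = [u; u_dot]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])

tau = 0.01            # sampling time step
F = expm(A * tau)     # discrete-time state transition matrix of (2)

# Eigenvalue relation of equation (4): lambda = exp(mu * tau)
mu = np.linalg.eigvals(A)     # continuous-time eigenvalues mu
lam = np.linalg.eigvals(F)    # discrete-time eigenvalues lambda
assert np.allclose(np.sort_complex(np.exp(mu * tau)), np.sort_complex(lam))
```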

The modal parameters of the dynamic model (1), which are present in its eigenvalues $\mu$ and mode shapes $\Psi$, can be related to the eigenvalues $\lambda$ and eigenvectors $\phi$ of the state transition matrix $F$:

$$
\left\{
\begin{array}{l}
\lambda = e^{\mu\tau} \\
\varphi = \psi
\end{array}
\right.
\quad \text{where } \varphi = H\phi \text{ and } \psi = L\Psi
\quad (4)
$$

The canonical parameterization of system (2) is formed by the pairs $(\lambda, \varphi)$, which are referred to as the system eigenstructure and are employed as the system parameter $\theta \in \mathbb{C}^{n(r+1)}$, defined as
$$ \theta = \begin{bmatrix} \Lambda \\ \operatorname{vec}(\Phi) \end{bmatrix} $$

in which $\Lambda$ is the vector containing all the eigenvalues $\lambda$ and $\Phi$ is the matrix composed of all eigenvectors $\varphi$.

### 2.2. Output-only covariance-based subspace system identification

In order to compute a residual vector between the reference and current states of the system, the output-only covariance-based subspace system identification method [18] is employed. Defining the output covariance as $R_i = E(y_k y_{k-i}^T)$, the block Hankel matrix $H_p$ can be composed as
$$ H_p = \begin{bmatrix} R_1 & R_2 & \cdots & R_p \\ R_2 & R_3 & \cdots & R_{p+1} \\ \vdots & \vdots & \ddots & \vdots \\ R_{p+1} & R_{p+2} & \cdots & R_{2p} \end{bmatrix} = \text{Hank}(R_i) \quad (5) $$
The output covariances satisfy $R_i = HF^{i-1}G$ [19], where $G = E(x_{k+1}y_k^T)$ is the cross covariance between the states and the outputs, which leads to the well-known factorization property of
$$ H_p = O_p C_p \quad (6) $$
where

$$ O_p = \begin{bmatrix} H \\ HF \\ \vdots \\ HF^p \end{bmatrix}, \quad C_p = [G \ \ FG \ \ \dots \ \ F^{p-1} G]. \quad (7) $$
---PAGE_BREAK---

The observation matrix $H$, the state transition matrix $F$, and subsequently the system parameters $\theta$ can be computed from the observability matrix $O_p$. The residual employed in damage detection is directly linked to $O_p$, and thus there is no need to identify the system matrices and parameters.

### 2.3. Residual vector formation

By assuming that the system parameter is $\theta_0$ in the reference state of the structure and $\theta$ in the current state, a residual function is defined between these states which reacts to changes in the system due to, for instance, damage. In order to create such a residual, the left null-space of the observability matrix $O_p$, namely the orthonormal matrix $S$, is computed, e.g. from a singular value decomposition. The reference state $\theta = \theta_0$ is then characterized by

$$
S^T(\theta_0)O_p(\theta_0) = 0. \tag{8}
$$

Due to the factorization property (6), the left null-space of $\mathbf{H}_p$ is equal to $S(\theta_0)$, and hence (8) can be rewritten as

$$
S^T(\theta_0) \mathbf{H}_p = 0. \tag{9}
$$

With the interpretation that the system parameter $\theta$ changes if the system is damaged, i.e. $\theta \neq \theta_0$, two hypotheses are defined as follows.

$$
\left\{
\begin{array}{ll}
H_0: & \theta = \theta_0 \quad \text{(unchanged system)} \\
H_1: & \theta \neq \theta_0 \quad \text{(changed system, damaged)}
\end{array}
\right.
\tag{10}
$$

To test these hypotheses, a residual function is defined based on property (9), which holds if and only if $\mathbf{H}_p$ corresponds to the reference state. Since the matrix $S(\theta_0)$ depends implicitly on the parameter $\theta_0$ (we treat it as a function of $\theta_0$ [9]), a representation of the current state parameter $\theta$ of the structure is needed. Therefore, from data measured in the current state of the structure, an estimate of the block Hankel matrix, $\hat{\mathbf{H}}_p$, is computed from the output covariances as

$$
\hat{R}_i = \frac{1}{N} \sum_{k=1}^{N} y_k y_{k-i}^T, \quad \hat{\mathbf{H}}_p = \mathrm{Hank}(\hat{R}_i). \tag{11}
$$
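
A minimal NumPy sketch of the covariance and block Hankel estimation in (11) and (5); the function name and the white-noise test signal are illustrative:

```python
import numpy as np

def block_hankel(y, p):
    """Estimate R_i = E[y_k y_{k-i}^T] as in (11) and stack Hank(R_i) as in (5).

    y: (r, N_tot) array of measured outputs; p: Hankel block parameter.
    """
    r, N_tot = y.shape
    N = N_tot - 2 * p  # number of usable lag products
    # hat R_i = (1/N) sum_k y_k y_{k-i}^T for i = 1, ..., 2p
    R = [y[:, 2 * p:] @ y[:, 2 * p - i:N_tot - i].T / N
         for i in range(1, 2 * p + 1)]
    # Hank(R_i): block entry (i, j) is R_{i+j+1}, i = 0..p, j = 0..p-1
    return np.block([[R[i + j] for j in range(p)] for i in range(p + 1)])

rng = np.random.default_rng(0)
y = rng.standard_normal((2, 1000))  # white noise, so R_i ~ 0 for i >= 1
H_p = block_hankel(y, p=2)
assert H_p.shape == (6, 4)  # ((p+1)*r) x (p*r) blocks of size r x r
```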

In view of (9), this empirical block Hankel matrix is used to create the residual function (12), which corresponds to the difference between $\theta$ and $\theta_0$ [8-9].

$$
\zeta_N^e = \sqrt{N} \operatorname{vec}(S(\theta_0)^T \hat{\mathbf{H}}_p) \quad (12)
$$

The indices $N$ and $e$ represent the number of samples and the measurement noise level in the measured data, respectively. A change in the system parameter can be formulated, based on the asymptotic local approach for change detection [20], as $\theta = \theta_0 + \delta\theta/\sqrt{N}$, where $\delta\theta$ is the (unknown) parameter change vector normalized by $\sqrt{N}$. On this basis, the residual function fulfills the Central Limit Theorem (CLT), and its asymptotic distribution for $N \to \infty$ is

$$
\zeta_N^e \rightarrow \begin{cases} \mathcal{N}(0, \Sigma^e) & \text{under } H_0 \\ \mathcal{N}(J^e \delta\theta, \Sigma^e) & \text{under } H_1 \end{cases} \quad (13)
$$

in which $\Sigma^e$ is the asymptotic covariance and $J^e$ is the asymptotic sensitivity of the residual. In order to test these hypotheses, a generalized likelihood ratio (GLR) test is employed [8], which is presented in the next section.

### 2.4. Hypothesis test

#### 2.4.1. Parametric Chi-square test

The GLR test for the hypotheses (10) can be written as
---PAGE_BREAK---

$$
\mathrm{GLR}(\zeta_N^e) = -2 \log \frac{L(\zeta_N^e \mid \theta_0)}{\sup_{\theta \in H_1} L(\zeta_N^e \mid \theta)} \quad (14)
$$

where $L(\bullet)$ represents the likelihood function. Plugging in the residual distributions from (13), it boils down to the test variable [8,11]

$$
\chi_N^2 = (\zeta_N^e)^T (\Sigma^e)^{-1} J^e \left( (J^e)^T (\Sigma^e)^{-1} J^e \right)^{-1} (J^e)^T (\Sigma^e)^{-1} \zeta_N^e, \quad (15)
$$
which is asymptotically $\chi^2$-distributed with $d = \operatorname{rank}(J^e) = \dim(\theta)$ degrees of freedom. Its non-centrality parameter is 0 under $H_0$ and $\delta\theta^T(J^e)^T(\Sigma^e)^{-1}J^e\delta\theta$ under $H_1$.

The test variable in (15) is the parametric representation of a damage index and can be used to evaluate safety thresholds, since its distribution shifts with the given non-centrality parameter under $H_1$. If the test value surpasses these thresholds, the condition of the structure has changed.

#### 2.4.2. Non-parametric Chi-square test

By computing a null-space from a reference data set, a non-parametric residual is created, for which there is no need to have a parametric model or to evaluate its parameters. Therefore, no system identification is needed. This null-space $S_0$ can be obtained by a singular value decomposition of the Hankel matrix estimated from measurement data in the reference state [21]. Similar to the characterizations (8) and (9), it holds in the reference state:

$$
S_0^T \hat{\mathbf{H}}_p^0 = 0. \tag{16}
$$
$S_0$ and $\hat{\mathbf{H}}_p^0$ are, respectively, the estimated null-space and block Hankel matrix computed over a reference dataset. After measuring data from a possibly damaged structure, the block Hankel matrix is determined from the data and the residual is defined as
$$
\zeta_N^e = \sqrt{N} \operatorname{vec}(S_0^T \hat{\mathbf{H}}_p). \tag{17}
$$
Since no explicit system parameterization is used, we have $J^e = I$ in the residual distribution in (13), where $I$ is the identity
matrix, and the CLT (13) can be expressed as
$$
\zeta_N^e \rightarrow \begin{cases} \mathcal{N}(0, \Sigma^e) & \text{under } H_0 \\ \mathcal{N}(\delta, \Sigma^e) & \text{under } H_1 \end{cases} \tag{18}
$$
where $\delta$ is now directly linked to the change in the residual vector (when normalized by $\sqrt{N}$). Then, the test variable
simplifies to
$$
\chi_N^2 = (\zeta_N^e)^T (\Sigma^e)^{-1} \zeta_N^e. \tag{19}
$$
Analogously to the previous section, this variable is asymptotically $\chi^2$-distributed with $d = \dim(\zeta_N^e)$ degrees of freedom. Its non-centrality parameter is 0 under $H_0$ and $\delta^T (\Sigma^e)^{-1}\delta$ under $H_1$.
For simplicity, this non-parametric test variable will be used in the following.
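The non-parametric ingredients above (null-space from reference data, residual (17)) can be sketched in a few lines. Everything below is a toy setup, not the paper's numerical scheme: a two-channel AR(1) process stands in for ambient vibration data, and the block sizes $p=q=2$ and assumed order $n=2$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(N, a=0.8):
    """Toy 2-channel AR(1) process standing in for ambient vibration data."""
    y = np.zeros((2, N))
    e = rng.standard_normal((2, N))
    for k in range(1, N):
        y[:, k] = a * y[:, k - 1] + e[:, k]
    return y

def block_hankel(y, p, q):
    """Block Hankel matrix filled with output correlations R_i = E(y_k y_{k-i}^T)."""
    N = y.shape[1]
    R = [(y[:, i:] @ y[:, :N - i].T) / (N - i) for i in range(p + q + 1)]
    return np.block([[R[i + j + 1] for j in range(q)] for i in range(p)])

p = q = 2
n = 2                                       # assumed system order
H_ref = block_hankel(simulate(50_000), p, q)
U, s, _ = np.linalg.svd(H_ref)
S0 = U[:, n:]                               # left null-space: S0^T H_ref ~ 0, cf. (16)

N_test = 5000
H_test = block_hankel(simulate(N_test), p, q)
zeta = np.sqrt(N_test) * (S0.T @ H_test).ravel(order="F")   # residual (17)
```

Under $H_0$ the residual is asymptotically Gaussian with zero mean; the test (19) is then formed as $\zeta^T \hat{\Sigma}^{-1} \zeta$ once a covariance estimate is available from reference data.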
**3. Investigating the effect of noise and number of samples**
The residual $\zeta_N^e$ is a function of the number of samples and of the noise in the measured data. The dependence on the number of samples is explicit in equation (12). Moreover, analogous to the effect of changes in the excitation properties [11], additional measurement noise superposed on the measured data affects the cross-covariance between the outputs, and therefore the estimated Hankel matrix. Thus, the evaluated residual (12) and its covariance $\Sigma^e$ are functions of the superposed noise.
Hence, both the number of samples and measurement noise can change the residual and the final evaluated $\chi^2$ value. In this section their effect on the non-parametric $\chi^2$ test is studied for a constant damage.
It is always assumed that the residual covariance $\Sigma^e$ is estimated once on healthy data from the reference state of the structure, where ample data is usually available, allowing for a good covariance estimate [11]. The covariance is never recalculated when testing a residual $\zeta_N^e$ for damage that is computed on new test data.
Before starting the analysis, we recall a basic property of the $\chi^2$ distribution: let $\gamma$ be a $\chi^2$-distributed variable, $d$ its number of degrees of freedom and $n_c$ its non-centrality parameter. Then,
$$E\gamma = d + n_c. \tag{20}$$
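Property (20) is easy to verify numerically: sampling a non-central $\chi^2$ variable as the squared norm of a shifted Gaussian vector reproduces the mean $d + n_c$. The values below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of property (20): E[gamma] = d + n_c for a non-central
# chi-square variable with d degrees of freedom and non-centrality n_c.
rng = np.random.default_rng(2)

d = 5
delta = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
n_c = delta @ delta                       # non-centrality parameter = 5.0
x = rng.standard_normal((200_000, d)) + delta
gamma = (x ** 2).sum(axis=1)              # non-central chi-square samples
mean_gamma = gamma.mean()                 # should be close to d + n_c = 10
```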
## 3.1. Effect of number of samples
The effect of the number of samples appears in the residual (17) both explicitly, through the factor $\sqrt{N}$, and implicitly, e.g. in its variance and in the change of the system parameter. The reason for pre-multiplying by the square root of the number of samples is that, by the Central Limit Theorem, the resulting product (17) is asymptotically Gaussian as stated in (13) and (18), with a covariance that is independent of the number of samples. Moreover, this framework allows for a trade-off between the number of samples and the damage size: the $\chi^2$ test variable may have the same value either using a longer dataset with a smaller damage, or using a shorter dataset with a bigger damage. This also means that, for a constant (non-zero) damage, the test variable grows with the number of samples. A detailed analysis is made in this section.
### 3.1.1. Effect on the residual covariance
Since the asymptotic residual covariance is the same in reference and damaged states (see Eq. (13),(18)), an estimate $\hat{\Sigma}^e$ of the covariance matrix $\Sigma^e$ is more conveniently obtained from data in the reference state of the structure under the assumption of no changes in the noise properties of the system [11]. The computation of the covariance estimate is described in detail in [11]. Note that the asymptotic covariance $\Sigma^e$ is independent of the number of samples $N$, which can also be seen in the CLT (13) and (18). Hence, the expected value of the covariance estimate $\hat{\Sigma}^e$ neither depends on the number of datasets nor their length used in the estimation. Of course, the quality of the estimate improves when using more data, and we assume that sufficient data has been used to achieve an accurate estimate.
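One common way to obtain $\hat{\Sigma}^e$ in practice is to cut the reference measurement into $n_b$ segments, evaluate the residual on each segment, and take the sample covariance of the resulting vectors. The sketch below only shows these mechanics: the `residual` function is a stand-in (a scaled segment mean), not the actual computation of (17), which follows [11].

```python
import numpy as np

rng = np.random.default_rng(7)

def residual(segment):
    # stand-in for sqrt(N) * vec(S0^T H_p) evaluated on one data segment;
    # for i.i.d. N(0,1) data each component is again approximately N(0,1)
    return np.sqrt(segment.shape[1]) * segment.mean(axis=1)

n_b, dim, N_seg = 200, 8, 1000
Z = np.column_stack([residual(rng.standard_normal((dim, N_seg)))
                     for _ in range(n_b)])
Sigma_hat = np.cov(Z)   # (dim x dim) estimate of Sigma^e; its expectation
                        # does not depend on the segment length N_seg
```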
### 3.1.2. Effect on the $\chi^2$ test value
Due to the CLT (18), the residual is approximately Gaussian for any fixed number of samples $N$, and it holds
$$\zeta_N^e \approx \begin{cases} \mathcal{N}(0, \Sigma^e) & \text{under } H_0 \\ \mathcal{N}(\delta, \Sigma^e) & \text{under } H_1 \end{cases}, \tag{21}$$
where $\delta = \sqrt{N} \Upsilon^e$ with $\Upsilon^e = E(\text{vec}(S_0^T \hat{\mathbf{H}}_p)) = \text{vec}(S_0^T \mathbf{H}_p)$.
Note that $\Upsilon^e$ depends on the expected value $\mathbf{H}_p$ of the Hankel matrix of the current system (which is independent of the number of samples $N$), and $\hat{\mathbf{H}}_p$ is a consistent estimate of matrix $\mathbf{H}_p$. Note also that $\Upsilon^e = 0$ if the system is in the reference state due to the definition of the null-space.
In the following, the influence of $N$ on the expected value of the test variable $\chi_N^2$ in (19) is investigated.
**Theorem 1** If the structure is undamaged, i.e. $H_0$ is true, an increase or decrease of the number of samples does not change the mean of the $\chi^2$ test value.
**Proof 1** Since $\zeta_N^e \approx \mathcal{N}(0, \Sigma^e)$ under $H_0$ (independently of the number of samples $N$), the non-centrality parameter of the resulting test variable $\chi_N^2$ in (19) is 0, as stated in Section 3. From the property (20) of the $\chi^2$ distribution it follows $\mathrm{E}\chi_N^2 = d$ where $d = \dim(\zeta_N^e)$ is the number of degrees of freedom of $\chi_N^2$, independently of $N$.
**Theorem 2** If the structure is damaged, i.e. $H_1$ is true, a change in the number of samples results in a change in the same direction of the mean of the evaluated $\chi^2$ test variable.
**Proof 2** Under $H_1$ the non-centrality parameter of $\chi_N^2$ is $\delta^T (\Sigma^e)^{-1} \delta$. Since $\delta = \sqrt{N} \Upsilon^e$, the non-centrality parameter becomes $N(\Upsilon^e)^T (\Sigma^e)^{-1} \Upsilon^e$, where both $\Upsilon^e$ and $\Sigma^e$ are independent of $N$. From the property (20) of the non-central $\chi^2$ distribution, it follows that $\mathrm{E}\chi_N^2 = d + N(\Upsilon^e)^T (\Sigma^e)^{-1} \Upsilon^e$. Thus the mean of the test variable grows (or decreases) when the number of samples of the same damaged system grows (or decreases).
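Theorems 1 and 2 can be illustrated by a small Monte Carlo experiment: with $\zeta_N^e \sim \mathcal{N}(\sqrt{N}\,\Upsilon^e, \Sigma^e)$, the mean test value stays at $d$ under $H_0$ and grows linearly in $N$ under $H_1$. The vector `Upsilon` and the identity covariance are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 6
Upsilon = np.full(d, 0.01)          # small constant "damage" vector (assumption)

def mean_chi2(N, damaged, runs=4000):
    """Monte Carlo mean of the test (19) with Sigma^e = I."""
    delta = np.sqrt(N) * Upsilon if damaged else np.zeros(d)
    z = rng.standard_normal((runs, d)) + delta      # zeta_N^e ~ N(delta, I)
    return (z ** 2).sum(axis=1).mean()

h1, h2 = mean_chi2(1000, False), mean_chi2(4000, False)   # H0: both close to d
m1, m2 = mean_chi2(1000, True), mean_chi2(4000, True)     # H1: grows with N
```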
## 3.2. Effect of measurement noise
The effect of the amount of measurement noise is investigated in two settings. In the first one, the properties of the measurement noise are the same in the reference state and the possibly damaged state, while in the second setting they are different. Each of these settings is investigated in the following sections.
First, some facts regarding the noise properties of the state space system (2) are recalled [19]. They are given by
$$ \mathbf{E} \left[ \begin{pmatrix} w_k \\ v_k \end{pmatrix} \begin{pmatrix} w_k^T & v_k^T \end{pmatrix} \right] = \begin{pmatrix} Q & S \\ S^T & R \end{pmatrix} $$
Only matrix $R$ depends on the variance of the measurement noise. Note that the measurement noise is denoted as $e$ in system (1), and the output noise term $v_k$ is in fact a sum of the measurement noise and the excitation noise in the case of acceleration measurements. In this case matrix $S$ only depends on the excitation noise, assuming that excitation and measurement noise are independent.
With these definitions, it can be seen that the expected value of the Hankel matrix does not depend on the measurement noise, since $R_i = \mathbf{E}(y_k y_{k-i}^T) = HF^{i-1}G$ for $i \ge 1$, where $G = FDH^T + S$ with $D$ being the state covariance. None of these quantities depend on the measurement noise under the previous assumptions.
However, the residual covariance $\Sigma^e = \lim_N \mathbf{E}((\zeta_N^e - \mathbf{E}\zeta_N^e)(\zeta_N^e - \mathbf{E}\zeta_N^e)^T)$ depends on the measurement noise, since squared terms like $\text{vec}(\hat{R}_i)\text{vec}(\hat{R}_i)^T$ appear within the expectation, and the expected value of data correlations without lag, $\mathbf{E}(y_k y_k^T) = HDH^T + R$, indeed depends on the measurement noise [19]. However, we will not make a detailed mathematical analysis of the relationship between residual covariance and measurement noise in this paper, but content ourselves with a qualitative analysis for simplicity and clarity.
For the analysis of the effect of changes in the measurement noise between noise properties $e_1$ and $e_2$, we denote $e_1 > e_2$ if the respective output noise covariance matrices satisfy $R^{e_1} > R^{e_2}$ (i.e. $R^{e_1} - R^{e_2}$ is positive definite). Then $e_1$ represents a higher measurement noise than $e_2$. This is the case if each of the measured signals in the first configuration has a lower signal-to-noise ratio than the respective signal in the second configuration (while the properties of the ambient excitation noise remain the same). A higher measurement noise leads to larger variations in the residual and thus to a bigger residual covariance. For our qualitative analysis, assume $\Sigma^{e_1} = \alpha\Sigma^{e_2}$ with a scalar magnification factor $\alpha > 1$, so that this effect can be studied in a closed-form formulation. This magnification factor is in direct relation with the signal-to-noise ratio if the noise is white. For colored noise, however, it is only an approximate representation of the noise effect.
The effect of changes in the measurement noise is now investigated in two settings. In the first one, the noise properties between the reference state and possibly damaged state are constant, while in the second setting they are different.
### 3.2.1. Equal noise properties between the reference state and possibly damaged state
In this section, it is assumed that the measurement noise properties in data from reference state and possibly damaged state are equal. We compare different noise properties that are equal in both states. Note that the residual covariance matrices $\Sigma^{e_1}$ and $\Sigma^{e_2}$ for different noise properties $e_1$ and $e_2$ are assumed to be obtained from reference datasets under the respective conditions.
**Theorem 3** If the structure is *undamaged* and the noise properties of both the reference state data and the current state data are equal, then an increase or decrease of the noise in both states does not change the expected $\chi^2$ value. In other words,
$$ \mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}] = \mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_2})^{-1} \zeta_N^{e_2}] \text{ under } H_0. $$
**Proof 3** From the property of the $\chi^2$ distribution in (20), it follows that the expected value of the respective $\chi^2$ variables is $d = \dim(\zeta_N^{e_1}) = \dim(\zeta_N^{e_2})$ under $H_0$, as in Proof 1, which is independent of the noise.
**Theorem 4** If the structure is *damaged* and the noise properties of both the reference state data and the current state data are equal, then an increase or decrease of the noise in both states results in a change (in inverse direction) in the expected $\chi^2$ value for a constant damage. In other words, if $e_1 > e_2$ then $\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}] < \mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_2})^{-1} \zeta_N^{e_2}]$ under $H_1$.
**Proof 4** As shown in the beginning of Section 3.2, the measurement noise does not influence the expected value of the respective Hankel matrices. Hence, $\delta = \mathbf{E}\zeta_N^{e_1} = \mathbf{E}\zeta_N^{e_2}$ is equal for both noise configurations (see also Eq. (21)), while the non-centrality parameters are $n_c^{e_1} = \delta^T (\Sigma^{e_1})^{-1} \delta$ and $n_c^{e_2} = \delta^T (\Sigma^{e_2})^{-1} \delta$, respectively. Due to the assumption $\Sigma^{e_1} = \alpha\Sigma^{e_2}$ it follows $n_c^{e_1} = \frac{1}{\alpha}n_c^{e_2}$ with $\alpha > 1$, hence $n_c^{e_1} < n_c^{e_2}$. Then, the assertion follows from property (20) of the $\chi^2$ distribution.
Theorem 4 is also intuitive in the sense that higher noise, i.e. a lower signal-to-noise ratio, decreases the quality of the data and makes it harder to detect damage, which is reflected in the lower $\chi^2$ test value.
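The key step of Proof 4 is that scaling the covariance by $\alpha$ divides the non-centrality parameter $\delta^T \Sigma^{-1} \delta$ by $\alpha$. A quick numeric check with arbitrary synthetic values:

```python
import numpy as np

rng = np.random.default_rng(4)

d = 5
A = rng.standard_normal((d, d))
Sigma2 = A @ A.T + np.eye(d)          # covariance for the lower noise level e2
alpha = 4.0
Sigma1 = alpha * Sigma2               # higher measurement noise e1

delta = rng.standard_normal(d)        # residual mean under H1 (same in both cases)
nc2 = delta @ np.linalg.solve(Sigma2, delta)
nc1 = delta @ np.linalg.solve(Sigma1, delta)   # = nc2 / alpha
```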
### 3.2.2. Different noise properties between the reference state and possibly damaged state
In this section it is assumed that the measurement noise changes in the test data, irrespective of the noise in the reference data from which the residual covariance was computed. Note that since the noise properties of the residual no longer correspond to its covariance, one would need to recompute the covariance matrix to accommodate the noise changes in a correct test [11]. Moreover, the resulting $\chi^2$ test value no longer satisfies the stated parameters of the $\chi^2$ distribution from Section 3, which are now shifted. However, the numerical computation of the covariance is complex, and repeating it on each tested dataset is impractical. Hence, the covariance is usually computed only once in the reference state, which is valid when the measurement noise properties are stable. In this section we investigate the consequence of different noise properties on the test results, violating the theoretical assumptions of the test.
First, the effect of changes in the measurement noise in the test data is investigated, while the noise level in the residual covariance remains constant. Second, different noise levels in the residual covariance are investigated, while the noise level in the test data remains constant.
**Theorem 5** A change in the noise properties of the test data results in a change of the expected $\chi^2$ value in the same direction, regardless of the state of the structure. In other words, if $e_1 > e_2$ then $\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}] > \mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_2}]$ both under $H_0$ and $H_1$.
**Proof 5** Analogous to Proof 4, it follows from the property (20) of the $\chi^2$ distribution that $\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}] = d + \delta^T (\Sigma^{e_1})^{-1} \delta$. Using $\Sigma^{e_1} = \alpha\Sigma^{e_2}$ with $\alpha > 1$, it follows furthermore that $\mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_2}] = \frac{1}{\alpha} \mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_2})^{-1} \zeta_N^{e_2}]$. The right-hand expectation now corresponds to a standard $\chi^2$ distribution and hence
$$
\begin{align*}
\mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_2}] &= \frac{1}{\alpha} (d + \delta^T (\Sigma^{e_2})^{-1} \delta) = \frac{1}{\alpha} (d + \delta^T [\alpha (\Sigma^{e_1})^{-1}] \delta) \\
&= \frac{1}{\alpha} d + \delta^T (\Sigma^{e_1})^{-1} \delta
\end{align*}
$$
Comparing now with $\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}]$, the assertion follows both for $H_0$ (where $\delta = 0$) and for $H_1$, since $\alpha > 1$.
Theorem 5 may be somewhat counterintuitive, as it states that less noise leads to a weaker reaction of the test. However, this would not be the case if the appropriate covariance matrix had been used, which would be of lower magnitude and thus would normalize the residual correctly by dividing it by lower values.
**Theorem 6** Regardless of the state of the system, a change in the noise properties of the reference data, on which the residual covariance is computed, results in a change of the expected $\chi^2$ value in the inverse direction. In other words, if $e_1 > e_2$ then $\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}] < \mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_2})^{-1} \zeta_N^{e_2}]$ both under $H_0$ and $H_1$.
**Proof 6** The proof is analogous to Proof 5. Since the test data is unchanged, $\zeta_N^{e_1}$ and $\zeta_N^{e_2}$ have the same distribution, and only the covariance used in the test differs. From $\Sigma^{e_1} = \alpha\Sigma^{e_2}$ with $\alpha > 1$ it follows $(\Sigma^{e_2})^{-1} = \alpha(\Sigma^{e_1})^{-1}$ and thus $\mathbf{E}[(\zeta_N^{e_2})^T (\Sigma^{e_2})^{-1} \zeta_N^{e_2}] = \alpha\,\mathbf{E}[(\zeta_N^{e_1})^T (\Sigma^{e_1})^{-1} \zeta_N^{e_1}]$. Hence the assertion follows both for $H_0$ and for $H_1$.
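Both mismatch effects can be illustrated by a Monte Carlo experiment in which the covariance of the simulated residual and the covariance used in the test are chosen independently. All quantities below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

d, alpha, runs = 6, 4.0, 50_000
Sigma2 = np.eye(d)                    # covariance at the lower noise level e2
Sigma1 = alpha * Sigma2               # covariance at the higher noise level e1
delta = np.full(d, 0.5)               # residual mean under H1

def mean_test(data_cov, test_cov):
    """Monte Carlo mean of zeta^T test_cov^{-1} zeta with zeta ~ N(delta, data_cov)."""
    L = np.linalg.cholesky(data_cov)
    z = rng.standard_normal((runs, d)) @ L.T + delta
    Si = np.linalg.inv(test_cov)
    return np.einsum('ij,jk,ik->i', z, Si, z).mean()

m_high_data = mean_test(Sigma1, Sigma1)  # noisier test data, reference covariance Sigma1
m_low_data  = mean_test(Sigma2, Sigma1)  # Theorem 5: cleaner test data -> smaller value
m_matched   = mean_test(Sigma2, Sigma2)  # Theorem 6: lower-noise reference -> larger value
```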
**4. Numerical application**
In this section, the theorems stated in Section 3 are demonstrated for a simple mass-spring system. The system is composed of six degrees of freedom associated with six masses connected by springs, as shown in Figure 1. A damping ratio of 2% is associated with all modes. Damage is modeled as a stiffness reduction of 5% of the second spring, i.e. k₂. The excitation is simulated as Gaussian white noise, and the resulting acceleration measurements are acquired from three sensors located on the masses at a sampling rate of 50 Hz. In order to illustrate the effects in the theorems, three case studies are performed: the first is related to Theorems 1 and 2, the second to Theorems 3 and 4, and the last to Theorems 5 and 6.
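A simulation setup of this kind can be sketched as follows. The mass and stiffness values, the grounding of the chain at both ends, the sensor indices and the sub-stepped semi-implicit Euler integration are all illustrative assumptions, since the paper does not specify them.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 6
M = np.eye(n)                          # unit masses (assumption)
k = np.full(n + 1, 1000.0)             # spring stiffnesses (assumption);
                                       # damage would reduce k[1] (k2) by 5%
K = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)

# 2% modal damping: C = M Phi diag(2 zeta omega) Phi^T M, Phi mass-normalized
w2, Phi = np.linalg.eigh(K)            # M = I, so eigenvectors are mass-normalized
C = M @ Phi @ np.diag(2 * 0.02 * np.sqrt(w2)) @ Phi.T @ M

fs, sub = 50.0, 20                     # 50 Hz output rate, integrator sub-steps
h = 1.0 / (fs * sub)
sensors = [1, 3, 5]                    # three instrumented masses (assumption)
N = 2000
pos = np.zeros(n)
vel = np.zeros(n)
Y = np.zeros((len(sensors), N))
for t in range(N):
    for _ in range(sub):
        f = rng.standard_normal(n) / np.sqrt(h)    # white-noise excitation
        acc = f - C @ vel - K @ pos                # M = I
        vel += h * acc                             # semi-implicit Euler step
        pos += h * vel
    Y[:, t] = acc[sensors]             # acceleration outputs at the sensors
```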
**Fig 1** The mass-spring model and the sensor locations
**4.1. Case study 1, effect of number of samples**
For this case, the number of samples is increased from 1000 to 10000 in 10 steps of 1000 for both undamaged and damaged cases. In each step, 100 repetitions are made to calculate the mean, representing the expected value of $\chi_N^2$. The measurement noise ratio is 5% in all cases. The results in Figure 2 show that, as stated in Theorem 1, the expected $\chi_N^2$ does not change when the model is undamaged. When the model is damaged, however, this value grows linearly with the number of samples, which confirms the (linear) factor $N$ in the non-centrality parameter as shown in the proof of Theorem 2.
**Fig 2** Expected $\chi_N^2$ value evaluated for different number of samples in damaged and undamaged conditions (red line: 99 percentile, yellow line: 95 percentile)
**4.2. Case study 2, effect of noise with equal properties**
Here, the number of samples is kept constant at 10000, while the measurement noise, which has equal properties in the reference and testing states, is changed. The noise ratio is increased from 5% to 125% in 25 steps for damaged and undamaged conditions, again with 100 repetitions in each step. The results are presented in Figure 3. The test values in the undamaged state are constant and independent of the noise ratio, confirming Theorem 3. The test values in the damaged state decrease when the noise ratio increases, as shown in Theorem 4. From Figure 3 it can be observed that the test values decrease approximately quadratically with increasing noise; thus the factor $\alpha$ in Section 3.2 appears to be quadratic in the measurement noise level.
**Fig 3** Expected $\chi_N^2$ value evaluated for different noise ratios with equal properties in damaged and undamaged conditions (red line: 99 percentile, yellow line: 95 percentile)
**4.3. Case study 3, effect of noise with unequal properties**
In this case study, the noise properties are not equal in the reference and test states, corresponding to the setting in Section 3.2.2. As in the previous case study, the number of samples is kept constant at 10000. The noise is increased in 25 steps from 5% to 125% with 100 repetitions in each step. This is again investigated for damaged and undamaged conditions. Figure 4 shows the results when the measurement noise is changed only for the testing state (in undamaged and damaged conditions, respectively). The measurement noise in the reference state that was used to set up the residual covariance is constant at 5%. It can be seen that in both undamaged and damaged states the test value increases when the noise level increases, confirming Theorem 5. Again, the increase appears to be quadratic.
**Fig 4** Expected $\chi_N^2$ value evaluated for different noise ratios only in the test data, in damaged and undamaged conditions
In Figure 5, the same study is carried out for changing measurement noise in the residual covariance computed in the reference state, while the measurement noise in the test data is kept constant at 5%. It can be seen that increasing the measurement noise in the reference data decreases the expected $\chi_N^2$ value for both undamaged and damaged conditions, as stated in Theorem 6.
**Fig 5** Expected $\chi_N^2$ value evaluated for different noise ratios only in the reference data, in damaged and undamaged conditions
# 5. Discussion and conclusion
In this paper, several theorems on the effect of noise and of the number of samples on the SSDD technique have been proposed and proved. From these theorems, conclusions can be inferred that serve the user of the SSDD technique as a guideline in dealing with these effects.
**I)** Considering Theorems 1 and 2, the data duration does not affect the expected $\chi^2$ value in the reference state. This is an advantage of the approach, as it helps to identify a unique threshold in the reference state to which the $\chi^2$ value acquired from the test data can be compared. However, when the structure is damaged, increasing the data length increases the $\chi^2$ value. In other words, with more data the damage state becomes more distinct and identifiable. Therefore, the more samples are available, the better the damage can be detected, and if there is not enough data, the damage state may not be identifiable.
**II)** Different noise levels in the system (each time equal for reference and test data) lead to changes in the inverse direction in the resulting $\chi^2$ values acquired from the damaged structure (Theorems 3 and 4). Therefore, an increase of measurement noise in the system decreases the $\chi^2$ value in the damaged state, possibly making the damage undetectable. Note that this can be compensated by longer datasets (see previous point). The amount of measurement noise in the system should not be too high.
**III)** It can be inferred from Theorem 5 that higher noise in the test or validation data, while the residual covariance is not re-evaluated, leads to a higher $\chi^2$ value. This can affect the damage detection process in two ways. First, if the safety thresholds are evaluated from low-noise reference data, then high-noise test data from the undamaged structure can be identified as damaged, leading to a false alarm. Second, if the safety thresholds are evaluated from high-noise reference data and the noise in the test data of the damaged structure is very low, then the damage might not be detected. Both cases suggest that the measurement noise in the reference and test data should be about the same. The sensitivity of the threshold was studied in [15].
**IV)** Based on Theorem 6, increasing the noise in the reference data results in a decrease of the $\chi^2$ value for both undamaged and damaged test data. Nevertheless, the effect of damage at the same noise level is still visible.
It should be mentioned that in all of these cases it is assumed that the reference data is not corrupted by too high noise, and that the left null-space $S_0^T$ and the residual covariance matrix $\Sigma^e$ are evaluated properly.
## References
[1] Fan W, Qiao P. "Vibration-based damage identification methods: a review and comparative study." *Structural Health Monitoring* 10.1, 83-111, 2011.
[2] Doebling SW, Farrar CR, Prime MB. "A summary review of vibration-based damage identification methods." *Shock and vibration digest* 30.2, 91-105, 1998.
[3] Salawu OS. "Detection of structural damage through changes in frequency: a review." *Engineering structures*, 19.9, 718-723, 1997.
[4] Worden K, Farrar CR, Manson G, Park G. "The fundamental axioms of structural health monitoring." *Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences*. 463.2082. The Royal Society, 2007.
[5] Farrar CR, Doebling SW, Nix DA. "Vibration-based structural damage identification." *Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences* 359.1778, 131-149, 2001.
[6] Yan AM, De Boe P, Golinval JC. "Structural damage diagnosis by Kalman model based on stochastic subspace identification." *Structural Health Monitoring* 3.2, 103-119, 2004.
[7] Worden K, Manson G, Fieller NRJ. "Damage detection using outlier analysis." *Journal of Sound and Vibration* 229.3, 647-667, 2000.
[8] Basseville M, Abdelghani M, Benveniste A. "Subspace-based fault detection algorithms for vibration monitoring", *Automatica* 36.1, 2000, pp. 101-109.
[9] Basseville M, Mevel L, Goursat M. "Statistical model-based damage detection and localization: subspace-based residuals and damage-to-noise sensitivity ratios", *Journal of Sound and Vibration*, 275. 3, 769-794, 2004.
[10] Carden EP, Fanning P. "Vibration based condition monitoring: a review." *Structural health monitoring* 3.4, 355-377, 2004.
[11] Döhler M, Mevel L, Hille F. "Subspace-based damage detection under changes in the ambient excitation statistics." *Mechanical Systems and Signal Processing* 45.1, 207-224, 2014.
[12] Döhler M, Hille F. "Subspace-based damage detection on steel frame structure under changing excitation." *Structural Health Monitoring, Volume 5*. Springer International Publishing, 167-174, 2014.
[13] Alvandi A, Cremona C. "Assessment of vibration-based damage identification techniques." *Journal of sound and vibration* 292.1, 179-202, 2006.
[14] Döhler M, Mevel L. "Subspace-based fault detection robust to changes in the noise covariances." *Automatica* 49.9, 2734-2743, 2013.
[15] Allahdadian S, Ventura C, Andersen P, Mevel L, Döhler M. "Investigation on the sensitivity of subspace based damage detection technique to damage and noise levels." *IOMAC-International Operational Modal Analysis Conference*, 2015.
[16] Allahdadian S, Ventura C, Andersen P, Mevel L, Döhler M. "Subspace based damage detection technique: investigation on the effect of number of samples", *CCEE-Canadian Conference on Earthquake Engineering*, 2015.
[17] Allahdadian S, Ventura C, Andersen P, Mevel L, Döhler M. "Sensitivity Evaluation of Subspace-based Damage Detection Method to Different Types of Damage." *Structural Health Monitoring and Damage Detection, Volume 7*. Springer International Publishing, 11-18, 2015.
[18] Basseville M, Benveniste A, Goursat M, Hermans L, Mevel L, Van der Auweraer H. "Output-only subspace-based structural identification: from theory to industrial testing practice." *Journal of Dynamic Systems, Measurement, and Control* 123.4, 668-676, 2001.
[19] Van Overschee P, De Moor B. *Subspace Identification for Linear Systems: Theory, Implementation, Applications*. Kluwer, 1996.
[20] Benveniste A, Basseville M, Moustakides G. "The asymptotic local approach to change detection and model validation." *IEEE Transactions on Automatic Control* 32.7, 583-592, 1987.
[21] Balmès E, Basseville M, Bourquin F, Mevel L, Nasser H, Treyssède F. "Merging sensor data from multiple temperature scenarios for vibration-based monitoring of civil structures." *Structural Health Monitoring*, 7(2):129-142, 2008.
samples/texts_merged/7693403.md
ADDED
|
@@ -0,0 +1,415 @@
---PAGE_BREAK---

# The Development of a Physical Theory of Braids

## An Extension of the Ropelength Model to Braids

**Trevor Oliveira-Smith**

Under the supervision of Professor Abigail Thompson
and Cameron Bjorklund

A thesis presented for the degree of
Bachelor of Science

Department of Mathematics
University of California, Davis
Davis, CA, USA

June 2021

---PAGE_BREAK---
# The Development of a Physical Theory of Braids

Trevor Oliveira-Smith

June 2021

## Abstract

Physical knot theory is an area of knot theory which seeks to create physical frameworks with which to study knots. One of the best-known models is the *ropelength model* for knots and links. The ropelength model seeks to model presentations of knots or links that are made of an ideally flexible, thickened rope and to define a tight presentation of a knot or link. It was established by J.W. Alexander in 1923 that every knot or link can be represented as the closure of a braid [1]. With this, we are inspired to extend the ropelength model to braids. In this paper we define a ropelength model for braids and prove the existence of a ropelength minimizing braid presentation within each braid type.

## 1 Acknowledgements

First and foremost, I would like to thank my Dad and Mom for their constant support of my mathematical endeavors and for encouraging me to continue learning mathematics. In addition, I would also like to thank my friends for pretending to be interested when I would talk about braiding rope; I only hope I didn't bore them too much.

I also want to thank Cameron Bjorklund for his time helping me. He gave absolutely great insight and opinions on the subject matter of this thesis. I would also like to thank Cameron for his proof in Section 6 that the sequence $R_n$ converges. This proof has of course been integral to our definition of ropelength for braids taken as elements of the braid group.

Most importantly, I would like to give a huge THANK YOU to Professor Abigail Thompson, without whom this project would not exist. Nearly two years ago, Professor Thompson asked me if I wanted to work on a summer REU defining the roundness of a braid. This proved to be quite a challenging task and required us to develop a physical model of braids to work in, ultimately leading to this project. I greatly appreciate the time, energy, and patience Professor Thompson has put into this project by helping me over these past two years. Professor Thompson has been a continuing inspiration to me and has facilitated much growth in me as a budding mathematician. So it is with much gratitude that I repeat myself: thank you, Professor Thompson.

## 2 Introduction

"Can you tie a knot on a foot-long rope that is one inch thick?" This has been a long-standing question in knot theory. The question was answered negatively in [4] by employing the *ropelength model of a knot*. Prior to this, [3, 6, 7] developed the ropelength model for knots and links as a way to mathematically model knots that are made out of a thickened, ideally flexible rope and to measure how tightly a given knot or link in this model can be tied.

---PAGE_BREAK---
One of the first observations to make about knots and links that are made out of a real rope is that there is a limit to how tightly one can tie the knot or link. This is, of course, due to the thickness of the rope. It is only natural that the ropelength model for knots and links also models this sort of limiting behavior. Before this limiting behavior can be rigorously defined, however, one must first give a reasonable account of the thickness of a given knot or link. Intuitively, a person would buy a length of rope that has a given thickness, then tightly tie the knot out of this length of rope. This is a very natural and valid approach to solving this problem. However, from a mathematical perspective, it makes sense to first construct a specific presentation of a knot or link out of a curve (or disjoint union of curves) of zero thickness, then place a normal tube (or tubes) around the presentation whose core is the original curve (or disjoint union of curves). Given this approach, we have to be careful not to make our normal tube too large, else it will intersect its own interior. The interiors of the normal tube should not self-intersect because real-world rope does not do this. We note, however, that self-intersection of the normal tube is allowed if it is only along the boundary of the normal tube, i.e. the outermost tube, since this behavior occurs in knots tied out of physical rope.

[6] gave a useful account of this maximal thickness for a given knot (or link) presentation by employing the *global radius of curvature*. The global radius of curvature for a presentation of a knot or link is a functional that provides a nice theoretical framework for working with the thickness of curves. It takes advantage of the fact that for three distinct, non-collinear points $x, y, z \in \mathbb{R}^3$, there is a unique circle that passes through the three points; we call the radius of this circle $r(x, y, z)$. The global radius of curvature of a knot or link is defined locally by first fixing a point $x$ on the knot (or link) presentation and taking the infimum of $r(x, y, z)$ over all $y$ and $z$ (with $y \neq z$) on the presentation. The global radius of curvature is then defined as the infimum over all $x$ of this local radius of curvature.
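For concreteness, $r(x, y, z)$ admits a standard closed form not spelled out in the text (the classical circumradius formula, recorded here as a supplement):

```latex
% Radius of the unique circle through non-collinear x, y, z in R^3:
r(x, y, z) \;=\;
  \frac{\lvert x - y\rvert\,\lvert y - z\rvert\,\lvert z - x\rvert}
       {4\,\operatorname{Area}(\triangle xyz)},
\qquad
\operatorname{Area}(\triangle xyz) \;=\; \tfrac{1}{2}\,\bigl\lvert (y - x) \times (z - x) \bigr\rvert .
```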
Both [6] and [3] showed that the global radius of curvature corresponds to the maximal radius of thickness for an embedded open normal tube around the given knot or link presentation. Using this characterization of thickness is advantageous because it allows for control over the thickness in terms of local curvature and the distance between strands of the knot at the same time. In addition, this functional is upper semicontinuous, which gives us better control over convergence properties, as we will address momentarily.

Now, with a working account of the maximal thickness of an embedded normal tube, [6, 3] define a notion of how "tight" a knot (or link) presentation is through the *ropelength*, which is defined as follows: Let $L \subset \mathbb{R}^3$ be a parameterized presentation of a knot or link. We define the *ropelength* of $L$ to be

$$Rl(L) := l(L)/\Delta[L],$$

where $l(L)$ is the total length of $L$ and $\Delta[L]$ is the maximal thickness of $L$. The smaller the ropelength, the tighter the presentation is tied. This agrees with our intuitive notion of tightness, since the ropelength of $L$ is smallest when the total length of rope used to make $L$ is smallest and the thickness of $L$ is largest. This would mean that $L$ is made out of the smallest amount of rope and that there is very little space between the strands once they are thickened.
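As a rough numerical illustration of these definitions (our own sketch, not from the text: it approximates the global radius of curvature of a polygonal curve by brute force over sampled triples, so it ignores the limiting cases such as $r(x, x, x)$):

```python
import numpy as np

def circumradius(x, y, z):
    # r(x, y, z): radius of the unique circle through three
    # non-collinear points, via R = abc / (4 * area).
    a = np.linalg.norm(y - z)
    b = np.linalg.norm(x - z)
    c = np.linalg.norm(x - y)
    twice_area = np.linalg.norm(np.cross(y - x, z - x))
    return a * b * c / (2.0 * twice_area)

def thickness(points):
    # Crude approximation of the maximal thickness Delta[L]:
    # minimize r over all triples of distinct sample points.
    n = len(points)
    return min(circumradius(points[i], points[j], points[k])
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

def ropelength(points):
    # Rl(L) = length / thickness for a closed polygonal curve.
    closed = np.vstack([points, points[:1]])
    length = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return length / thickness(points)
```

On points sampled from a round unit circle every triple has circumradius 1, so the approximate thickness is 1 and the ropelength approaches $2\pi$ as the sampling is refined.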
Using this definition of tightness, [3, 6] then frame the question of the existence of ideal knot presentations as "For any knot (or link) type, does there exist a presentation $L^*$ such that $Rl(L^*)$ is minimal?" This question can be rephrased as follows: "For each knot (or link) type, does there exist a presentation $L^* \in \mathcal{L}$ such that

$$Rl(L^*) = \inf_{L \in \mathcal{L}} Rl(L),$$

where $\mathcal{L}$ is the isotopy class of the given knot (or link) type?"

In [7], this is proved for $C^{1,1}$ (once differentiable with Lipschitz derivative) presentations of knots. In [3], a more general result is proved for $C^{1,1}$ presentations of links, and several classes of such ropelength minimizing presentations are constructed; this also shows that such presentations are not unique in general. For the purposes of this paper, we are interested in how these existence results were achieved. Since the

---PAGE_BREAK---
process was essentially the same in both papers, the next paragraph will refer solely to the process for proving the existence of ropelength minimizers in [3], since it allowed for the existence of ropelength minimizers for link types as well.

In [3], Cantarella et al. first showed that the thickness functional (obtained by employing the global radius of curvature) is upper semicontinuous. It was then shown that a given presentation of a link $L$ with thickness $\tau > 0$ is $C^{1,1}$ with Lipschitz constant $1/2\tau$. After this, a lemma was proven which states that if one takes a sequence of links $L_i$ with thickness $\tau > 0$ which converges to a limiting link $L$ (in the $C^0$ norm), then $L_i \to L$ in the $C^1$ norm and $L$ is isotopic to all but finitely many of the $L_i$. It should also be noted that, by the upper semicontinuity of thickness, $L$ also has thickness at least $\tau$. Using these results, the existence result was proven as follows: Consider the compact space of all $C^{1,1}$ curves with length uniformly bounded by 1. One can consider a sequence $L_i$ which maximizes thickness over the isotopy class $\mathcal{L}$. Using the uniform boundedness of the lengths and the fact that each $L_i$ is uniformly Lipschitz, one can extract a uniformly convergent subsequence $L_{i_k} \to L_*$ by employing the Arzelà-Ascoli Theorem. By the previously stated lemma, $L_*$ is isotopic to all but finitely many of the $L_{i_k}$ and must also have thickness equal to the supremal thickness (since thickness is upper semicontinuous). Then $L_*$ is, of course, a ropelength minimizer.

In this paper, we define a similar ropelength model for braids. We will be using the same notion of thickness for braids as was used in [3, 6, 7]. In addition, we will be using the same notion of *ropelength* for braid presentations as was used for links (i.e. total length used in all strands divided by maximum thickness); the reason ropelength corresponds to tightness for braids is the same as for links. We will also be considering a thickness maximizing sequence of braid presentations (for a given braid type) and employing the Arzelà-Ascoli Theorem to extract a convergent subsequence, which will show the existence of ropelength minimizing braid presentations. There is, however, a slight difference between our method for proving the existence of ropelength minimizing braid presentations and what was used in [3, 6, 7]. The difference arises from a certain subtlety in dealing with the endpoints of a braid presentation. Under the typical definition of a braid, two braid presentations can have different endpoints but still be equivalent. This is problematic when we treat our braid presentations as parameterized curves sitting in $\mathbb{R}^3$: equivalence classes of braid presentations with arbitrary endpoints make for an infinite maximal thickness, since we can keep considering braids whose endpoints are arbitrarily far apart; a problem that is not experienced by nontrivial presentations of knots or links. At first, one would think the solution to this endpoint problem would be to merely fix endpoints so that they are all evenly spaced in a line and a certain vertical distance apart. However, this is not a very natural definition to consider, because when we make braids out of physical rope and pull them as tight as we can, the endpoints of the braid do not necessarily form a straight line, as we can see pictured:

Figure 1: We can see that the perpendicularity of the endpoints and the "flat" behavior of the endpoints being stuck in a row is obviously not ideal.

As such, we are not capturing the maximally tightened behavior of the braid. We believe the fix for this issue is the following: choose the endpoints in any suitable way, maximize thickness over all presentations of the braid with those endpoints, then compose this maximized presentation with itself. After this, tighten the composed braid maximally and consider its minimal ropelength divided by 2. Next, compose this maximally tightened composed braid with itself to obtain a

---PAGE_BREAK---

new braid and consider this new ropelength divided by 3. Repeat this process indefinitely. We then define the minimal ropelength of an element of the braid group to be the limit of the sequence of minimal ropelengths of the composed braids divided by the number of compositions. The geometric intuition behind this decision is that when we compose the braid (with fixed endpoints) with itself an arbitrary number of times, the actual ideal presentation of the braid, as we would see from making it out of rope, is somewhere within this maximally tightened composition; then all we need to do is cut out this ideal presentation. This process of arbitrary composition, intuitively, allows us to "forget the endpoints" that we have chosen.
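In our notation, this limiting quantity can be written as follows. The convergence of the sequence (credited to Section 6 in the acknowledgements) is the kind of statement that would follow from subadditivity via Fekete's lemma; we record that route only as a plausible sketch, not as the text's actual argument:

```latex
% R_n: minimal ropelength over the n-fold self-composition of the
% framed class; the normalized limit defines the ropelength of the
% underlying braid-group element beta.
R_n := \inf_{B \in \mathcal{B}_F^{*n}} Rl(B), \qquad
Rl(\beta) := \lim_{n \to \infty} \frac{R_n}{n}.
% Sketch: if (R_n) is subadditive, i.e. R_{m+n} \le R_m + R_n
% (stack the two tightened compositions), then Fekete's lemma gives
% \lim_{n \to \infty} R_n / n = \inf_n R_n / n.
```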
With this strategy in mind, we spend Sections 3 and 4 of this paper laying out the necessary definitions and results for a ropelength model of braid presentations with fixed endpoints. In Section 5 we lay out our more general process for finding ropelength minimizers in greater detail.

## 3 Preliminary Definitions
**Definition 3.1 (Braid Frame).** Let $C_2$ be a collection of $n$ equally spaced points on the $x$-axis in $\mathbb{R}^3$. Now let $C_1 = C_2 + \{(0,0,b)\}$ be the Minkowski sum, where $b \in \mathbb{R}$ with $b > 0$. We call the collection

$$F = \{C_1, C_2\}$$

a *braid frame*.

We give an example of a braid frame.

*Example 1.* Let $C_2 = \{(1/3, 0, 0), (2/3, 0, 0), (1, 0, 0)\}$ and take $C_1 = C_2 + \{(0,0,1)\} = \{(1/3, 0, 1), (2/3, 0, 1), (1, 0, 1)\}$. Then we let $F = \{C_1, C_2\}$ and we can picture $F$ as follows:

Figure 2: The bottom row of purple points depicts the set $C_2$ and the top row of purple points depicts the set $C_1$. Together, they make the frame $F = \{C_1, C_2\}$.

**Definition 3.2 (braid strand, endpoints, and braid presentation).** Let $F = \{C_1, C_2\}$ be a braid frame and let $\gamma : [0,1] \to \mathbb{R}^3$ be a smooth ($C^k$) curve that is (weakly) monotonic in $z$ such that $\gamma(0) \in C_1$ and $\gamma(1) \in C_2$, where $\gamma$ is perpendicular at its endpoints to the line containing $\gamma(0)$ and $\gamma(1)$. We call $\gamma$ a *braid strand* and $\gamma(0)$ and $\gamma(1)$ the *endpoints* of $\gamma$.

If, for a given frame $F$, we have a collection $S$ of $n$ braid strands $\gamma_i : [0, 1] \to \mathbb{R}^3$ ($i \in \{1, \dots, n\}$) such that $\gamma_i(t) \neq \gamma_j(s)$ for all $t, s \in [0, 1]$ whenever $i \neq j$, then we call the structure $B = (F, S)$ a *framed presentation of an $n$-braid* or a *framed braid presentation*.

*Example 2.* It would be rather cumbersome to give explicit parameterizations of a disjoint union of smooth strands which form a framed braid presentation as we have defined, so we present this example with a picture:

---PAGE_BREAK---

Figure 3: With the red, green, and blue curves, we note that each of them is drawn to meet the frame perpendicularly at its endpoints. Taken along with $F$ from the previous example, we have a framed braid presentation.

We note that we can have two different framed braid presentations which contain all of the same crossing information, such that one presentation can be continuously deformed into the other within the ambient space $\mathbb{R}^3$. Since the two framed presentations carry all of the same "relevant" information, we define an equivalence on braid presentations.
**Definition 3.3 (equivalence of framed braid presentations).** Let $F$ be a braid frame and let $B_0 = (F, S_0)$ and $B_1 = (F, S_1)$ be framed braid presentations. We say $B_0$ is equivalent to $B_1$, denoted $B_0 \simeq B_1$, if there exist $n$ ambient isotopies

$$H^i : [0, 1] \times [0, 1] \to \mathbb{R}^3$$

relative to the respective endpoints of the $i$-th strands where, after a suitable change of coordinates, for each $\alpha_i \in S_0$ there is a $\beta_i \in S_1$ such that $H^i(s, 0) = \alpha_i(s)$, $H^i(s, 1) = \beta_i(s)$, and the collection $S_t = \{H^i(s, t) : s \in [0, 1]\}_{i=1}^n$ forms a framed braid presentation $B_t = (F, S_t)$ for all $t \in [0, 1]$.

*Example 3.* We convey this example of equivalence pictorially:

Figure 4: Starting with the framed braid presentation on the far left, we can obtain the framed braid presentation on the far right as depicted by the middle picture. In the middle picture, we continuously push the blue strand outward while also continuously pushing the red and green strands until they are straight.

---PAGE_BREAK---
**Definition 3.4 (composition of framed braid presentations).** Let $F = \{C_2, C_1 = C_2 + \{(0,0,b)\}\}$ (for some $b > 0$) be a braid frame. We label the points of $C_1$ and $C_2$ from left to right by $1, \dots, n$. Let $B_1 = (F, S_1)$ and $B_2 = (F, S_2)$ be two braid presentations. We define the composition of the braid presentations $B_1$ and $B_2$, denoted $B_1 * B_2$, in the following manner:

Let $\alpha_i : [0, 1] \to \mathbb{R}^3$ be the $i$-th strand of $B_1$; by this we mean that $\alpha_i(0)$ is the $i$-th point of $C_1$. Suppose $\alpha_i(1)$ is the $j$-th point of $C_2$, and let $\beta_j : [0, 1] \to \mathbb{R}^3$ be the $j$-th strand of $B_2$. We reparametrize $\alpha_i$ and $\beta_j$ so that they are defined on $[0, 1/2]$ and $[1/2, 1]$ respectively (while keeping both strands smooth ($C^k$)), and we rename the reparametrizations $\alpha_i$ and $\beta_j$. In addition, we translate $\alpha_i$ along the $z$-axis by $(0, 0, b)$ (and again name the resulting curve $\alpha_i$). Now we construct the curve $\gamma_i : [0, 1] \to \mathbb{R}^3$ defined by

$$
\gamma_i(t) :=
\begin{cases}
\alpha_i(t), & t \in [0, 1/2] \\
\beta_j(t), & t \in [1/2, 1]
\end{cases}
$$

(where $\gamma_i$ is smooth ($C^k$)). We then consider the frame

$$ F^* = \{C_2, C_2 + \{(0,0,2b)\}\} \text{ and the collection } S^* = \{\gamma_i\}_{i=1}^n. $$

We then define $B_1 * B_2$ as

$$ B_1 * B_2 := (F^*, S^*). $$
*Example 4.* We convey this example of braid composition pictorially:

Figure 5: Here we have composed our framed braid presentation (from the previous examples) with itself. Note that where each strand of the top presentation ends is where the bottom presentation begins; this continuation of strands from the first presentation to the second must be carried out in a smooth manner. Note also that the resulting composed braid has a different frame than its two component braid presentations. Lastly, since in this example we are composing two braid presentations within the same isotopy class, the order of the composition does not matter; in general, however, the composition operation is not commutative.

We can easily observe that since all the $\gamma_i$ in the above definition are smooth, and the strands $\alpha_i$ and $\beta_j$ were monotonic, had endpoints in the frame, and met the frame perpendicularly, each $\gamma_i$ is a braid strand of $B_1 * B_2$.
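Definition 3.4 can be sketched numerically for discretized strands. The following is our own minimal illustration (the names and the sampling convention are ours, not the text's); it joins strands by matching endpoints rather than tracking the permutation explicitly:

```python
import numpy as np

def compose(B1, B2, b):
    # B1, B2: lists of discretized strands; each strand is an (m, 3)
    # array of sample points ordered from its top endpoint (height b)
    # down to its bottom endpoint (height 0).
    shift = np.array([0.0, 0.0, b])
    composed = []
    for alpha in B1:
        # Translate the B1 strand up by (0, 0, b); its new bottom
        # endpoint coincides with the top endpoint of exactly one
        # strand of B2.
        shifted = alpha + shift
        for beta in B2:
            if np.allclose(beta[0], shifted[-1]):
                # Concatenate, keeping the junction point only once.
                composed.append(np.vstack([shifted, beta[1:]]))
                break
    return composed
```

Composing the two-strand "swap" with itself, for instance, returns each strand to its starting horizontal position, with the composed frame spanning heights $0$ to $2b$.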
Now we turn our attention towards developing a ropelength model for our framed braid presentations. In order to define a maximum thickness for a given framed presentation, we require the following geometric notion, also used in [6]. For any three non-collinear points $x, y, z \in \mathbb{R}^3$, there is a unique circle sitting in $\mathbb{R}^3$ containing $x$, $y$, and $z$; we denote the radius of this circle by $r(x, y, z)$. As in [6], we can continuously extend $r(x, y, z)$ if it is defined on a smooth ($C^k$) curve. We recount this construction here: Let $\gamma : [a, b] \to \mathbb{R}^3$ be a simple, smooth ($C^k$) curve such that $x = \gamma(t)$, $y = \gamma(s)$, and $z = \gamma(u)$ for some $t, s, u \in [a, b]$. Then define

$$ r(x,y,y) := \lim_{u \to s} r(x,y,z). $$

---PAGE_BREAK---
We can define other such extensions of $r$ (such as $r(x, x, y)$ or $r(x, z, z)$) similarly. As a note, we can also consider cases such as

$$r(x, x, x) := \lim_{s,u \to t} r(x, y, z),$$

and we note that $r(x, x, x)$ is actually the radius of curvature of the curve at $x$. Now we can give our definition of maximal thickness for a framed braid presentation.

**Definition 3.5.** Let $B = (F, S)$ be a framed braid presentation. We define the maximum thickness of a framed braid presentation, $\Delta[B]$, in terms of the local thickness $\Delta_x(B)$, where $x$ is a point on any strand $\gamma_i$, by

$$\Delta_x(B) := \inf_{y,z \in S} r(x,y,z) \text{ and } \Delta[B] := \inf_{x \in S} \Delta_x(B).$$

As a sanity check, we would like to show that this definition corresponds to a maximum thickness for a given framed braid presentation. Using [3, 6], we know that for any $C^1$ presentation of a link $L$, the maximal thickness defined from the global radius of curvature is equal to the normal injectivity radius (i.e. the largest radius of an injective tube composed of open disks normal to the curve, centered on the curve). Let $B = (F, S)$ be a framed braid presentation with $\Delta[B] = \tau > 0$. We smoothly close the braid into a presentation of a link, $L$, in such a manner that $\Delta[L] = \tau$, and immediately apply the result from [3] to see that the functional $\Delta$ corresponds to the thickness of the link. Since $\Delta[B] = \tau$ was not changed by closing up our braid, we conclude that the global radius of curvature defined over a framed braid presentation corresponds to the maximal thickness of the presentation.

Now that we have a notion of maximum thickness for a given framed presentation of a braid, we would like to numerically quantify just how tightly wound a given framed presentation is. For this, we turn to [3, 6, 7]. These papers describe the ropelength model for a given presentation of a knot (or link): the ropelength of a presentation is the total length of the strands divided by the maximum thickness of the presentation. We now define the corresponding version of ropelength for a framed braid presentation as follows:
**Definition 3.6.** Let $F$ be a braid frame with $B = (F, S)$ a framed braid presentation. In addition, let $\Delta[B]$ be the maximum thickness of $B$, and let $\ell(B)$ be the sum of the lengths of all strands of $B$. The ropelength of $B$ is defined as

$$Rl(B) := \ell(B)/\Delta[B].$$

We now direct our attention to the following situation: in the real world, when we buy rope of a given thickness and length, it is obvious that for a given braid we can make a tightest version of that braid out of rope (given that we have enough rope). In order to have a somewhat physically accurate model for braids, we would like our model to allow for presentations of braids that can be maximally tightened for every framed braid type. So we seek to answer the following question: "For every (framed) braid type, does there exist a tightest (framed) braid presentation?" Of course, using our new definitions we can rephrase the question as follows: "For every framed isotopy class $\mathcal{B}_F$, does there exist a framed braid presentation $B^* \in \mathcal{B}_F$ such that

$$\inf_{B \in \mathcal{B}_F} Rl(B) = Rl(B^*)?$$

If such a $B^*$ exists, we call it a *ropelength minimizer of* $\mathcal{B}_F$. We note that we can alternatively denote the minimal ropelength of a given framed isotopy class by $Rl(\mathcal{B}_F)$.

In order to prove that ropelength minimizers exist for framed presentations, we will have to consider sequences of framed braid presentations. We give a definition of a sequence of framed braid presentations and of what it means for such a sequence to converge.

**Definition 3.7.** Let $F$ be a frame and, for each $i \in \{1, \dots, n\}$, let $\{\gamma_m^i\}_{m=1}^\infty$ be a sequence of curves such that the collection $S_m = \{\gamma_m^i\}_{i=1}^n$ forms a framed braid presentation

$$B_m = (F, S_m)$$

for each $m \in \mathbb{N}$. We call $B_m$ a *sequence of framed braid presentations*. If each $\gamma_m^i$ converges (uniformly) to a curve $\gamma_*^i$, then we say $B_m$ converges (uniformly) to $B = (F, S_*)$, where $S_* = \{\gamma_*^i\}_{i=1}^n$, and we express this as $B_m \rightarrow B$.

---PAGE_BREAK---
It is important to note that if we have a convergent sequence of framed braid presentations $B_m \to B$, the limiting framed collection of curves $B$ need not be a framed braid presentation. This is because we can have a sequence in which the braid strands never touch but converge to two braid strands which do intersect, so that the limiting collection is not a framed braid presentation. However, we have much better control over the convergence if we consider sequences of framed braid presentations $B_m$ with thickness $\Delta[B_m] \ge \tau > 0$ for all $m$: then the framed collection of curves to which the sequence converges must be a braid presentation. We prove this fact in Section 4.
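A small numerical illustration of this failure mode (our own construction, not from the text): each pair of strands below is disjoint and monotone in $z$, but the gap between the strands shrinks to zero as $m$ grows, so the limiting pair touches at mid-height and is not a braid presentation:

```python
import numpy as np

def min_strand_distance(s1, s2):
    # smallest distance between sample points of two discretized strands
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=2)
    return d.min()

def pinching_pair(m, samples=101):
    # Two strands, monotone in z, that bulge toward the plane x = 1/2
    # but stay exactly 1/m apart at mid-height; as m -> infinity the
    # limiting strands intersect at the point (1/2, 0, 1/2).
    t = np.linspace(0.0, 1.0, samples)
    bump = np.sin(np.pi * t)          # 0 at the endpoints, 1 at mid-height
    left = np.stack([(0.5 - 0.5 / m) * bump, 0 * t, t], axis=1)
    right = np.stack([1 - (0.5 - 0.5 / m) * bump, 0 * t, t], axis=1)
    return left, right

# each pair is disjoint, but the gap shrinks like 1/m
gaps = [min_strand_distance(*pinching_pair(m)) for m in (1, 10, 100)]
```

Since the gap bounds the achievable tube radius, the thickness of such a sequence is not bounded below by any $\tau > 0$, which is exactly the hypothesis that rules this behavior out.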
## 4 Basic Results

Our goal is to show that for every framed isotopy class of a braid, there is a ropelength minimizing framed presentation. In order to show this, we need to take some detours.

**Lemma 5.** Let $\mathcal{B}_F$ be a framed isotopy class of a braid. Then

$$ \Delta[\mathcal{B}_F] := \sup_{B \in \mathcal{B}_F} \Delta[B] $$

exists and is finite.

*Proof.* Since our frame is fixed, we cannot make any of the strands thicker than half of the distance between two consecutive top (or bottom) endpoints, else the normal tubes would self-intersect along their interiors. Because of this, $\Delta[B]$ is bounded for all $B \in \mathcal{B}_F$. By the completeness of the real numbers, we conclude that $\Delta[\mathcal{B}_F]$ exists and is finite. $\square$

The importance of this result is that we can consider a maximizing sequence $\{B_m\}_{m=1}^\infty \subset \mathcal{B}_F$ such that $\Delta[B_m] \to \Delta[\mathcal{B}_F]$.
We note that in [3] it was proven that maximum thickness is upper semicontinuous with respect to the $C^0$ topology on the space of $C^{0,1}$ curves. It was also shown in [3] that a given presentation of a link $L$ with thickness $\tau > 0$ is $C^{1,1}$ with Lipschitz constant $1/2\tau$. We can obtain a similar result for framed braid presentations as follows:

**Lemma 6.** If $B = (F, S)$ is a framed braid presentation with thickness $\tau = \Delta[B] > 0$, then each curve $\gamma^i \in S$ is $C^{1,1}$ with Lipschitz constant $1/2\tau$.

*Proof.* We smoothly close the braid into a presentation of a link, $L$, in such a manner that $\Delta[L] = \tau$. We can then apply Lemma 4 of [3] to conclude that the presentation $L$ must be $C^{1,1}$ with Lipschitz constant $1/2\tau$. We then recover $B$ from $L$ by cutting it at the proper points, and conclude that it too is $C^{1,1}$ with Lipschitz constant $1/2\tau$. $\square$

**Lemma 7.** Let $F$ be a braid frame and let $B_m \to B$ be a uniformly convergent sequence of framed braid presentations such that $\Delta[B_m] \ge \tau > 0$ for all $m \in \mathbb{N}$. Then $B$ is a framed braid presentation with $\Delta[B] \ge \tau$ and is equivalent to all but finitely many of the $B_m$.

*Proof.* We first want to show that $B$ is actually a framed braid presentation. Since $\Delta[B_m] \ge \tau > 0$ for all $m$, by the upper semicontinuity of $\Delta$ we know

$$ \Delta[B] \ge \tau > 0. $$
Since $\Delta[B] > 0$, we know that none of the strands of $B$ intersect each other. In addition, let $\gamma_m^i : [a_i, b_i] \to \mathbb{R}^3$ be the $i$-th strand of the $m$-th framed presentation $B_m$, with $\gamma_m^i \to \gamma^i$. Since $\gamma_m^i(a_i) = \gamma_1^i(a_i)$ and $\gamma_m^i(b_i) = \gamma_1^i(b_i)$ for all $m \in \mathbb{N}$, we must have that

$$ \gamma^i(a_i) = \gamma_1^i(a_i) \text{ and } \gamma^i(b_i) = \gamma_1^i(b_i). $$

So each $\gamma^i$ respects the frame $F$. In addition, each $\gamma_m^i$ is weakly monotonic in the $z$-axis; hence (wlog) for $t \le s$ we have

$$ \pi_z(\gamma_m^i(t)) \le \pi_z(\gamma_m^i(s)). $$

---PAGE_BREAK---
|
| 239 |
+
|
| 240 |
+
Then by limit inequality of real numbers, we have
|
| 241 |
+
|
| 242 |
+
$$
|
| 243 |
+
\lim_{m \to \infty} \pi_z(\gamma_m^i(t)) \le \lim_{m \to \infty} \pi_z(\gamma_m^i(s)),
|
| 244 |
+
$$
|
| 245 |
+
|
| 246 |
+
and so we conclude for $s \le t$ and
|
| 247 |
+
|
| 248 |
+
$$
|
| 249 |
+
\pi_z(\gamma^i(t)) \le \pi_z(\gamma^i(s)).
|
| 250 |
+
$$
|
| 251 |
+
|
| 252 |
+
Thus $B$ is a framed collection of strands that do not intersect and are monotonic in the $z$-axis, hence $B = (F, \{\gamma^i\}_{i=1}^n)$ is a framed braid presentation. We now prove that $B$ is equivalent to all but finitely many $B_m$. Since $\Delta[B] \ge \tau > 0$, each curve of $B$ must be surrounded by an embedded normal tube of diameter $\tau$. In addition, all but finitely many of the strands of the $B_m$ lie within the respective $i$-th normal tube, since the convergence is assumed to be uniform. By the $C^1$ convergence of the previous lemma, we also have that all but finitely many of the strands are transverse to each normal disk of the normal tube. Each $\gamma_m^i$ is isotopic to $\gamma^i$ by a straight-line homotopy within the normal disks. Hence the result is proven. $\square$
Now we can answer our existence question for ropelength minimizers from framed braid presentations.

**Theorem 8.** Fix a nonzero length $l \in \mathbb{R}$ and pick a frame $F$. Consider all framed presentations isotopic to $B$ of total length at most $l$; call this class $\mathcal{B}_F$. Then there exists a framed ropelength minimizer $B^* \in \mathcal{B}_F$.
*Proof.* By Lemma 5 we know that thickness is bounded and so $\Delta[\mathcal{B}_F]$ is finite. Hence there must exist a sequence $B_m \in \mathcal{B}_F$ that maximizes thickness, i.e. $\Delta[B_m] \rightarrow \Delta[\mathcal{B}_F]$ as $m \rightarrow \infty$. Now take each $B_m$ and smoothly connect arcs of finite length, stemming from each point of $F$, onto each $B_m$ such that
1. The thickness of the new object is the same as $\Delta[B_m]$ for all $m$.

2. The resulting object can be represented as a smooth embedding of a circle.

3. The total lengths of the resulting sequence remain uniformly bounded.
We demonstrate these rules with the following picture:

Figure 6: Starting with a framed braid presentation (left), we smoothly connect arcs around the endpoints of each $B_m$ (pictured in orange) so that we obtain a knot presentation, $K_m$. We connect these arcs in such a way that $\Delta[K_m] = \Delta[B_m]$ and the lengths of the $K_m$ are uniformly bounded. We note that it does not matter whether or not the resulting knot is trivial; we just require a sequence of knots.
We call the resulting sequence of knot presentations $K_m$. We note that $\Delta[K_m]$, the maximal thickness of these knot presentations, must approach $\Delta[\mathcal{B}_F]$ as $m \to \infty$. We also note that we can parameterize each $K_m$ by a function $\gamma_m : S^1 \to \mathbb{R}^3$. Now we have a sequence of knots with uniformly bounded lengths, so we can apply the Arzelà-Ascoli Theorem as stated in [8] to extract a uniformly convergent subsequence $\gamma_{m_k} \to \gamma_*$.
Since we are considering a thickness-maximizing sequence, there must be some $\tau \in \mathbb{R}$ and some $M$ such that $\Delta[\gamma_{m_k}] \ge \tau > 0$ for all $k > M$. We apply Lemma 6 from [3] to assert that $\Delta[\gamma_*] \ge \tau > 0$, so we know that the strands of $\gamma_*$ are non-intersecting. We point out that, by construction, $F$ must have remained constant in each $\gamma_{m_k}$. Hence we can easily cut out a framed collection

---PAGE_BREAK---
of strands from each $\gamma_{m_k}$ and $\gamma_*$ that we call $B_{m_k}$ and $B_*$ respectively. We must have that $B_{m_k} \in \mathcal{B}_F$ for all $m_k$. We now want to show that $B_*$ is our ropelength minimizer. We note that $B_{m_k}$ converges uniformly to $B_*$ by construction of these subsequences. Since each $B_{m_k}$ is a framed braid presentation equivalent to $\mathcal{B}_F$ and $\Delta[B_{m_k}] \ge \tau > 0$ for some $\tau \in \mathbb{R}$ and sufficiently large $k$, we can apply Lemma 7 to conclude that $B_*$ must also be a framed braid presentation that is equivalent to all (but finitely many) of the $B_{m_k}$. Since length is lower semicontinuous and thickness is upper semicontinuous, we must have that
$$Rl(B_*) \le l/\Delta[\mathcal{B}_F],$$

and in particular that
$$\Delta[B_*] \ge \Delta[\mathcal{B}_F].$$
Since all but finitely many of the $B_{m_k}$ are equivalent to $B_*$, we conclude that $B_*$ must be in $\mathcal{B}_F$. Then since $\Delta[\mathcal{B}_F]$ is supremal, we must have
$$\Delta[B_*] \le \Delta[\mathcal{B}_F].$$
So $\Delta[B_*] = \Delta[\mathcal{B}_F]$. Hence we have shown the existence of a framed ropelength minimizer. $\square$
# 5 A Definition of Ropelength For More General Braids
We would like to define a minimal ropelength for any element of the braid group, which corresponds more closely to the braids we would make out of physical rope. To do this we consider the following construction:
Let $\mathcal{B}_F$ be an isotopy class of framed braids whose presentations have total length at most $l$. Now using Theorem 8, we know that there exists a framed ropelength minimizing presentation $B_* \in \mathcal{B}_F$, and we consider $Rl(B_*)$, the minimal ropelength of $\mathcal{B}_F$. Using our definition of braid composition in conjunction with our notion of framed equivalence, we can define the equivalence class $\mathcal{B}_{Fn}^n = \mathcal{B}_F * \cdots * \mathcal{B}_F$ inductively (where $Fn$ is the natural frame after the $n$-th composition). We consider the sequence
$$R_n = \frac{Rl(\mathcal{B}_{Fn}^n)}{n}$$
as $n \to \infty$. We want to show that the sequence $R_n$ converges. First we will show that $R_n$ always contains a convergent subsequence. We note that since the thickness of our presentations is always greater than zero and the total lengths of our presentations are always non-zero, we must have $0 < R_n$ for all $n$. Next, we note that since $Rl(\mathcal{B}_{Fn}^n)$ is infimal, $Rl(\mathcal{B}_{Fn}^n) \le nRl(\mathcal{B}_F)$, where $nRl(\mathcal{B}_F)$ corresponds to the ropelength of the composition of presentations given by
$$B_*^n \in \mathcal{B}_F^n,$$
where $B_* \in \mathcal{B}_F$ is the ropelength minimizer of $\mathcal{B}_F$. Hence

$$R_n = \frac{Rl(\mathcal{B}_{Fn}^n)}{n} \le Rl(\mathcal{B}_F),$$

for all $n$. Thus we have
$$0 < R_n \le Rl(\mathcal{B}_F), \text{ for all } n \in \mathbb{N}.$$
So by the Bolzano-Weierstrass theorem, we know that there must exist some convergent subsequence $R_{n_k} \to R^*$. Now we claim that $\limsup_{n\to\infty} R_n = R^*$. We let $\varepsilon > 0$ and choose $m \in \mathbb{N}$ such that
$$R_m < R^* + \frac{\varepsilon}{2}.$$

Now choose $k \in \mathbb{N}$ such that

$$\frac{Rl(\mathcal{B}_F)}{k} < \frac{\varepsilon}{2}.$$

Select $n > km$. Then by the division algorithm we write

$$n = lm + r, \quad l \ge k, \quad 0 \le r < m.$$

---PAGE_BREAK---
Now since $Rl(\mathcal{B}_F^n)$ is infimal and we are composing braid presentations, we can obtain the following inequality
$$R_n = R_{lm+r} \le \frac{lm\,R_{lm} + rRl(\mathcal{B}_F)}{lm+r},$$
as the tightest presentation of the $lm$-times composed braid, composed with $r$ copies of the ropelength minimizer for $\mathcal{B}_F$, cannot be tighter than the ropelength minimizer for $\mathcal{B}_{F^n}$. Then since $lm+r > lm$ and $r \le m$, we have
$$R_n \le \frac{lm\,R_{lm} + rRl(\mathcal{B}_F)}{lm+r} \le R_{lm} + \frac{mRl(\mathcal{B}_F)}{lm}.$$
By our assumptions on $l, k, r$, and $m$ (noting that the same subadditivity gives $R_{lm} \le R_m$), we obtain
$$R_n \le R_{lm} + \frac{mRl(\mathcal{B}_F)}{lm} \le R^* + \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = R^* + \varepsilon.$$
Hence if we have a convergent subsequence converging to $R^*$, then the lim sup of the sequence must equal $R^*$ as well. Now assume that the lim inf of the sequence is not equal to the lim sup. Under this assumption we can certainly extract a subsequence $R_{n_j}$ such that
$$R_{n_j} \to \liminf_{n \to \infty} R_n,$$
but applying the above result to this subsequence would force $\liminf_{n \to \infty} R_n = \limsup_{n \to \infty} R_n$, contradicting our assumption. Thus, our sequence converges.
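The argument above is an instance of Fekete's subadditivity lemma: if a nonnegative sequence satisfies $a_{m+n} \le a_m + a_n$, then $a_n/n$ converges to $\inf_n a_n/n$. A minimal numerical sketch in Python, with a hypothetical subadditive sequence standing in for the ropelength of the $n$-fold composition, illustrates the convergence:

```python
import math

# Hypothetical stand-in for the ropelength of the n-fold composition:
# a(n) = c*n + sqrt(n) is subadditive, since sqrt(m + n) <= sqrt(m) + sqrt(n),
# so Fekete's lemma predicts a(n)/n -> inf_n a(n)/n = c.
def a(n, c=3.0):
    return c * n + math.sqrt(n)

# Spot-check subadditivity: a(m + n) <= a(m) + a(n).
for m in range(1, 40):
    for n in range(1, 40):
        assert a(m + n) <= a(m) + a(n) + 1e-12

# The normalized sequence decreases toward c = 3.0.
for n in (1, 10, 100, 10_000):
    print(n, a(n) / n)
```

The estimate $R_n \le R_{lm} + mRl(\mathcal{B}_F)/(lm)$ plays exactly the role of the subadditive squeeze here.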
As we just saw, for any isotopy class $\mathcal{B}_F$, the sequence $R_n = Rl(\mathcal{B}_F^n)/n$ converges. Since this limit is well defined for every isotopy class, we define the *minimal ropelength of a word in the braid group* as the limit of $R_n$. We demonstrate the intuition behind this construction in the following picture:
Figure 7: We begin the construction by drawing the $n=1$ case. On the left, we took any presentation of our braid and applied Theorem 8, resulting in the image on the right (we omitted the thicknesses for a clearer picture). In the $n=2$ case, we took the previous minimizer and composed it with itself (left). Then we took the ropelength minimizer from the isotopy class of the resulting braid (right). By this construction we can see that the minimal ropelength of the resulting braid must be at least double the minimal ropelength of the previous braid. We continue this process infinitely. Using the same principle, we note that the minimum ropelength of the current step is always at most the sum of the minimum ropelengths of the previous two steps. We included an orange box in the $n=3$ case which demonstrates the intuition of "cutting out the ideal braid".

---PAGE_BREAK---
As stated in the introduction, we consider this repeated composition construction in order to "forget the endpoints", thereby allowing us to achieve the tightest braid presentation as it would be made from rope. Although we have not yet proven it, we believe that this construction will not depend on the positioning of the endpoints. In other words, if we pick two presentations that are equivalent (in the traditional sense), then after sufficiently many compositions and tightenings we will be able to cut out identical braid presentations from the braid. We also believe that we will be able to find the ideal presentation as the middle part of the third composition of our construction; however, this is yet to be shown as well.
We have also yet to find any ropelength minimizers, framed or otherwise, because computing such minimizers is actually rather challenging. We do note, however, that once we are able to find ropelength minimizing framed braid presentations, we will then easily be able to find ropelength minimizing ideal presentations. We turn our attention in the next section to a discussion of framed ropelength minimizers.
# 6 Ropelength Minimizers
Now that we have a definition of a minimal ropelength for both framed and non-framed braids, the question becomes "how do we find such ropelength minimizers?" Not surprisingly, this question has proven to be very difficult to answer in even the simplest cases. Hence the question of finding such minimizers exceeds the scope of this paper.
With this in mind, we turn our attention to the braid $\sigma_1^n$ in the braid group on two strands. Although we have yet to actually find the ropelength minimizer for $\sigma_1^n$, by constructing models made of real rope we arrive at two possible candidates for what the ropelength minimizer would look like. The first candidate has one strand straight, acting as a core, while the other strand wraps around it (staying as tightly wrapped as possible) $n$ times. From here on out we call this the "single twist candidate," since of the two strands, one is twisted around the other. The other candidate, which we call the "double twist candidate," is given by letting both strands twist around each other equally (in the tightest manner). We give a picture of the single and double twist candidates in the following figure.
Figure 8: On the left we draw the ideal framed single twist candidate (unthickened so we can better see the behavior of the strands). We note that the red strand should be perfectly straight with the blue strand tightly wrapped around it. On the right, we have drawn the ideal framed double-twist candidate (again, unthickened so we can better see the strands' behavior). We note that both the red strand and the blue strand are moving around each other in a symmetric manner.
Between these two candidates, we conjecture that the single-twist candidate is the ropelength minimizing presentation for $\sigma_1^n$. The intuition behind our conjecture is that by allowing one strand to be completely straight, the presentation uses less of the thickened rope than when both strands move around each other.
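This intuition can be probed with a crude geometric model: treat each strand's centerline as a circular helix of constant radius and pitch, and compare total centerline lengths. The radii and pitches below (rope diameter $d$; wrapper at radius $d$ with pitch $d$ per turn; double helix at radius $d/2$ with pitch $2d$ per turn) are our own simplifying assumptions chosen so that neighbouring coils can touch, not derived contact conditions, so this Python sketch is only illustrative:

```python
import math

def helix_length(radius, pitch, turns):
    # Arc length of a circular helix: sqrt((2*pi*r)^2 + pitch^2) per turn.
    return turns * math.sqrt((2 * math.pi * radius) ** 2 + pitch ** 2)

d = 1.0       # rope diameter (assumed unit)
turns = 18    # i.e. 36 half-twists

# Single twist: a straight core of length turns*d plus a wrapper helix
# whose centerline stays at distance d from the core.
single = turns * d + helix_length(radius=d, pitch=d, turns=turns)

# Double twist: two congruent helices of radius d/2 about a common axis,
# with pitch 2*d per turn so the interleaved strands can stack.
double = 2 * helix_length(radius=d / 2, pitch=2 * d, turns=turns)

print(single, double)
```

Under these assumptions the single-twist total comes out slightly shorter, in the same direction as the conjecture; the true contact geometry of tightly wound rope is of course more subtle.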
---PAGE_BREAK---
Since our model seeks to describe braids made out of real rope, we decided to construct our candidates out of thickened rope and experimentally determine which of the two has a shorter total length. The experiment had the following instructions:
1. Take a piece of rope and measure its length precisely by stretching it out. Fold the rope in half and stick a pencil in the fold. Hold the ends of the rope tightly, and twist the pencil as many times as you can, counting the total number of half-twists. This, of course, produces the double twist candidate.
2. Take the rope, fold it over, and stick a pencil in the fold. As tightly as you can, keep one strand straight and wrap the other strand around it for the same number of half-twists as in step 1. Once you have done this, mark the ends of the rope and measure the amount of rope it took to create this presentation.
3. Compare the two total lengths. Which is shorter?
After carrying out this experiment with a rope of length $29\frac{7}{8}$" and diameter $\frac{1}{8}$", we obtained the following data:
| Candidate | Diameter | Length After 36 Half-Twists |
| --- | --- | --- |
| Single Twist | 1/8" | 18 1/4" |
| Double Twist | 1/8" | 29 7/8" |
This means there is approximately a 38.9% decrease in the amount of rope needed to make the single-twist candidate compared to the double-twist candidate. This particular experiment (although admittedly limited) supports our intuition that the single-twist candidate is the likely ropelength minimizer for $\sigma_1^n$.
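The 38.9% figure can be checked directly. Here we take the single-twist candidate to use $18\frac{1}{4}$" of rope and the double-twist candidate to use the full $29\frac{7}{8}$", consistent with the claim that the single twist needs less rope:

```python
single = 18 + 1 / 4   # inches of rope used by the single-twist candidate
double = 29 + 7 / 8   # inches of rope used by the double-twist candidate

decrease = 1 - single / double   # relative saving of single over double
print(f"{decrease:.1%}")         # -> 38.9%
```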
# 7 Where Do We Go From Here?
In this section, we would like to discuss some potential subjects to investigate with our ropelength model. From the previous section, we know that one potential area of interest is to construct ropelength minimizers for non-trivial braids of two or more strands.
Although we spent the previous section talking about the challenges of finding ropelength minimizers, there is another interesting question involving them: "are ropelength minimizing braid presentations unique?" In [3], Cantarella et al. showed that ropelength minimizing presentations of knots and links are not necessarily unique by constructing two different classes of ropelength minimizing presentations of links. The reason we ask this question is that we find ropelength minimizing presentations of braids by first considering convergent subsequences of framed presentations that minimize ropelength, then taking the limit of repeated compositions of these framed minimizers. Since we are only considering convergent subsequences of these framed presentations, there is a chance that we can extract two subsequences that converge to two different ropelength minimizing presentations within the same isotopy class. By answering this question, we can determine how much control we have over the convergence of these framed subsequences. If the presentations turn out not to be unique, this opens up further questions, such as "are there any braid types whose minimizers are unique?"
In addition to finding ropelength minimizing presentations, there are several other avenues that we are considering as we go forward. One task is to define the "roundness" of a braid. As mentioned in the Acknowledgements section, this project was based on a Summer REU which sought to define how "round" a given presentation of a braid is. The question was based on the observation that when some braids are made out of physical rope, they can lie flat on a table, whereas other braids will appear much more round and cannot be laid flat. An example of this is the standard three-stranded braid used for braiding hair, given by the word $(\sigma_2^{-1}\sigma_1)^n$, and the standard four-stranded braid found in Challah bread, given by $(\sigma_3\sigma_2^2\sigma_1^{-1}\sigma_2^{-2})^n$. Making these braids as tight as possible out of physical rope, one will find that the standard hair braid looks fairly "flat" and the Challah braid looks fairly "round." A good starting point for trying

---PAGE_BREAK---
to define the roundness of a braid would be to look at the convex hulls of the cross sections of a ropelength minimizing presentation of the braid, measure the eccentricities of these convex hulls, and then average using an integral. However, this will not necessarily work in general, as one needs to account for how the convex hull can rotate along the cross sections.
Another interesting question is "how does the ropelength of a braid presentation relate to the ropelength of a knot or link?" This is a fairly broad question as it stands, and we can interpret it in a few possible ways. One interpretation is "Given a presentation of a knot or link, how does the ropelength of the knot or link compare to the ropelength of the corresponding (framed) braid whose closure is the knot or link?" When considering this question, we would probably want to look at the corresponding braids (equivalent through Markov moves) that have a minimal number of strands; otherwise, there would be an unnecessary length of rope, which could lead to poor bounds on the ropelength of the knot or link. Another interpretation is "Given a ropelength minimizing presentation of a braid, if we close up the braid in a 'natural' way, can we obtain a ropelength minimizing presentation of the corresponding knot or link?" Of course, the first step to answering this question would be to decide what is meant by the "natural way". It should be clear that any 'natural' way of closing up the braid would require not adding any length of rope to the presentation, so we cannot use the same method that we used in proving Lemma 6. As a result, we believe that the best way to define a natural presentation would be to take the presentation and curve the braid, without changing the thickness or the length of rope used, until its top endpoints meet its bottom endpoints. This account of a 'natural' braid closure is only intuitive, of course, and it has been very challenging to give a rigorous construction of this process. The reason we are interested in investigating these two questions is that, in general, computing ropelength and minimal ropelength is very hard for a given knot presentation. As such, it is rather helpful to create bounds on the minimal ropelength. The papers [4, 5], among others, have been written on creating tighter bounds for the minimal ropelength of knots and links, and there is a possibility of creating a tighter bound here. This question is interesting to explore because it might allow the ropelength model of braids to contribute to existing research.
As we saw, there are many interesting questions one can ask about the ropelength model for braids. However, much as in physical knot theory, we can create other theories that model different physical phenomena. Besides the ropelength model for knots and links, there are also so-called "knot energy models". The inspiration for knot energies comes from the following situation, as described in [9]: make a knot out of a conductive wire, then run a current through the wire. Due to Coulomb's law, the strands of the knot will repel each other. This repelling force, in turn, creates an ideal presentation of the knot. Knot energies are models which seek to describe these sorts of situations in order to find other ideal conformations of knots and links. As it turns out, the inventor Alexander Graham Bell discovered in [2] that in an (analog) telephone transmission circuit, when the transmitting cables are twisted around one another, the electrical disturbance from their inductive action is reduced. In other words, the transmitted signal is cleaner when the two transmitting cables form a two stranded braid. This braiding action seems to be a natural way to define braid energies, and minimizing these braid energies will give ideal presentations of such braids. Many interesting questions can arise depending on the type of cross-talk interference being considered, such as: are there cross-talk minimizing braid presentations for each isotopy class of braids? What is the optimal way to braid $n$ strands so that cross-talk is minimized, and would it be unique? We can also ask: "What is the relationship between the ideal braid-energy model of a given braid and the ideal ropelength model of the braid?" We conjecture that, given the nature of the twisted pair, the ideal presentations of these two models will likely correspond.
As we can see, there are many interesting questions that can be asked when we develop physical theories of braids.
---PAGE_BREAK---
# 8 Conclusion
In this paper, we defined a ropelength model for braids based on the works of [3, 6, 7]. We adapted this model by first creating a so-called "framed braid presentation," which required a choice of endpoints. We then developed a ropelength model for these framed presentations by defining the maximal thickness of a presentation through an upper-semicontinuous functional created from the global radius of curvature. Then we showed that there exist ropelength minimizing braid presentations on a given frame. However, since frames are problematic for an accurate physical model, we defined the actual ropelength minimizing braid presentation by using a sequence that composes the braid with itself an arbitrary number of times and tightens the resulting braid after each composition. This was done so we can "forget the endpoints" and "cut out the ideal presentation" somewhere along the middle of this maximally tightened braid that has been composed with itself some arbitrary number of times. We acknowledge that we have yet to make either of these notions of "forgetting the endpoints" and "cutting out the ideal braid" rigorous; however, the notions make intuitive sense. We showed that this construction is well defined for all braid types, though we have also yet to prove that the resulting presentation will be the same regardless of the choice of frame. We also discussed the challenges of finding ropelength minimizers for even the simplest of braid types, i.e. those of the form $\sigma_1^n$ for arbitrary $n \in \mathbb{N}$. We noted that in order to find the desired form of ropelength minimizing presentations, it is likely sufficient to study the framed ropelength minimizers first. We also stated our belief that the framed ropelength minimizer of $\sigma_1^n$ is likely given by a straight core of rope with the second rope wrapping around it $n$ times, and we gave an experimental and intuitive justification of why we believe this to be true. Lastly, we mentioned potential areas of development and possible uses for this model, as well as the development of other, similar physical models for braids.

---PAGE_BREAK---
# References
[1] Adams, C. (2004). The knot book: An elementary introduction to the mathematical theory of knots. American Mathematical Society.
[2] Bell, A. G. (July 19, 1881). *Telephone-Circuit* (United States Patent Office Patent No. 244,426).
[3] Cantarella, J., Kusner, R. B., & Sullivan, J. M. (2002). *On the minimum ropelength of knots and links*. Inventiones Mathematicae, **150**(2), 257-286. https://doi.org/10.1007/s00222-002-0234-y
[4] Denne, E., Diao, Y., & Sullivan, J. M. (2006). *Quadrisecants give new lower bounds for the ropelength of a knot*. Geometry & Topology, **10**(1), 1-26. https://doi.org/10.2140/gt.2006.10.1
[5] Diao, Y. (2020). *Braid index bounds ropelength from below*. Journal of Knot Theory and Its Ramifications, **29**(04), 2050019. https://doi.org/10.1142/S0218216520500194
[6] Gonzalez, O., & Maddocks, J. H. (1999). *Global curvature, thickness, and the ideal shapes of knots*. Proceedings of the National Academy of Sciences of the United States of America, **96**, 4769-4773.
[7] Gonzalez, O., & de la Llave, R. (2002). *Existence of ideal knots*. Journal of Knot Theory and Its Ramifications.
[8] Burago, D., Burago, Y., & Ivanov, S. (2001). *A course in metric geometry* (Graduate Studies in Mathematics, Vol. 33). American Mathematical Society.
[9] Stasiak, A., Katritch, V., & Kauffman, L. H. (Eds.). (1998). *Ideal knots*. World Scientific.
samples/texts_merged/7856253.md
ADDED
---PAGE_BREAK---
Two-scale analysis for very rough thin layers. An explicit characterization of the polarization tensor
Ionel Sorin Ciuperca, Ronan Perrussel, Clair Poignard
To cite this version:
Ionel Sorin Ciuperca, Ronan Perrussel, Clair Poignard. Two-scale analysis for very rough thin layers. An explicit characterization of the polarization tensor. Journal de Mathématiques Pures et Appliquées, Elsevier, 2011, 95 (3), pp.227-295. 10.1016/j.matpur.2010.12.001. inria-00401835
|
| 12 |
+
|
| 13 |
+
HAL Id: inria-00401835
|
| 14 |
+
|
| 15 |
+
https://hal.inria.fr/inria-00401835
|
| 16 |
+
|
| 17 |
+
Submitted on 6 Jul 2009
|
| 18 |
+
|
| 19 |
+
**HAL** is a multi-disciplinary open access
|
| 20 |
+
archive for the deposit and dissemination of sci-
|
| 21 |
+
entific research documents, whether they are pub-
|
| 22 |
+
lished or not. The documents may come from
|
| 23 |
+
teaching and research institutions in France or
|
| 24 |
+
abroad, or from public or private research centers.
|
| 25 |
+
|
| 26 |
+
L'archive ouverte pluridisciplinaire **HAL**, est
|
| 27 |
+
destinée au dépôt et à la diffusion de documents
|
| 28 |
+
scientifiques de niveau recherche, publiés ou non,
|
| 29 |
+
émanant des établissements d'enseignement et de
|
| 30 |
+
recherche français ou étrangers, des laboratoires
|
| 31 |
+
publics ou privés.
|
| 32 |
+
---PAGE_BREAK---
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE

*Two-scale analysis for very rough thin layers. An explicit characterization of the polarization tensor*

Ionel Ciuperca — Ronan Perrussel — Clair Poignard

Research Report No. 6975 — June 2009 — Thème NUM
Two-scale analysis for very rough thin layers. An explicit characterization of the polarization tensor

Ionel Ciuperca*, Ronan Perrussel†, Clair Poignard‡

Thème NUM — Systèmes numériques
Équipes-Projets MC2

Research Report No. 6975 — June 2009 — 23 pages

**Abstract:** We study the behaviour of the steady-state voltage potential in a material composed of a two-dimensional object surrounded by a very rough thin layer and embedded in an ambient medium. The roughness of the layer is described by a quasi $\varepsilon$-periodic function, $\varepsilon$ being a small parameter, while the mean thickness of the layer is of magnitude $\varepsilon^{\beta}$, where $\beta \in (0, 1)$. Using two-scale analysis, we replace the very rough thin layer by appropriate transmission conditions on the boundary of the object, which lead to an explicit characterization of the polarization tensor of Vogelius and Capdeboscq (ESAIM: M2AN. 2003; 37:159-173). This paper extends the previous works of Poignard (Math. Meth. Appl. Sci. 2009; 32:435-453) and Ciuperca et al. (Research report INRIA RR-6812), in which $\beta \ge 1$.

**Key-words:** Asymptotic analysis, Finite Element Method, Laplace equations

* Université de Lyon, Université Lyon 1, CNRS, UMR 5208, Institut Camille Jordan, Bât. Braconnier, 43 boulevard du 11 novembre 1918, F-69622 Villeurbanne Cedex, France

† Laboratoire Ampère UMR CNRS 5005, Université de Lyon, École Centrale de Lyon, F-69134 Écully, France

‡ INRIA Bordeaux-Sud-Ouest, Institut de Mathématiques de Bordeaux, CNRS UMR 5251 & Université de Bordeaux 1, 351 cours de la Libération, 33405 Talence Cedex, France
# Contents

1 Introduction
  1.1 Description of the geometry
  1.2 Statement of the problem
2 Main results
  2.1 Variational formulations
  2.2 Approximate transmission conditions
3 Some preliminary results
  3.1 Preliminary estimates
  3.2 Change of variables
  3.3 First convergence results
4 Computation of the limit of $E_\varepsilon''$
  4.1 Two-scale convergence of $\varepsilon^{-\beta}\partial_s z_\varepsilon$ and $\partial_t z_\varepsilon$
  4.2 Proofs of Theorem 2.3 and Theorem 2.7
    4.2.a Proof of Theorem 2.3
    4.2.b Proof of Theorem 2.7
5 Conclusion
Figure 1: Geometry of the problem.

# 1 Introduction

Consider a material composed of a two-dimensional object surrounded by a very rough thin layer. We study the asymptotic behaviour of the steady-state voltage potential when the thickness of the layer tends to zero. We present approximate transmission conditions that account for the effects of the layer without fully modeling it. This paper ends a series of three papers dealing with the steady-state voltage potential in domains with a thin layer of non-constant thickness. Unlike [16, 17], in which the layer is weakly oscillating, and unlike [11], which deals with the periodic roughness case, we consider here the case of a very rough thin layer. This means that the period of the oscillations is much smaller than the mean thickness of the layer. More precisely, we consider a period equal to $\varepsilon$, while the mean thickness of the layer is of magnitude $\varepsilon^\beta$, where $\beta$ is a positive constant strictly smaller than 1. As in [11], the motivation comes from a collaborative research project on the modeling of silty soil; however, we are confident that our result is useful for other applications, particularly in the electromagnetic research area.

## 1.1 Description of the geometry

For the sake of simplicity, we deal with the two-dimensional case; the three-dimensional case can be studied in the same way, up to a few appropriate modifications.

Let $\Omega$ be a bounded smooth domain of $\mathbb{R}^2$ with connected boundary $\partial\Omega$. For $\varepsilon > 0$, we split $\Omega$ into three subdomains: $\Omega^1$, $\Omega_\varepsilon^m$ and $\Omega_\varepsilon^0$. $\Omega^1$ is a smooth domain strictly embedded in $\Omega$. We denote by $\Gamma$ its connected boundary. The domain $\Omega_\varepsilon^m$ is the thin oscillating layer surrounding $\Omega^1$ (see Fig. 1). We denote
by $\Gamma_\varepsilon$ the oscillating boundary of $\Omega_\varepsilon^m$:

$$ \Gamma_\varepsilon = \partial\Omega_\varepsilon^m \setminus \Gamma. $$

The domain $\Omega_\varepsilon^0$ is defined by

$$ \Omega_\varepsilon^0 = \Omega \setminus (\overline{\Omega^1} \cup \Omega_\varepsilon^m). $$

We also write

$$ \Omega^0 = \Omega \setminus \overline{\Omega^1}. $$

We suppose that the curve $\Gamma$ is a smooth closed curve of $\mathbb{R}^2$ of length 1, parametrized by its curvilinear coordinate:

$$ \Gamma = \{ \gamma(t),\ t \in \mathbb{T} \}, $$

where $\mathbb{T}$ is the torus $\mathbb{R}/\mathbb{Z}$. Denote by $\nu$ the normal to $\Gamma$ outwardly directed to $\Omega^1$. The rough boundary $\Gamma_\varepsilon$ is defined by

$$ \Gamma_{\varepsilon} = \{\gamma_{\varepsilon}(t),\ t \in \mathbb{T}\}, $$

where

$$ \gamma_{\varepsilon}(t) = \gamma(t) + \varepsilon^{\beta} f\left(t, \frac{t}{\varepsilon}\right) \nu(t), $$

with $0 < \beta < 1$ and $f$ a smooth, (1,1)-periodic, positive function such that $\frac{1}{2} \le f \le \frac{3}{2}$. Observe that the layer oscillates fast compared with the size $\varepsilon^\beta$ of the perturbation.
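As a concrete illustration (assumptions ours, not from the paper), one can build $\Gamma_\varepsilon$ for a circle $\Gamma$ of perimeter 1 and the sample profile $f(t,\tau) = 1 + 0.4\cos(2\pi\tau)$, which satisfies $\frac{1}{2} \le f \le \frac{3}{2}$; the numerical check below confirms that the distance of $\Gamma_\varepsilon$ to $\Gamma$ is exactly $\varepsilon^\beta f(t, t/\varepsilon)$.

```python
import numpy as np

# Illustrative sketch (sample geometry, not from the paper): the rough
# boundary Gamma_eps for a circle of perimeter 1, with the oscillation
# profile f(t, tau) = 1 + 0.4*cos(2*pi*tau), so that 1/2 <= f <= 3/2.
R = 1.0 / (2.0 * np.pi)          # radius giving a curve of length 1

def gamma(t):
    return np.stack([R * np.cos(2 * np.pi * t), R * np.sin(2 * np.pi * t)], axis=-1)

def nu(t):
    # outward normal (gamma_2', -gamma_1') for this arclength parametrization
    return np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=-1)

def f(t, tau):
    return 1.0 + 0.4 * np.cos(2 * np.pi * tau)

def gamma_eps(t, eps, beta):
    # gamma_eps(t) = gamma(t) + eps^beta * f(t, t/eps) * nu(t)
    return gamma(t) + (eps ** beta) * f(t, t / eps)[..., None] * nu(t)

eps, beta = 1e-2, 0.5
t = np.linspace(0.0, 1.0, 2001)
pts = gamma_eps(t, eps, beta)
# the distance of the rough boundary to Gamma equals eps^beta * f(t, t/eps)
dist = np.linalg.norm(pts, axis=-1) - R
assert np.allclose(dist, eps ** beta * f(t, t / eps))
```

With $\varepsilon = 10^{-2}$ and $\beta = 1/2$, the layer thickness $\varepsilon^\beta \approx 0.1$ is much larger than the oscillation period $\varepsilon = 0.01$, which is the "very rough" regime of the paper.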
## 1.2 Statement of the problem

Define the piecewise regular function $\sigma_\varepsilon$ by

$$ \forall x \in \Omega, \quad \sigma_{\varepsilon}(x) = \begin{cases} \sigma_1, & \text{if } x \in \Omega^1, \\ \sigma_m, & \text{if } x \in \Omega_{\varepsilon}^m, \\ \sigma_0, & \text{if } x \in \Omega_{\varepsilon}^0, \end{cases} $$

where $\sigma_1, \sigma_m$ and $\sigma_0$ are given positive$^1$ constants, and let $\sigma: \Omega \to \mathbb{R}$ be defined by$^2$

$$ \sigma(x) = \begin{cases} \sigma_1, & \text{if } x \in \Omega^1, \\ \sigma_0, & \text{if } x \in \Omega^0. \end{cases} $$

Let $g$ belong to $H^s(\Omega)$, for $s \ge 1$. We consider the unique solution $u_\varepsilon$ to

$$
\begin{align}
\nabla \cdot (\sigma_\varepsilon \nabla u_\varepsilon) &= 0 &&\text{in } \Omega, \tag{1a} \\
u_\varepsilon|_{\partial\Omega} &= g|_{\partial\Omega}. \tag{1b}
\end{align}
$$

Let $u$ be the unique solution to the limit problem

$$
\begin{align}
\nabla \cdot (\sigma \nabla u) &= 0 &&\text{in } \Omega, \tag{2a} \\
u|_{\partial\Omega} &= g|_{\partial\Omega}. \tag{2b}
\end{align}
$$

$^1$The same results hold if $\sigma_1, \sigma_m$ and $\sigma_0$ are given complex-valued regular functions whose imaginary parts (respectively real parts) have the same sign.

$^2$ $\sigma$ represents the piecewise-constant conductivity of the whole domain $\Omega$.
Since the domains $\Omega$, $\Omega^1$ and $\Omega^0$ are smooth, the above function $u$ belongs to $H^s(\Omega^1)$ and to $H^s(\Omega^0)$. In the following we suppose that $s > 3$; hence, by Sobolev embeddings, there exists $s_0 > 0$ such that $u \in C^{1,s_0}(\Omega^1)$ and $u \in C^{1,s_0}(\Omega^0)$. We aim to give the first two terms of the asymptotic expansion of $u_\varepsilon$ as $\varepsilon$ tends to zero.

Several papers are devoted to the modeling of thin layers: see for instance [8, 7, 16] for smooth thin layers and [1, 2, 4, 14, 11] for rough layers. However, as far as we know, the case of very rough thin layers has not been treated yet. In [10], Vogelius and Capdeboscq derive a general representation formula for the steady-state potential in the very general framework of inhomogeneities of low volume fraction, including the case of very rough thin layers. However, their result involves the polarization tensor, which is not given precisely. This paper can be seen as an explicit characterization of the polarization tensor for very rough thin layers.

Our main result (see Theorem 2.3) is weaker than the results of [16, 11], since we do not prove error estimates. Actually, using variational techniques, we prove that the sequence $(u_\varepsilon - u)/\varepsilon^\beta$ converges weakly in $L^p(\Omega)$, for all $p \in (1,2)$, to a function $z$. This function $z$ is uniquely determined by the elliptic problem (11), and the convergence holds strongly in $L^p$, for $p \ge 1$, far from the layer (see Theorem 2.7).

In the present paper it seems difficult to obtain the $H^1$ strong convergence in $\Omega$ as in [11]. The main reason is that, according to Bonder *et al.*, the best Sobolev trace constant blows up as $\varepsilon$ tends to zero in the case of a very rough layer. Therefore, the analysis performed previously cannot be applied. To obtain our present result, we use a variational technique based on two-scale analysis. We emphasize that this technique can be applied to obtain the limit problems presented in [16, 11], even if error estimates are harder to achieve in this way. We conclude by observing that the two-scale convergence identifies the limit to be reached: another asymptotic analysis has to be performed to obtain error estimates, but the expected result is already sketched here.

The outline of the paper is the following. In the next section we state our main results precisely, using a variational formulation. Section 3 is devoted to preliminary results; in particular, we establish the first two limits, which are easy to obtain. In Section 4, we end the proof of the main theorems by computing the limit of $E_\varepsilon''$ defined by (19). We then conclude the paper with numerical simulations, which illustrate the theoretical results. We shall first present our main results.

# 2 Main results

## 2.1 Variational formulations

Denote by $z_\varepsilon$ the element of $H_0^1(\Omega)$ defined by

$$z_{\varepsilon} = \frac{u_{\varepsilon} - u}{\varepsilon^{\beta}}.$$
We shall obtain the limit of $z_\varepsilon$ with the help of variational techniques. Since $g$ belongs to $H^s(\Omega)$, for $s > 3$, we denote by $g + H_0^1(\Omega)$ the affine space

$$
g + H_0^1(\Omega) = \{ v \in H^1(\Omega) : v|_{\partial\Omega} = g|_{\partial\Omega} \}.
$$

The variational formulation of Problem (1) is

Find $u_\varepsilon \in g + H_0^1(\Omega)$ such that: $\displaystyle \int_\Omega \sigma_\varepsilon \nabla u_\varepsilon \cdot \nabla \varphi = 0$, $\quad \forall \varphi \in H_0^1(\Omega)$,

and respectively for Problem (2)

Find $u \in g + H_0^1(\Omega)$ such that: $\displaystyle \int_{\Omega} \sigma \nabla u \cdot \nabla \varphi = 0$, $\quad \forall \varphi \in H_0^1(\Omega)$.

Taking the difference between the above equalities, we find that $z_\varepsilon$ belongs to $H_0^1(\Omega)$ and satisfies

$$
\int_{\Omega} \sigma_{\varepsilon} \nabla z_{\varepsilon} \cdot \nabla \varphi = - \frac{1}{\varepsilon^{\beta}} \int_{\Omega} (\sigma_{\varepsilon} - \sigma) \nabla u \cdot \nabla \varphi, \quad \forall \varphi \in H_{0}^{1}(\Omega), \tag{3}
$$

or equivalently

$$
\int_{\Omega} \sigma \nabla z_{\varepsilon} \cdot \nabla \varphi = - \int_{\Omega} (\sigma_{\varepsilon} - \sigma) \nabla z_{\varepsilon} \cdot \nabla \varphi - \frac{1}{\varepsilon^{\beta}} \int_{\Omega} (\sigma_{\varepsilon} - \sigma) \nabla u \cdot \nabla \varphi, \quad \forall \varphi \in H_{0}^{1}(\Omega). \tag{4}
$$
**Notation 2.1** (Normal and tangential derivatives). Denote by $\theta(t)$ the tangent vector to $\Gamma$ at the point $\gamma(t)$:

$$
\forall t \in \mathbb{T}, \quad \theta(t) = (\gamma'_1(t), \gamma'_2(t))^T.
$$

The normal vector $\nu$ outwardly directed to $\Omega^1$ is then given by

$$
\forall t \in \mathbb{T}, \quad \nu(t) = (\nu_1(t), \nu_2(t))^T = (\gamma'_2(t), -\gamma'_1(t))^T.
$$

In the following, for any $x \in \Gamma$ and for any function $\varphi$ smooth enough, we denote the normal and tangential derivatives of $\varphi$ respectively by

$$
\begin{align*}
\frac{\partial \varphi^+}{\partial \nu}(x) &= \lim_{y \to x, y \in \Omega^0} \nabla \varphi(y) \cdot \nu, &
\frac{\partial \varphi^-}{\partial \nu}(x) &= \lim_{y \to x, y \in \Omega^1} \nabla \varphi(y) \cdot \nu, \\
\frac{\partial \varphi}{\partial \theta}(x) &= \nabla \varphi(x) \cdot \theta.
\end{align*}
$$

We also write

$$\varphi^{+}(x) = \lim_{y \to x, y \in \Omega^0} \varphi(y), \qquad \varphi^{-}(x) = \lim_{y \to x, y \in \Omega^1} \varphi(y).$$

**Notation 2.2** (Green operator). We introduce the Green operator $G : H^{-1}(\Omega) \to H_0^1(\Omega)$ given by $G(\psi) = \varphi$ if and only if $\varphi$ is the unique solution of the problem

$$
-\nabla \cdot (\sigma\nabla\varphi) = \psi \quad \text{in } \Omega, \tag{5a}
$$

$$
\varphi|_{\partial\Omega} = 0. \tag{5b}
$$

It is well known that if $\psi \in L^{p'}(\Omega)$ with $p' > 2$, then $\varphi \in W^{2,p'}(\Omega^k)$, $k = 0,1$; hence, by Sobolev embeddings, there exists $s_0 > 0$ such that $\varphi \in C^{1,s_0}(\bar{\Omega}^1)$ and $\varphi \in C^{1,s_0}(\bar{\Omega}^0)$.
## 2.2 Approximate transmission conditions

Let $f_{min}$ and $f_{max}$ be

$$f_{min} = \min_{t, \tau \in \mathbb{T}} f(t, \tau) \quad \text{and} \quad f_{max} = \max_{t, \tau \in \mathbb{T}} f(t, \tau).$$

For the sake of simplicity, we suppose that

$$\frac{1}{2} \le f_{min} \le f_{max} \le \frac{3}{2}.$$

For any fixed $t \in \mathbb{T}$ and $s \in \mathbb{R}$, we denote by $Q(s,t)$ the one-dimensional set

$$\forall (s,t) \in \mathbb{R} \times \mathbb{T}, \quad Q(s,t) = \{\tau \in \mathbb{T},\ s \le f(t, \tau)\},$$

and let $q(s,t)$ be the Lebesgue measure of $Q(s,t)$:

$$\forall (s,t) \in \mathbb{R} \times \mathbb{T}, \quad q(s,t) = \int_{\mathbb{T}} \chi_{Q(s,t)}(\tau) \, d\tau, \tag{6}$$

where $\chi_A$ is the characteristic function of the set $A$. Observe that $q$ satisfies $0 \le q(s,t) \le 1$, $q(s,t) = 1$ for $s < f_{min}$ and $q(s,t) = 0$ for $s > f_{max}$. Moreover, since $q$ is a measurable function, it belongs to $L^{\infty}$. We also write

$$\tilde{f}(t) = \int_{0}^{1} f(t, \tau) \, d\tau. \tag{7}$$

Our approximate transmission conditions require the two following functions:

$$\forall t \in \mathbb{T}, \quad r_1(t) = \int_0^{f_{max}} \frac{q^2(s,t)}{\sigma_m(\gamma(t))q(s,t) + \sigma_0(\gamma(t))[1-q(s,t)]} \, ds, \tag{8}$$

$$\forall t \in \mathbb{T}, \quad r_2(t) = \int_{f_{min}}^{f_{max}} \frac{q(s,t)[1-q(s,t)]}{\sigma_0(\gamma(t))q(s,t) + \sigma_m(\gamma(t))[1-q(s,t)]} \, ds. \tag{9}$$
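As a numerical illustration (the profile and the conductivities below are arbitrary choices, not data from the paper), the quantities $q$, $\tilde{f}$, $r_1$ and $r_2$ of (6)-(9) can be approximated by sampling the fast variable $\tau$; for a profile independent of $\tau$ the computation recovers the constant-thickness values $r_1 = f/\sigma_m$ and $r_2 = 0$ of Remark 2.8.

```python
import numpy as np

# Numerical sketch (sample data, not from the paper): approximate q(s,t) of (6),
# f_tilde of (7) and the coefficients r_1, r_2 of (8)-(9), here with constant
# conductivities sigma_m, sigma_0.
sigma_m, sigma_0 = 2.0, 1.0
tau = np.linspace(0.0, 1.0, 4000, endpoint=False)   # one period of the fast variable

def trap(y, x):
    # simple trapezoidal rule (kept explicit for portability across NumPy versions)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def coefficients(t, f, n_s=1501):
    s = np.linspace(0.0, 1.5, n_s)                  # [0, f_max], with f_max <= 3/2
    # q(s,t): Lebesgue measure of {tau : s <= f(t,tau)}, approximated by sampling
    qs = np.array([np.mean(f(t, tau) >= si) for si in s])
    f_tilde = float(np.mean(f(t, tau)))
    r1 = trap(qs**2 / (sigma_m * qs + sigma_0 * (1.0 - qs)), s)
    r2 = trap(qs * (1.0 - qs) / (sigma_0 * qs + sigma_m * (1.0 - qs)), s)
    return f_tilde, r1, r2

# Oscillating profile: f_tilde = 1, and r_2 > 0 since 0 < q < 1 on (0.6, 1.4)
f_osc = lambda t, tau: 1.0 + 0.4 * np.cos(2.0 * np.pi * tau)
ft, r1, r2 = coefficients(0.0, f_osc)
assert abs(ft - 1.0) < 1e-6 and r1 > 0.0 and r2 > 0.0

# Constant-thickness sanity check (Remark 2.8): r_1 = f/sigma_m and r_2 = 0
ft_c, r1_c, r2_c = coefficients(0.0, lambda t, tau: 1.2 + 0.0 * tau)
assert abs(r1_c - 1.2 / sigma_m) < 1e-3 and abs(r2_c) < 1e-12
```

For the oscillating profile, $r_1$ differs in general from $\tilde{f}/\sigma_m$: this deviation is precisely the roughness effect encoded in the polarization tensor.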
To simplify notations, we still denote by $r_k$ the function on $\Gamma$ equal to $r_k \circ \gamma^{-1}$, for $k = 1, 2$. The aim of the paper is to prove the following theorem.

**Theorem 2.3 (Main result).** There exists $z \in \cap_{1<p<2} L^p(\Omega)$ such that $z_\varepsilon$ converges weakly to $z$ in $L^p(\Omega)$ for all $p \in (1, 2)$. The limit $z$ is the unique solution to

$$
\forall \psi \in \cup_{p'>2} L^{p'}(\Omega), \qquad
\begin{aligned}
\int_{\Omega} z\psi \, dx = {}& \int_{\Gamma} (\sigma_0 - \sigma_m) \left(\tilde{f} + (\sigma_0 - \sigma_m)r_1\right) \frac{\partial u^+}{\partial\nu} \frac{\partial \varphi^+}{\partial\nu} \, d\Gamma \\
& + \int_{\Gamma} (\sigma_0 - \sigma_m) \left(\tilde{f} + (\sigma_0 - \sigma_m)r_2\right) \frac{\partial u}{\partial\theta} \frac{\partial \varphi}{\partial\theta} \, d\Gamma,
\end{aligned}
\tag{10}
$$

where $\varphi = G(\psi)$.

**Remark 2.4.** The existence and uniqueness of $z \in \cap_{1<p<2} L^p(\Omega)$ solving (10) come from the fact that for any $p' > 2$ the dual of $L^{p'}(\Omega)$ is $L^p(\Omega)$ with $1/p + 1/p' = 1$, and that the right-hand side of (10) is a continuous linear map from $L^{p'}(\Omega)$ to $\mathbb{R}$ with argument $\psi$.
**Remark 2.5.** From the uniqueness of $z$ we deduce that the whole sequence $z_\varepsilon$ converges to $z$.

**Remark 2.6 (Strong formulation).** We can write a strong formulation of (10). Supposing that $z$ is regular enough on $\Omega^0$ and on $\Omega^1$, and taking appropriate test functions in (10), we infer that $z$ satisfies the following problem:

$$ \nabla \cdot (\sigma_k \nabla z) = 0 \quad \text{in } \Omega^k, \quad k=0,1, \tag{11a} $$

$$ z^{+} - z^{-} = \left(1 - \frac{\sigma_m}{\sigma_0}\right) [\tilde{f} + (\sigma_0 - \sigma_m)r_1] \frac{\partial u^{+}}{\partial \nu} \quad \text{on } \Gamma, \tag{11b} $$

$$ \sigma_0 \frac{\partial z^+}{\partial \nu} - \sigma_1 \frac{\partial z^-}{\partial \nu} = \frac{\partial}{\partial \theta} \left[ (\sigma_0 - \sigma_m) (\tilde{f} + (\sigma_0 - \sigma_m)r_2) \frac{\partial u}{\partial \theta} \right] \quad \text{on } \Gamma, \tag{11c} $$

$$ z|_{\partial\Omega} = 0. \tag{11d} $$

Moreover, using the regularity of $u$ in $H^s(\Omega^0)$, with $s > 3$, we easily infer the existence and uniqueness of $z$ in $H^{s-1}(\Omega^0)$ and $H^{s-1}(\Omega^1)$.

**Theorem 2.7 (Strong convergence far from the layer).** Let $D$ be an open set such that $\Gamma \subset D$ and $\bar{D} \subset \Omega$. Then the sequence $z_\varepsilon$ converges strongly to $z$ in $L^p(\Omega \setminus D)$, for all $p \ge 1$.

**Remark 2.8** (The case of a thin layer with constant thickness). In the particular case where $f$ is independent of $\tau$, we have $\tilde{f} = f(t)$ and

$$ q(s,t) = \begin{cases} 1 & \text{for } s \le f(t), \\ 0 & \text{for } s > f(t), \end{cases} \tag{12} $$

and

$$ r_1(t) = \frac{f(t)}{\sigma_m(\gamma(t))} \quad \text{and} \quad r_2(t) = 0. $$

Then (11) becomes

$$ \nabla \cdot (\sigma_k \nabla z) = 0 \quad \text{in } \Omega^k, \quad k=0,1, \tag{13a} $$

$$ z^{+} - z^{-} = \left( \frac{\sigma_0}{\sigma_m} - 1 \right) f \frac{\partial u^{+}}{\partial \nu} \quad \text{on } \Gamma, \tag{13b} $$

$$ \sigma_0 \frac{\partial z^+}{\partial \nu} - \sigma_1 \frac{\partial z^-}{\partial \nu} = \frac{\partial}{\partial \theta} \left( f(\sigma_0 - \sigma_m) \frac{\partial u}{\partial \theta} \right) \quad \text{on } \Gamma, \tag{13c} $$

$$ z|_{\partial\Omega} = 0, \tag{13d} $$

which is the result obtained in [16, 17].
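The values of $r_1$ and $r_2$ used in this remark follow directly from (8)-(9): with the step function $q$ of (12), the integrand of (8) equals $1/\sigma_m(\gamma(t))$ for $s < f(t)$ and vanishes for $s > f(t)$, while the integrand of (9) vanishes identically since $q(1-q) = 0$ when $q \in \{0,1\}$:

```latex
r_1(t) = \int_0^{f(t)} \frac{1}{\sigma_m(\gamma(t))}\, ds = \frac{f(t)}{\sigma_m(\gamma(t))},
\qquad
r_2(t) = \int_{f_{min}}^{f_{max}} \frac{q(s,t)\,[1-q(s,t)]}{\sigma_0(\gamma(t))\,q(s,t) + \sigma_m(\gamma(t))\,[1-q(s,t)]}\, ds = 0.
```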
# 3 Some preliminary results

## 3.1 Preliminary estimates

**Lemma 3.1.** The following estimates hold.

i) There exists $C > 0$ such that

$$ \|z_{\varepsilon}\|_{H^1(\Omega)} \le C\varepsilon^{-\beta/2}. $$

ii) For any $p \in (1, 2)$ there exists $C_p > 0$ such that

$$ \|z_{\varepsilon}\|_{L^p(\Omega)} \le C_p. $$

*Proof.* i): Take $\varphi = z_{\varepsilon}$ in (3) and use the regularity of $u$.

ii): For any $p \in (1, 2)$ we introduce the function $z_{\varepsilon p}$ defined on $\Omega$ by $z_{\varepsilon p}(x) = z_{\varepsilon}(x)|z_{\varepsilon}(x)|^{p-2}\chi_{\{z_{\varepsilon}(x) \neq 0\}}$. We have $z_{\varepsilon p}z_{\varepsilon} = |z_{\varepsilon}|^p$. Then we take $\varphi = G(z_{\varepsilon p})$ as a test function in (4); the left-hand side yields $\|z_{\varepsilon}\|_{L^p(\Omega)}^p$. Let $p_1 = \frac{p}{p-1} > 2$; then

$$
\|\nabla \varphi\|_{L^{\infty}(\Omega)} \le C_p \|z_{\varepsilon p}\|_{L^{p_1}(\Omega)} = C_p \|z_{\varepsilon}\|_{L^p(\Omega)}^{p-1},
$$

and using i) we easily see that the right-hand side of (4) can be bounded by a term of the form $C \|z_\varepsilon\|_{L^p(\Omega)}^{p-1}$. This gives the result. $\square$
## 3.2 Change of variables

We shall use the change of variables

$$
x = \alpha_{\varepsilon}(s, t), \tag{14}
$$

where $\alpha_{\varepsilon}: \mathbb{R} \times \mathbb{T} \rightarrow \mathbb{R}^2$ is the map given by

$$
\alpha_{\varepsilon}(s,t) = \gamma(t) + \varepsilon^{\beta}s\nu(t).
$$

Denote by $\kappa$ the curvature$^3$ of $\Gamma$. For $\varepsilon > 0$, we denote by $C_{\varepsilon}$ the rough cylinder

$$
C_{\varepsilon} = \{(s,t),\ t \in \mathbb{T},\ 0 \le s \le f(t, t/\varepsilon)\}.
$$

Let $d_0$ be such that

$$
0 < d_0 < \frac{1}{\|\kappa\|_{\infty}}. \tag{15}
$$

For all $\varepsilon \in (0, d_0^{1/\beta})$, $\alpha_\varepsilon$ is a diffeomorphism between the rough cylinder $C_\varepsilon$ and $\Omega_\varepsilon^m$. The Jacobian matrix $A_\varepsilon$ of $\alpha_\varepsilon$ equals

$$
\forall (s,t) \in (-1,1) \times \mathbb{T}, \quad A_{\varepsilon}(s,t) = J_0(t) \begin{pmatrix} \varepsilon^{\beta} & 0 \\ 0 & 1+\varepsilon^{\beta}s\kappa(t) \end{pmatrix},
$$

where

$$
\forall t \in \mathbb{T}, \quad J_0(t) = \begin{pmatrix} \nu_1(t) & -\nu_2(t) \\ \nu_2(t) & \nu_1(t) \end{pmatrix}.
$$

According to (15), $A_\varepsilon$ is invertible. Denote by $B_\varepsilon$ its inverse matrix:

$$
\forall (s,t) \in (-1,1) \times \mathbb{T}, \quad B_{\varepsilon}(s,t) = \begin{pmatrix} \varepsilon^{-\beta} & 0 \\ 0 & 1/(1+\varepsilon^{\beta}s\kappa(t)) \end{pmatrix} J_{0}^{T}(t).
$$

For any functions $v$ and $w$ belonging to $H^1(\mathbb{R}^2)$, we still denote by $v$ and $w$ the functions of the variables $(s,t)$ defined by

$$
\forall (s,t) \in (-1,1) \times \mathbb{T}, \quad v(s,t) = v \circ \alpha_{\varepsilon}(s,t), \quad w(s,t) = w \circ \alpha_{\varepsilon}(s,t).
$$

$^3$ $\kappa$ is the function defined by

$$
\forall t \in \mathbb{T}, \quad \nu'(t) = \kappa(t)\gamma'(t).
$$

Let $\nabla_{s,t}$ be the gradient operator $(\partial_s, \partial_t)^T$. Using the change of variables, and since $J_0^T = J_0^{-1}$, we obviously have on $(0, 2) \times \mathbb{T}$

$$
\begin{aligned}
(\nabla_x v \cdot \nabla_x w) \circ \alpha_\varepsilon &= (\nabla_{s,t} v)^T B_\varepsilon (B_\varepsilon)^T \nabla_{s,t} w, \\
&= \varepsilon^{-2\beta} \partial_s v \, \partial_s w + \frac{1}{(1+\varepsilon^{\beta} s \kappa)^2} \partial_t v \, \partial_t w.
\end{aligned}
\tag{16}
$$

Hence $(\nabla_x v \cdot \nabla_x w) \circ \alpha_\varepsilon$ is "close" to $\varepsilon^{-2\beta} \partial_s v \, \partial_s w + \partial_t v \, \partial_t w$ on $(0, 2) \times \mathbb{T}$.
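The identity (16) can be checked numerically (an illustration under our own assumptions, not from the paper): take $\Gamma$ a circle of length 1, for which $\kappa = 2\pi$, and two arbitrary smooth test fields $v$ and $w$; the derivatives in the $(s,t)$ variables are approximated by central finite differences.

```python
import numpy as np

# Numerical sketch (assumed example): verify identity (16) on a circle Gamma
# of length 1 (curvature kappa = 2*pi) for two arbitrary smooth fields v, w.
R = 1.0 / (2.0 * np.pi)
kappa = 2.0 * np.pi
eps, beta = 1e-2, 0.5

def alpha(s, t):
    # alpha_eps(s,t) = gamma(t) + eps^beta * s * nu(t), with nu the outward normal
    c, sn = np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)
    return np.array([R * c, R * sn]) + (eps ** beta) * s * np.array([c, sn])

v = lambda x: x[0] ** 2 + 3.0 * x[1]
w = lambda x: x[0] * x[1]
grad_v = lambda x: np.array([2.0 * x[0], 3.0])
grad_w = lambda x: np.array([x[1], x[0]])

s0, t0, h = 0.7, 0.3, 1e-6
def d(fun, i):
    # central finite difference of fun(s,t) in variable i (0: s, 1: t)
    p, m = [s0, t0], [s0, t0]
    p[i] += h; m[i] -= h
    return (fun(*p) - fun(*m)) / (2.0 * h)

vt = lambda s, t: v(alpha(s, t))   # v and w expressed in the (s,t) variables
wt = lambda s, t: w(alpha(s, t))

lhs = grad_v(alpha(s0, t0)) @ grad_w(alpha(s0, t0))
rhs = (eps ** (-2 * beta)) * d(vt, 0) * d(wt, 0) \
    + d(vt, 1) * d(wt, 1) / (1.0 + eps ** beta * s0 * kappa) ** 2
assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
```

Note the factor $\varepsilon^{-2\beta}$ on the $\partial_s$ term: the normal derivative is amplified by the flattening of the layer, which is why it dominates the two-scale analysis below.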
## 3.3 First convergence results

For any fixed $\psi \in \cup_{p'>2} L^{p'}(\Omega)$ we take $\varphi = G(\psi)$ as a test function in (4). We obtain

$$ \int_{\Omega} z_{\varepsilon} \psi \, dx = (\sigma_0 - \sigma_m) (E'_{\varepsilon} + E''_{\varepsilon}), \tag{17} $$

where

$$ E'_{\varepsilon} = \frac{1}{\varepsilon^{\beta}} \int_{\Omega_{\varepsilon}^{m}} \nabla u \cdot \nabla \varphi \, dx, \tag{18} $$

$$ E''_{\varepsilon} = \int_{\Omega_{\varepsilon}^{m}} \nabla z_{\varepsilon} \cdot \nabla \varphi \, dx. \tag{19} $$

We pass to the limit in the left-hand side of (17) thanks to Lemma 3.1. Up to an appropriate subsequence we infer

$$ \lim_{\varepsilon \to 0} \int_{\Omega} z_{\varepsilon} \psi \, dx = \int_{\Omega} z \psi \, dx. \tag{20} $$

It remains to obtain the limits of $E'_\varepsilon$ and $E''_\varepsilon$. The limit of $E'_\varepsilon$ is easy to compute. Actually, using the change of variables $(s, t)$ in the expression of $E'_\varepsilon$ we infer, for $\varepsilon$ small enough$^4$,

$$ E'_{\varepsilon} = \int_{\mathbb{T}} \int_{0}^{f(t,t/\varepsilon)} (1 + \varepsilon^{\beta} s \kappa(t)) \, \nabla u \circ \alpha_{\varepsilon}(s,t) \cdot \nabla \varphi \circ \alpha_{\varepsilon}(s,t) \, ds \, dt. \tag{21} $$

The regularity of $u$ and $\varphi$ implies that

$$ \sup_{s \in (0, f_{max})} \left\| \nabla u \circ \alpha_{\varepsilon}(s,\cdot) \cdot \nabla \varphi \circ \alpha_{\varepsilon}(s,\cdot) - \left( \frac{\partial u^{+}}{\partial \nu} \frac{\partial \varphi^{+}}{\partial \nu} + \frac{\partial u}{\partial \theta} \frac{\partial \varphi}{\partial \theta} \right) \circ \gamma \right\|_{L^2(\mathbb{T})} = O(\varepsilon^{\beta}). $$

We then deduce, from the weak convergence of $f(t, t/\varepsilon)$ to $\tilde{f}$, the limit of $E'_\varepsilon$:

$$ \lim_{\varepsilon \to 0} E'_{\varepsilon} = \int_{\Gamma} \left( \frac{\partial u^{+}}{\partial\nu} \frac{\partial \varphi^{+}}{\partial\nu} + \frac{\partial u}{\partial\theta} \frac{\partial \varphi}{\partial\theta} \right) \tilde{f} \, d\sigma_{\Gamma}. \tag{22} $$
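The weak convergence of $f(t, t/\varepsilon)$ to $\tilde{f}$ used above can be illustrated numerically (sample profile and test function of our choosing, not from the paper): for smooth $g$, the integral $\int_{\mathbb{T}} f(t, t/\varepsilon)\, g(t)\, dt$ approaches $\int_{\mathbb{T}} \tilde{f}(t)\, g(t)\, dt$ as $\varepsilon \to 0$.

```python
import numpy as np

# Numerical sketch (sample data): the oscillating thickness f(t, t/eps)
# converges weakly to its fast-variable mean f_tilde, i.e.
# int_T f(t, t/eps) g(t) dt -> int_T f_tilde(t) g(t) dt for smooth g.
f = lambda t, tau: 1.0 + 0.4 * np.cos(2.0 * np.pi * tau)   # here f_tilde = 1
g = lambda t: 2.0 + np.cos(2.0 * np.pi * t)

def osc_integral(eps, n=2_000_001):
    # trapezoidal rule on a grid fine enough to resolve the eps-oscillation
    t = np.linspace(0.0, 1.0, n)
    y = f(t, t / eps) * g(t)
    return np.sum(0.5 * (y[1:] + y[:-1])) * (t[1] - t[0])

limit = 2.0          # int_0^1 f_tilde(t) g(t) dt = int_0^1 g(t) dt
errs = [abs(osc_integral(e) - limit) for e in (0.11, 0.011, 0.0011)]
assert errs[0] > errs[1] > errs[2]     # the error decreases as eps -> 0
assert errs[2] < 1e-3
```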
Therefore we have proved that, up to a subsequence,

$$ (\sigma_0 - \sigma_m) \lim_{\varepsilon \to 0} E''_{\varepsilon} = \int_{\Omega} z\psi \, dx - (\sigma_0 - \sigma_m) \int_{\Gamma} \left( \frac{\partial u^{+}}{\partial\nu}\frac{\partial\varphi^{+}}{\partial\nu} + \frac{\partial u}{\partial\theta}\frac{\partial\varphi}{\partial\theta} \right) \tilde{f} \, d\sigma_{\Gamma}. \tag{23} $$

To end the proof of Theorem 2.3, it remains to determine the limit of $E''_\varepsilon$.

$^4$ i.e. such that $\varepsilon^\beta < d_0/f_{max}$.
|
| 464 |
+
|
| 465 |
+
# 4 Computation of the limit of $E_{\varepsilon}''$
The limit of $E_{\varepsilon}''$ is more difficult to obtain. For simplicity we still denote by $z_{\varepsilon}$ the composition $z_{\varepsilon} \circ \alpha_{\varepsilon}$. Using the change of variables $(s, t)$ we infer:

$$E_{\varepsilon}'' = \varepsilon^{\beta} \int_{T} \int_{0}^{f(t,t/\varepsilon)} (1 + \varepsilon^{\beta} s\kappa) \left( \frac{1}{\varepsilon^{2\beta}} \partial_s z_{\varepsilon} \partial_s \varphi + \frac{1}{(1 + \varepsilon^{\beta} s\kappa)^2} \partial_t z_{\varepsilon} \partial_t \varphi \right) ds dt.$$

Unlike for $E_{\varepsilon}'$, the derivatives of $z_{\varepsilon}$ inside the brackets do not converge strongly. In the following, we show that for all $M > f_{max}$ these derivatives two-scale converge in the cylinder $P_M = (-M, M) \times T$, for $\varepsilon$ tending to zero such that $\varepsilon^{\beta} \le d_0/M$.

Denote by $\Omega_M^\varepsilon$ the tubular neighbourhood of $\Gamma$ composed of the points at distance smaller than $\varepsilon^\beta M$ from $\Gamma$. By definition, $\alpha_\varepsilon$ is a diffeomorphism from $P_M$ onto $\Omega_M^\varepsilon$ and $\alpha_\varepsilon(P_M)$ contains $\Omega_\varepsilon^m$.

According to Lemma 4.1, in order to obtain the limit of $E_{\varepsilon}''$ we just have to prove the two-scale convergence of the derivatives of $z_{\varepsilon}$ in $P_M$. Actually we have the following general result on two-scale convergence.

**Lemma 4.1.** Let $M > f_{max}$. Let $v_{\varepsilon}$ be a bounded sequence in $L^2(P_M)$ and let $v \in L^2(P_M \times T^2)$ be a two-scale limit of $v_{\varepsilon}$ for $\varepsilon$ tending to zero such that $\varepsilon^{\beta} < d_0/M$. Let also $\phi$ be a sufficiently regular function, defined on $P_M \times T$. Then we have

$$\lim_{\varepsilon \to 0} \int_T \int_0^{f(t,t/\varepsilon)} v_{\varepsilon}\, \phi \left(s, t, \frac{t}{\varepsilon}\right) ds\, dt = \int_T \int_T \int_0^{f(t,\tau)} \int_T v\, \phi(s,t,\tau)\, dy\, ds\, d\tau\, dt.$$

*Proof.* Denote by $b(s,t,\tau) = \phi(s,t,\tau)\chi_{\{0<s<f(t,\tau)\}}$, defined on the set $P_M \times T$ and independent of $\varepsilon$. The difficulty comes from the fact that the function $b$ is not regular in $\tau$, so we cannot take it directly as a test function in the two-scale convergence. Using the change of variables $s = rf(t, t/\varepsilon)$ with $r \in [0,1]$, we infer

$$\int_T \int_0^{f(t,t/\varepsilon)} \left|\phi\left(s,t,\frac{t}{\varepsilon}\right)\right|^2 dsdt = \int_T \int_0^1 \left|\phi\left(rf\left(t,\frac{t}{\varepsilon}\right),t,\frac{t}{\varepsilon}\right)\right|^2 f\left(t,\frac{t}{\varepsilon}\right) drdt.$$

By regularity, this last integral converges, as $\varepsilon$ tends to 0, to
$$\int_T \int_0^1 \int_T |\phi(rf(t, \tau), t, \tau)|^2 f(t, \tau)\, d\tau\, dr\, dt = \int_T \int_T \int_0^{f(t, \tau)} |\phi(s, t, \tau)|^2\, ds\, d\tau\, dt.$$

We thus proved the following result:

$$\int_{P_M} \left| b \left( s, t, \frac{t}{\epsilon} \right) \right|^2 dt ds \rightarrow \int_{P_M} \int_T |b(s, t, \tau)|^2 d\tau ds dt \quad \text{for } \epsilon \rightarrow 0. \quad (24)$$

We similarly prove that for any $\phi_1$ belonging to $L^2(P_M, C(T))$ we have⁵

$$\int_{P_M} b\left(s, t, \frac{t}{\epsilon}\right) \phi_1\left(s, t, \frac{t}{\epsilon}\right) dt ds \rightarrow \int_{P_M} \int_T b(s, t, \tau) \phi_1(s, t, \tau) d\tau ds dt \quad \text{for } \epsilon \rightarrow 0. \quad (25)$$

⁵We can interpret (25) as a result of "partial" two-scale convergence of $b(s,t, \frac{t}{\epsilon})$ to $b(s,t, \tau)$. Moreover (24) says that this two-scale convergence is "strong".

By simply adapting the proof of Theorem 11 of Lukkassen et al. [15] (see also Allaire [3], Theorem 1.8) we prove that the convergences (24) and (25) imply

$$
\lim_{\epsilon \to 0} \int_{P_M} v_\epsilon b \left(s, t, \frac{t}{\epsilon}\right) ds dt = \int_{P_M} \int_{\mathbb{T}^2} v b(s, t, \tau) d\tau dy ds dt,
$$

which is the desired result.
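The averaging mechanism behind (24) and Lemma 4.1 can be checked numerically. The sketch below uses assumed toy data (a profile $f(t,\tau) = 1 + \frac{1}{2}\sin 2\pi\tau$ and a smooth integrand, not the paper's functions) and compares the integral over the oscillating strip $\{0 < s < f(t, t/\varepsilon)\}$ with its two-scale average in $\tau$.

```python
import numpy as np

# Toy illustration of the averaging identity of Lemma 4.1 (assumed data,
# not the paper's): integrals over the oscillating strip {0 < s < f(t, t/eps)}
# approach the average over the fast variable tau as eps -> 0.

def f(t, tau):
    return 1.0 + 0.5 * np.sin(2 * np.pi * tau)  # thickness profile, 1-periodic in tau

def inner(t, F):
    # closed form of int_0^F phi(s, t) ds for phi(s, t) = cos^2(2 pi t) + s
    return np.cos(2 * np.pi * t) ** 2 * F + F ** 2 / 2

def oscillating_integral(eps, n=200_000):
    t = (np.arange(n) + 0.5) / n                # midpoint rule on T = [0, 1)
    return inner(t, f(t, t / eps)).mean()

def two_scale_limit(m=512):
    t = (np.arange(m) + 0.5) / m
    tau = (np.arange(m) + 0.5) / m
    T, TAU = np.meshgrid(t, tau, indexing="ij")
    return inner(T, f(T, TAU)).mean()           # extra average in tau

gap = abs(oscillating_integral(eps=0.001) - two_scale_limit())
```

For $\varepsilon = 10^{-3}$ the two quantities agree to well below one percent, in line with (24).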
## 4.1 Two-scale convergence of $\varepsilon^{-\beta} \partial_s z_\varepsilon$ and $\partial_t z_\varepsilon$

We now prove the two-scale convergence of the derivatives of $z_\varepsilon$.
**Lemma 4.2.** Let $p \in (1, 2)$. There exist two constants $C$ and $C_p$ such that for any $M > 2$ and any $0 < \varepsilon^\beta < d_0/M$, we have

$$
i) \quad \left\| \frac{\partial z_{\varepsilon}}{\partial t} \right\|_{L^2(P_M)} + \left\| \varepsilon^{-\beta} \frac{\partial z_{\varepsilon}}{\partial s} \right\|_{L^2(P_M)} \leq C \varepsilon^{-\beta}.
$$

$$
ii) \quad \|z_{\varepsilon}\|_{L^{p}(P_{M})} \leq C_{p}\varepsilon^{-\beta/p}.
$$

*Proof.* According to Lemma 3.1 and with the help of the change of variables (14) we straightforwardly obtain (ii). For (i) we use the formula (16) with $v = w = z_\epsilon$.

By two-scale convergence there exists a subsequence of $\varepsilon$, still denoted by $\varepsilon$, and functions $\xi_k^M(s,t,\tau,y) \in L^2(P_M \times ]0,1[^2)$, $k=1,2$, such that
$$
\frac{\partial z_{\epsilon}}{\partial s} \rightarrow \xi_1^M \quad \text{in } P_M,
$$

and

$$
\varepsilon^{\beta} \frac{\partial z_{\varepsilon}}{\partial t} \rightarrow \xi_2^M \quad \text{in } P_M,
$$

where $\rightarrow$ denotes the two-scale convergence.

For $k = 1, 2$ let $\hat{\xi}_k^M(s, t, \tau) = \int_0^1 \xi_k^M(s, t, \tau, y) dy$, which are functions defined on the domain $P_M \times \mathbb{T}$. The following estimate is obvious:

$$
\exists C > 0, \forall M > 2, \quad \left\| \hat{\xi}_k^M \right\|_{L^2(P_M \times ]0,1[)} \le C, \quad k = 1,2. \tag{26}
$$

Moreover if $M_1 < M_2$ then the restriction of $\hat{\xi}_k^{M_2}$ to the set $\{|s| \le M_1\}$ is exactly $\hat{\xi}_k^{M_1}$ for $k=1,2$.
**Lemma 4.3.** For any $M > f_{max}$ the following results hold.

i) $\hat{\xi}_1^M$ is independent of $\tau$.

$$
ii) \quad \int_0^1 \hat{\xi}_2^M d\tau = 0 \quad \text{a.e. } (s,t).
$$

*Proof.* i) Consider arbitrary $\theta_1(s, t, \tau)$ and $\theta_2(s, t, \tau)$ in $\mathcal{D}(P_M \times \mathbb{T})$ such that

$$
\frac{\partial \theta_1}{\partial s} + \frac{\partial \theta_2}{\partial \tau} = 0. \tag{27}
$$

Using the two-scale convergence and also the fact that $\beta < 1$, we infer

$$
\int_{P_M} \left[ \frac{\partial z_{\epsilon}}{\partial s} \theta_1 \left( s, t, \frac{t}{\epsilon} \right) + \epsilon \frac{\partial z_{\epsilon}}{\partial t} \theta_2 \left( s, t, \frac{t}{\epsilon} \right) \right] \to \int_{P_M} \int_0^1 \hat{\xi}_1^M \theta_1, \quad \text{for } \epsilon \to 0.
$$
On the other hand, by Green's formula and according to (27) and to Lemma 4.2(ii):

$$ \int_{P_M} \left[ \frac{\partial z_\varepsilon}{\partial s} \theta_1 \left( s, t, \frac{t}{\varepsilon} \right) + \varepsilon \frac{\partial z_\varepsilon}{\partial t} \theta_2 \left( s, t, \frac{t}{\varepsilon} \right) \right] = -\varepsilon \int_{P_M} z_\varepsilon \frac{\partial \theta_2}{\partial t} \left( s, t, \frac{t}{\varepsilon} \right) \to 0, \quad \text{for } \varepsilon \to 0. $$

We then infer

$$ \int_{P_M} \int_{\mathbb{T}} \hat{\xi}_1^M \theta_1 = 0, \quad \text{for any } (\theta_1, \theta_2) \text{ satisfying (27).} $$

Using now the De Rham theorem, we deduce that the vector $(\hat{\xi}_1^M, 0)$ is a gradient in the variables $(s, \tau)$. Hence there exists a function $H$ such that

$$ \frac{\partial H}{\partial s} = \hat{\xi}_1^M \quad \text{and} \quad \frac{\partial H}{\partial \tau} = 0, $$

which proves i).

ii) From Lemma 4.2 (ii), for any $p \in [1, 2[$ and fixed $M > 0$ we have

$$ \varepsilon^{\beta} z_{\varepsilon} \to 0 \quad \text{in } L^{p}(P_{M}) - \text{strongly} \quad \text{for } \varepsilon \to 0, \qquad (28) $$

which implies

$$ \varepsilon^{\beta} \frac{\partial z_{\varepsilon}}{\partial t} \to 0 \quad \text{in} \quad D'(P_M). $$

On the other hand, from Lemma 4.2 (i) there exists $\tilde{\xi} \in L^2(P_M)$ such that, up to a subsequence of $\varepsilon$, we have

$$ \varepsilon^{\beta} \frac{\partial z_{\varepsilon}}{\partial t} \to \tilde{\xi} \quad \text{in } D'(P_M). $$

By identification we obtain

$$ \tilde{\xi} = 0. $$

Since by the two-scale theory

$$ \tilde{\xi} = \int_{0}^{1} \int_{0}^{1} \xi_{2}^{M} d\tau dy, $$

we infer the result.
Define now the space $H_{per,0}^{1}(P_M)$ by

$$ H_{per,0}^{1}(P_M) = \{ \varphi \in H^1(P_M), \varphi|_{|s|=M} = 0 \}, $$

and let

$$ D_0 = [0, 2] \times \mathbb{T} \times \mathbb{T} \quad \text{and} \quad D = \{(s, t, \tau) \in D_0, 0 \le s \le f(t, \tau)\}. $$

The next lemma shows that $\hat{\xi}_1^M$ is independent of $M$, for $0 \le s \le 2$.

**Lemma 4.4.** For any $M > f_{max}$,

$$ \hat{\xi}_1^M = \frac{(\sigma_0 - \sigma_m)q}{\sigma_m q + \sigma_0(1-q)} \frac{\partial u^{+}}{\partial \nu}, \quad \text{for } 0 \le s \le 2, $$

where $\sigma_0$, $\sigma_m$ and $\frac{\partial u^{+}}{\partial \nu}$ are evaluated at $x = \gamma(t)$ and $q$ is defined by (6).
*Proof.* We take as test function in (3) an element $\varphi \in H_0^1(\Omega)$ with support in $\alpha_\varepsilon(P_M)$. Using the local coordinates $(s, t)$ and (16) we infer

$$
\begin{aligned}
& \varepsilon^\beta \int_0^1 \int_{-M}^M (1 + \varepsilon^\beta s \kappa) \sigma_\varepsilon(\alpha_\varepsilon) \left( \frac{1}{\varepsilon^{2\beta}} \partial_s z_\varepsilon \partial_s \varphi + \frac{1}{(1 + \varepsilon^\beta s \kappa)^2} \partial_t z_\varepsilon \partial_t \varphi \right) ds dt \\
& = (\sigma_0 - \sigma_m) \int_0^1 \int_0^{f(t,t/\varepsilon)} (1 + \varepsilon^\beta s \kappa) (\nabla_{s,t} \varphi)^T B_\varepsilon \nabla_x u(\alpha_\varepsilon) ds dt.
\end{aligned}
\tag{29}
$$

Take in the above equality a test function $\varphi(s,t)$ which is an element of $H_{per,0}^1(P_M)$ and multiply by $\varepsilon^\beta$. Observe that $J_0^T\nabla_x u(\gamma) = (\frac{\partial u}{\partial\nu}(\gamma), \frac{\partial u}{\partial\theta}(\gamma))^T$, hence

$$
\lim_{\epsilon \to 0} \left[ \int_0^1 \int_{-M}^0 \sigma_1 \frac{\partial z_\epsilon}{\partial s} \frac{\partial \varphi}{\partial s} + \int_0^1 \int_0^{f(t,t/\epsilon)} \sigma_m \frac{\partial z_\epsilon}{\partial s} \frac{\partial \varphi}{\partial s} + \int_0^1 \int_{f(t,t/\epsilon)}^M \sigma_0 \frac{\partial z_\epsilon}{\partial s} \frac{\partial \varphi}{\partial s} \right] = (\sigma_0 - \sigma_m) \lim_{\epsilon \to 0} \int_0^1 \int_0^{f(t,t/\epsilon)} \frac{\partial \varphi}{\partial s} \frac{\partial u^{+}}{\partial \nu}(\gamma(t)).
$$

According to Lemma 4.1 with $v_\epsilon = \frac{\partial z_\epsilon}{\partial s}$ and $\Phi$ chosen appropriately (for example, for the second integral we take $\Phi(s,t,\tau) = \sigma_m \frac{\partial \varphi}{\partial s}(s,t)$), we infer
$$
\begin{split}
& \int_0^1 \int_0^1 \int_{-M}^0 \sigma_1 \hat{\xi}_1^M \frac{\partial \varphi}{\partial s} ds + \int_D \sigma_m \hat{\xi}_1^M \frac{\partial \varphi}{\partial s} ds + \int_0^1 \int_0^1 \int_{f(t,\tau)}^M \sigma_0 \hat{\xi}_1^M \frac{\partial \varphi}{\partial s} ds \\
& = (\sigma_0 - \sigma_m) \int_D \frac{\partial \varphi}{\partial s} \frac{\partial u^+}{\partial \nu}.
\end{split}
\tag{30}
$$

Let $\varphi$ be arbitrary such that $\varphi = 0$ for $s \le f_{max}$. We deduce that $\hat{\xi}_1^M$ is independent of $s$ for $s \ge f_{max}$. On the other hand, according to (26), the $L^2$-norm of $\hat{\xi}_1^M$ is uniformly bounded in $M$; hence, letting $M$ tend to infinity,

$$
\hat{\xi}_1^M = 0, \quad \text{for } s \geq f_{\text{max}}. \tag{31}
$$

Now choose $\varphi \in H_{per,0}^1(P_M)$ arbitrary such that $\varphi = 0$ for $s \le 0$ or $s \ge 2$. Integrating (30) first in $\tau$ and using the independence of $\hat{\xi}_1^M$ of $\tau$, we obtain

$$
\int_{\mathbb{T}} \int_0^2 [\sigma_m q + \sigma_0 (1-q)] \hat{\xi}_1^M \frac{\partial \varphi}{\partial s} ds dt = \int_{\mathbb{T}} \int_0^2 (\sigma_0 - \sigma_m) \frac{\partial u^+}{\partial \nu} q \frac{\partial \varphi}{\partial s} ds dt,
$$

which gives

$$
\frac{\partial}{\partial s}[(\sigma_m q + \sigma_0(1-q))\hat{\xi}_1^M] = \frac{\partial}{\partial s}\left[(\sigma_0 - \sigma_m)\frac{\partial u^+}{\partial \nu}q\right], \quad \text{for } 0 \le s \le 2.
$$

Taking into account (31) we obtain the result.
The next lemma gives useful information about $\hat{\xi}_2^M$.

**Lemma 4.5.** For any $M > f_{max}$ and any function $d \in C(\mathbb{T})$ we have

$$
\int_{\mathbb{T}} \int_{\mathbb{T}} \int_0^{f(t,\tau)} d(t)\, \hat{\xi}_2^M\, ds\, d\tau\, dt = (\sigma_0 - \sigma_m) \int_{\mathbb{T}} \frac{\partial u}{\partial \theta} d(t)\, r_2(t)\, dt,
$$

where $r_2$ is defined by (9).
*Proof.* In (29) we take a test function $\varphi$ of the form $\varphi(s,t) = \Phi(s, t, \frac{t}{\varepsilon})$, where $\Phi$ is a sufficiently regular function defined on $]-M, M[\times\mathbb{T}^2$. Multiplying (29) by $\varepsilon$ we obtain

$$
\begin{aligned}
& \lim_{\varepsilon \to 0} \left[ \int_0^1 \int_{-M}^0 \sigma_1 \varepsilon^\beta \frac{\partial z_\varepsilon}{\partial t} \frac{\partial \Phi}{\partial \tau} \left( s, t, \frac{t}{\varepsilon} \right) + \int_0^1 \int_0^{f(t,t/\varepsilon)} \sigma_m \varepsilon^\beta \frac{\partial z_\varepsilon}{\partial t} \frac{\partial \Phi}{\partial \tau} \left( s, t, \frac{t}{\varepsilon} \right) + \right. \\
& \qquad \left. \int_0^1 \int_{f(t,t/\varepsilon)}^M \sigma_0 \varepsilon^\beta \frac{\partial z_\varepsilon}{\partial t} \frac{\partial \Phi}{\partial \tau} \left( s, t, \frac{t}{\varepsilon} \right) \right] = (\sigma_0 - \sigma_m) \lim_{\varepsilon \to 0} \int_0^1 \int_0^{f(t,t/\varepsilon)} \frac{\partial u}{\partial \theta}(\gamma) \frac{\partial \Phi}{\partial \tau} \left( s, t, \frac{t}{\varepsilon} \right).
\end{aligned}
$$

Passing to the limit and using again Lemma 4.1 we obtain

$$
\begin{aligned}
& \int_0^1 \int_0^1 \int_{-M}^0 \sigma_1 \hat{\xi}_2^M \frac{\partial \Phi}{\partial \tau} + \int_D \sigma_m \hat{\xi}_2^M \frac{\partial \Phi}{\partial \tau} + \int_0^1 \int_0^1 \int_{f(t,\tau)}^M \sigma_0 \hat{\xi}_2^M \frac{\partial \Phi}{\partial \tau} = \\
& \qquad \qquad \qquad \qquad \qquad \qquad \int_D (\sigma_0 - \sigma_m) \frac{\partial u}{\partial \theta} \frac{\partial \Phi}{\partial \tau}.
\end{aligned}
\tag{32}
$$

By a density argument, this equation is also valid for $\Phi$ not regular in $(s,t)$ but with $H^1$-regularity in $\tau$.

Taking first $\Phi$ arbitrary such that $\Phi = 0$ for $s \ge 0$, we deduce that $\hat{\xi}_2^M$ is independent of $\tau$ for $s \le 0$. With the help of Lemma 4.3(ii) we obtain
$$
\hat{\xi}_2^M = 0, \quad \text{for } s \le 0.
\tag{33}
$$

We similarly obtain

$$
\hat{\xi}_2^M = 0, \quad \text{for } s \ge f_{max}.
\tag{34}
$$

Let $\Phi$ be a test function such that

$$
\begin{align*}
& \sigma_m \frac{\partial \Phi}{\partial \tau} = d(t) + c(s,t) && \text{on } D, \\
& \sigma_0 \frac{\partial \Phi}{\partial \tau} = c(s,t) && \text{on } D_0 \setminus D,
\end{align*}
$$

where $c(s,t)$ must be chosen such that $\int_0^1 \frac{\partial \Phi}{\partial \tau} d\tau = 0$ in order to have the periodicity in $\tau$. Obviously, the function $\Phi$ given on $D_0$ by $\Phi(s,t,\tau) = \int_0^\tau \varphi_1(s,t,\tau') d\tau'$, where

$$
\varphi_1 =
\begin{cases}
\dfrac{d(t)}{\sigma_m} + \dfrac{c(s,t)}{\sigma_m} & \text{on } D, \\
\dfrac{c(s,t)}{\sigma_0} & \text{on } D_0 \setminus D,
\end{cases}
$$

with

$$ c(s,t) = -\frac{d(t)\,\sigma_0 q}{\sigma_0 q + \sigma_m(1-q)},
\tag{35} $$

satisfies the required conditions. We then extend $\Phi$ to $s < 0$ and $s > 2$ such that $\Phi = 0$ on $s = \pm M$.

Taking this $\Phi$ as a test function in (32) and according to (33)-(34) we infer:

$$
\int_D d(t) \hat{\xi}_2^M + \int_{D_0} c(s,t) \hat{\xi}_2^M = \int_D (\sigma_0 - \sigma_m) \frac{\partial u}{\partial \theta} \frac{d+c}{\sigma_m}.
\tag{36}
$$

From Lemma 4.3 (ii) the second integral of this equality is equal to 0, which gives the result, according to (35). □
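As a quick symbolic sanity check (illustrative, not part of the paper), one can confirm that the choice (35) of $c$ annihilates the $\tau$-mean of $\varphi_1$: on the slice $(s,t)$ the set $D$ has $\tau$-measure $q$, so the mean is $q(d+c)/\sigma_m + (1-q)c/\sigma_0$.

```python
import sympy as sp

# Symbolic sanity check (illustrative, not the paper's code): the constant
# c(s, t) from (35) makes the tau-average of phi_1 vanish, so Phi is
# tau-periodic. Here q is the tau-measure of the set {f(t, .) > s}.
d, q, sm, s0 = sp.symbols("d q sigma_m sigma_0", positive=True)

c = -d * s0 * q / (s0 * q + sm * (1 - q))          # formula (35)
mean_phi1 = q * (d + c) / sm + (1 - q) * c / s0    # int_0^1 phi_1 d(tau)
print(sp.simplify(mean_phi1))                      # vanishes identically
```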
## 4.2 Proofs of Theorem 2.3 and Theorem 2.7

We now complete the proofs of our main results.

### 4.2.a Proof of Theorem 2.3

To prove Theorem 2.3 it remains to compute the limit of $E_{\varepsilon}''$. Using local coordinates $(s, t)$, $E_{\varepsilon}''$ equals

$$E_{\varepsilon}'' = \int_{T} \int_{0}^{f(t,t/\varepsilon)} (\nabla_x \varphi)^T(\alpha_{\varepsilon})(B_{\varepsilon})^T \nabla_{s,t} z_{\varepsilon} \det(A_{\varepsilon}) \, ds \, dt.$$

Using the regularity of $\sigma_0, \sigma_m$ and $\varphi$ we infer

$$\lim_{\varepsilon \to 0} E_{\varepsilon}'' = \lim_{\varepsilon \to 0} \int_{T} \int_{0}^{f(t,t/\varepsilon)} (\nabla_x \varphi^+)^T(\gamma)\, J_0 \begin{pmatrix} \partial_s z_{\varepsilon} \\ \varepsilon^{\beta} \partial_t z_{\varepsilon} \end{pmatrix} ds\, dt.$$
Using now Lemma 4.1 we obtain

$$\lim_{\varepsilon \to 0} E_{\varepsilon}'' = \int_D \frac{\partial \varphi}{\partial \theta}(\gamma) \hat{\xi}_2^M + \int_D \frac{\partial \varphi^+}{\partial \nu}(\gamma) \hat{\xi}_1^M.$$

From Lemma 4.5 with $d(t) = \frac{\partial \varphi}{\partial \theta}(\gamma(t))$, we deduce

$$\int_D \frac{\partial \varphi}{\partial \theta}(\gamma) \hat{\xi}_2^M = (\sigma_0 - \sigma_m) \int_T \frac{\partial u}{\partial \theta}(\gamma) \frac{\partial \varphi}{\partial \theta}(\gamma) r_2(t) dt.$$

The expression of $\hat{\xi}_1^M$ given in Lemma 4.4 leads to

$$\int_D \frac{\partial \varphi^+}{\partial \nu}(\gamma) \hat{\xi}_1^M = (\sigma_0 - \sigma_m) \int_T \frac{\partial u^+}{\partial \nu}(\gamma) \frac{\partial \varphi^+}{\partial \nu}(\gamma) r_1(t) dt,$$

and these last three equalities give

$$\lim_{\varepsilon \to 0} E_{\varepsilon}'' = (\sigma_0 - \sigma_m) \int_{\Gamma} \left( \frac{\partial u^+}{\partial \nu} \frac{\partial \varphi^+}{\partial \nu} r_1(t) + \frac{\partial u}{\partial \theta} \frac{\partial \varphi}{\partial \theta} r_2(t) \right) d\Gamma. \quad (37)$$

Inserting (37) into (23) leads to equality (10) of Theorem 2.3.

### 4.2.b Proof of Theorem 2.7

Let us show that, far away from the thin layer, the sequence $z_\varepsilon$ is bounded in $H^1$. Then, using a compactness argument, we infer that $z$ is the strong limit of $z_\varepsilon$ in $L^s$ for all $s \ge 1$, which is exactly Theorem 2.7.
**Lemma 4.6.** Let $D$ be an open set such that $\Gamma \subset D$ and $\bar{D} \subset \Omega$. Then there exist two positive constants $\varepsilon_0$ and $c$ depending on $D$ such that, for any $\varepsilon \in ]0, \varepsilon_0[$ we have

$$\|z_{\varepsilon}\|_{H^1(\Omega\setminus D)} \leq c.$$

*Proof.* We proceed as in [9]. We introduce the linear operator $\mathcal{R}: H^1(\Omega \setminus D) \to H^1(D)$ given by $\mathcal{R}(\psi) = \varphi$ iff $\varphi$ is the unique solution of the problem

$$ \begin{cases} -\nabla \cdot (\sigma \nabla \varphi) = 0 & \text{in } D \\ \varphi = \psi & \text{on } \partial D. \end{cases} \qquad (38) $$

It is clear, by interior regularity, that for any open set $D_1$ with $\bar{D}_1 \subset D$ there exists a positive constant $c_1$ depending on $D_1$ such that

$$ \|\mathcal{R}(\psi)\|_{W^{1,\infty}(D_1)} \le c_1 \|\psi\|_{H^1(\Omega\setminus D)}, \quad \forall \psi \in H^1(\Omega \setminus D). \quad (39) $$

We now introduce the function $\varphi_\varepsilon$ defined in $\Omega$ by

$$ \varphi_\varepsilon = \begin{cases} z_\varepsilon & \text{in } \Omega \setminus D \\ \mathcal{R}(z_\varepsilon) & \text{in } D. \end{cases} \qquad (40) $$

It is clear that $\varphi_\varepsilon \in H_0^1(\Omega)$, so we can take it as a test function in the variational formulation (4). We obtain

$$ \int_{\Omega} \sigma \nabla z_{\varepsilon} \cdot \nabla \varphi_{\varepsilon} = - \int_{\Omega_{\varepsilon}^m} (\sigma^{\varepsilon} - \sigma) \nabla z_{\varepsilon} \cdot \nabla \mathcal{R}(z_{\varepsilon}) - \frac{1}{\varepsilon^{\beta}} \int_{\Omega_{\varepsilon}^m} (\sigma^{\varepsilon} - \sigma) \nabla u \cdot \nabla \mathcal{R}(z_{\varepsilon}). \quad (41) $$

On the other hand, taking $\mathcal{R}(z_\varepsilon) - z_\varepsilon \in H_0^1(D)$ as a test function in (38) with $\psi = z_\varepsilon$, we obtain

$$ \int_D \sigma |\nabla \mathcal{R}(z_\epsilon)|^2 dx = \int_D \sigma \nabla z_\epsilon \cdot \nabla \mathcal{R}(z_\epsilon), $$

so the left-hand side of (41) becomes

$$ \int_{\Omega\setminus D} \sigma |\nabla z_{\epsilon}|^2 dx + \int_D \sigma |\nabla \mathcal{R}(z_{\epsilon})|^2 dx. $$

Now using (i) of Lemma 3.1 and the inequality (39), we easily control the terms on the right-hand side of (41), and with the help of the Poincaré inequality on $\Omega \setminus D$ we obtain the desired result. $\square$
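Since $\mathcal{R}(\psi)$ is the $\sigma$-harmonic extension of the boundary data, its behaviour is easy to visualize in a model case. The sketch below is an assumed setting (constant $\sigma$ on the unit square with a plain five-point Jacobi iteration, not the paper's domain $D$): because $xy$ is harmonic, the discrete extension reproduces it, and the computed values obey the discrete maximum principle.

```python
import numpy as np

# Illustrative sketch of the extension operator R of Lemma 4.6 (assumed
# setting: constant sigma on the unit square, so R is the classical harmonic
# extension). Five-point Jacobi iteration, Dirichlet trace kept fixed.

def harmonic_extension(boundary, n=33, sweeps=6000):
    xs = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u = boundary(X, Y)                      # boundary rows/columns hold the data
    u[1:-1, 1:-1] = 0.0                     # start from zero in the interior
    for _ in range(sweeps):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2])
    return u, X, Y

u, X, Y = harmonic_extension(lambda x, y: x * y)
err = np.abs(u - X * Y).max()               # x*y is harmonic: R reproduces it
```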
# 5 Conclusion

In this paper, we have derived appropriate transmission conditions to tackle the numerical difficulties inherent in the geometry of a very rough thin layer. These transmission conditions lead to an explicit characterization of the polarization tensor of Vogelius and Capdeboscq [10]. More precisely, suppose that $\sigma_0 = \sigma_1$ and denote by $G(x,y)$ the Dirichlet solution for the Laplace operator defined in [5], p. 33, by

$$ \left\{ \begin{array}{ll} \nabla_x \cdot (\sigma_0(x) \nabla_x G(x, y)) = -\delta_y, & \text{in } \Omega \\ G(x, y) = 0, & \text{for } x \in \partial\Omega. \end{array} \right. $$

According to Theorem 2.7, the following equality holds almost everywhere on $\partial\Omega$:

$$ (u_\varepsilon - u)(y) = \varepsilon^\beta \int_\Omega \Delta_x G(x, y) z(x) dx + o(\varepsilon^\beta), \quad y \in \partial\Omega. $$
According to (11), simple calculations lead, for almost every $y \in \partial\Omega$, to

$$ (u_\varepsilon - u)(y) = \varepsilon^\beta \int_\Gamma (\sigma_m - \sigma_0)\, M(s) \begin{pmatrix} \partial_n u \\ \nabla_\Gamma u \end{pmatrix} \cdot \begin{pmatrix} \partial_n G \\ \nabla_\Gamma G \end{pmatrix} (s, y)\, d\sigma_\Gamma(s) + o(\varepsilon^\beta), $$

where $M$ is the polarization tensor defined by

$$ \forall s \in \Gamma, \quad M(s) = \begin{pmatrix} \tilde{f} + (\sigma_0 - \sigma_m)r_1 & 0 \\ 0 & \tilde{f} + (\sigma_0 - \sigma_m)r_2 \end{pmatrix}. $$

Observe that if $f$ is constant, then $M(s) = \begin{pmatrix} \sigma_0/\sigma_m & 0 \\ 0 & 1 \end{pmatrix}$, which is the polarization tensor given by Beretta et al. [6, 7].

One of the main features of our result is the following. Unlike the case of the weakly oscillating thin membrane (see [16]), if the quasi $\varepsilon$-period of the oscillations of the rough layer is fast compared to its thickness, then the influence of the layer on the steady-state potential may not be approximated by only considering the mean effect of the rough layer.

Actually, if we were to consider the mean effect of the roughness, the approximate transmission conditions would be those presented in (13), with $\tilde{f}$ the average defined in (7). Observe that our transmission conditions (11) are different, since they involve the parameters $r_1$ and $r_2$ quantifying the roughness of $\Omega_\varepsilon^m$. More precisely, denote by $\tilde{z}$ the correction which only takes into account the mean effect of the layer. Then according to (13), $\tilde{z}$ satisfies (for simplicity, we consider the $\varepsilon$-periodic case):
$$
\begin{align*}
\nabla \cdot (\sigma_k \nabla \tilde{z}) &= 0 && \text{in } \Omega^k, \quad k=0,1, \\
\tilde{z}^+ - \tilde{z}^- &= \left( \frac{\sigma_0}{\sigma_m} - 1 \right) \tilde{f} \frac{\partial u^+}{\partial \nu} && \text{on } \Gamma, \\
\sigma_0 \frac{\partial \tilde{z}^+}{\partial \nu} - \sigma_1 \frac{\partial \tilde{z}^-}{\partial \nu} &= \tilde{f}(\sigma_0 - \sigma_m) \frac{\partial^2 u}{\partial \theta^2} && \text{on } \Gamma, \\
\tilde{z}|_{\partial\Omega} &= 0.
\end{align*}
$$

To illustrate this assertion, we conclude the paper with numerical simulations obtained using the mesh generator *Gmsh* [13] and the finite element library *Getfem++* [18].

The computational domain $\Omega$ is delimited by the circles of radius 2 and of radius 0.2 centered at 0, while $\Omega^1$ is the intersection of $\Omega$ with the concentric disk of radius 1. The rough layer is described by $f(y) = 1 + \frac{1}{2}\sin(y)$ and we choose $\beta = 1/2$. One period of the domain is shown in Fig. 2(a). The Dirichlet boundary data is identically 1 on the outer circle and 0 on the inner circle. The conductivities $\sigma_0$, $\sigma_1$ and $\sigma_m$ are respectively equal to 1, 1 and 0.1. The computed coefficients quantifying the roughness are $r_1 = 5.87$ and $r_2 = 0.413$ (three significant digits are kept).

The numerical convergence rates, in both the $H^1$- and the $L^2$-norms in $\Omega^1$, of the three following errors $u_\varepsilon - u$, $u_\varepsilon - u - \varepsilon^\beta z$ and $u_\varepsilon - u - \varepsilon^\beta \tilde{z}$ as $\varepsilon$ goes to zero are given in Fig. 3 for$^6$ $\beta = 1/2$. The numerical convergence rates with the thickness of the layer are comparable between the $H^1$- and the $L^2$-norms.
⁶The same numerical simulations have been performed for several values of $\beta < 1$. All the results are very similar, hence we only show here the case $\beta = 1/2$.

Figure 2: Representation of one period of the domain and the corresponding errors with approximate solutions $u$ and $u + \varepsilon^{\beta}z$, for $\varepsilon = 2\pi/60$. The error inside the rough layer should be disregarded, since a proper reconstruction of the solution there is not currently implemented.

Observe that they are also similar to the rates shown in [17, 16] and in [11], respectively for the case of constant thickness and for the case of periodic roughness. More precisely, they are close to 1 for $u_\varepsilon - u$ and for $u_\varepsilon - (u+\varepsilon^\beta \tilde{z})$, whereas the convergence rate is close to 2 for $u_\varepsilon - (u + \varepsilon^\beta z)$. Therefore, according to these numerical simulations, the convergence of $z_\varepsilon$ to $z$ seems to hold strongly in $H^1$ far from the layer, even if our method does not yield such a result: another analysis should be performed.
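The rates quoted above can be extracted from error-versus-$\varepsilon$ data by a log-log fit. The sketch below uses synthetic error values (stand-ins, not the measurements of Fig. 3) to show the mechanics: for an error behaving like $C\varepsilon^p$, the fitted slope recovers $p$.

```python
import numpy as np

# Hedged sketch of convergence-rate extraction (synthetic data, not the
# paper's measurements): for err ~ C * eps**p, the log-log slope gives p.

def fitted_rate(eps, errors):
    slope, _intercept = np.polyfit(np.log(eps), np.log(errors), 1)
    return slope

eps = np.array([0.2, 0.1, 0.05, 0.025])
err_first = 3.0 * eps            # mimics u_eps - u             (rate close to 1)
err_second = 3.0 * eps ** 2      # mimics u_eps - (u + eps^b z) (rate close to 2)

rates = fitted_rate(eps, err_first), fitted_rate(eps, err_second)
```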
To conclude, Fig. 4 demonstrates that the convergence rate decreases dramatically for $\beta = 1$. This is in accordance with the theory, since the approximate transmission conditions for $\beta = 1$ given in [11, 12] are very different from the conditions proved in the present paper.

## References
[1] T. Abboud and H. Ammari. Diffraction at a curved grating: TM and TE cases, homogenization. *J. Math. Anal. Appl.*, 202(3):995–1026, 1996.

[2] Y. Achdou and O. Pironneau. Domain decomposition and wall laws. *C. R. Acad. Sci. Paris Sér. I Math.*, 320(5):541–547, 1995.

[3] G. Allaire. Homogenization and two-scale convergence. *SIAM J. Math. Anal.*, 23(6):1482–1518, 1992.

[4] G. Allaire and M. Amar. Boundary layer tails in periodic homogenization. *ESAIM Control Optim. Calc. Var.*, 4:209–243 (electronic), 1999.

[5] H. Ammari and H. Kang. Reconstruction of conductivity inhomogeneities of small diameter via boundary measurements. In *Inverse problems and spectral theory*, volume 348 of *Contemp. Math.*, pages 23–32. Amer. Math. Soc., Providence, RI, 2004.

[6] E. Beretta and E. Francini. Asymptotic formulas for perturbations in the electromagnetic fields due to the presence of thin inhomogeneities. In *Inverse problems: theory and applications (Cortona/Pisa, 2002)*, volume 333 of *Contemp. Math.*, pages 49–62. Amer. Math. Soc., Providence, RI, 2003.

[7] E. Beretta, E. Francini, and M. S. Vogelius. Asymptotic formulas for steady state voltage potentials in the presence of thin inhomogeneities. A rigorous error analysis. *J. Math. Pures Appl.* (9), 82(10):1277–1301, 2003.

[8] E. Beretta, A. Mukherjee, and M. S. Vogelius. Asymptotic formulas for steady state voltage potentials in the presence of conductivity imperfections of small area. *Z. Angew. Math. Phys.*, 52(4):543–572, 2001.

[9] G.C. Buscaglia, I.S. Ciuperca, and M. Jai. Topological asymptotic expansions for the generalized Poisson problem with small inclusions and applications in lubrication. *Inverse Problems*, 23(2):695–711, 2007.

[10] Y. Capdeboscq and M. S. Vogelius. A general representation formula for boundary voltage perturbations caused by internal conductivity inhomogeneities of low volume fraction. *M2AN Math. Model. Numer. Anal.*, 37(1):159–173, 2003.
Figure 3: Error in the cytoplasm vs $\epsilon^{\beta}$ for three approximate solutions. We choose $\beta = 1/2$.

Figure 4: $L^2$-error in the cytoplasm vs $\epsilon$ for four approximate solutions.

[11] I.S. Ciuperca, M. Jai, and C. Poignard. Approximate transmission conditions through a rough thin layer. The case of the periodic roughness. To appear in European Journal of Applied Mathematics. Research report INRIA RR-6812. http://hal.inria.fr/inria-00356124/fr/.

[12] I.S. Ciuperca, R. Perrussel, and C. Poignard. Influence of a Rough Thin Layer on the Steady-state Potential. Research report INRIA RR-6812. http://hal.inria.fr/inria-00384198/fr/.

[13] C. Geuzaine and J. F. Remacle. Gmsh mesh generator. http://geuz.org/gmsh/.

[14] W. Jäger, A. Mikelić, and N. Neuss. Asymptotic analysis of the laminar viscous flow over a porous bed. *SIAM J. Sci. Comput.*, 22(6):2006–2028 (electronic), 2000.

[15] D. Lukkassen, G. Nguetseng, and P. Wall. Two-scale convergence. *Int. J. Pure Appl. Math.*, 2(1):35–86, 2002.

[16] C. Poignard. Approximate transmission conditions through a weakly oscillating thin layer. *Math. Meth. App. Sci.*, 32:435–453, 2009.

[17] C. Poignard, P. Dular, R. Perrussel, L. Krähenbühl, L. Nicolas, and M. Schatzman. Approximate conditions replacing thin layer. *IEEE Trans. on Mag.*, 44(6):1154–1157, 2008.

[18] Y. Renard and J. Pommier. Getfem finite element library. http://home.gna.org/getfem/.
|
| 915 |
+
---PAGE_BREAK---
|
| 916 |
+
|
| 917 |
+
Centre de recherche INRIA Bordeaux – Sud Ouest
|
| 918 |
+
Domaine Universitaire - 351, cours de la Libération - 33405 Talence Cedex (France)
|
| 919 |
+
|
| 920 |
+
Centre de recherche INRIA Grenoble – Rhône-Alpes : 655, avenue de l'Europe - 38334 Montbonnot Saint-Ismier
|
| 921 |
+
|
| 922 |
+
Centre de recherche INRIA Lille – Nord Europe : Parc Scientifique de la Haute Borne - 40, avenue Halley - 59650 Villeneuve d'Ascq
|
| 923 |
+
|
| 924 |
+
Centre de recherche INRIA Nancy – Grand Est : LORIA, Technopôle de Nancy-Brabois - Campus scientifique
|
| 925 |
+
615, rue du Jardin Botanique - BP 101 - 54602 Villers-lès-Nancy Cedex
|
| 926 |
+
|
| 927 |
+
Centre de recherche INRIA Paris – Rocquencourt : Domaine de Voluceau - Rocquencourt - BP 105 - 78153 Le Chesnay Cedex
|
| 928 |
+
|
| 929 |
+
Centre de recherche INRIA Rennes – Bretagne Atlantique : IRISA, Campus universitaire de Beaulieu - 35042 Rennes Cedex
|
| 930 |
+
|
| 931 |
+
Centre de recherche INRIA Saclay – Île-de-France : Parc Orsay Université - ZAC des Vignes : 4, rue Jacques Monod - 91893 Orsay Cedex
|
| 932 |
+
|
| 933 |
+
Centre de recherche INRIA Sophia Antipolis – Méditerranée : 2004, route des Lucioles - BP 93 - 06902 Sophia Antipolis Cedex
samples/texts_merged/901380.md
ADDED
---PAGE_BREAK---

Graphs of Polynomial Functions

For each function: (1) determine the real zeros and state the multiplicity of any repeated zeros, (2) list the x-intercepts where the graph crosses the x-axis and those where it does not cross the x-axis, and (3) sketch the graph.

1) $f(x) = -x^3$

2) $f(x) = 2x^3 - 3x^2$

3) $f(x) = x^4 + x^3 - 4x^2 - 4x$

4) $f(x) = x^4 + x^3$

---PAGE_BREAK---

5) $f(x) = -x^3 + 6x^2 - 12x + 8$

6) $f(x) = x^3 - 2x^2$

**Describe the end behavior of each function.**

7) $f(x) = -x^5 + 2x^3 - x + 1$

8) $f(x) = 2x^2 - 4x - 3$

9) $f(x) = x^4 - 2x^2 - x + 1$

10) $f(x) = -x^3 - 9x^2 - 24x - 20$

11) $f(x) = -x^5 + 3x^3 + 1$

12) $f(x) = x^2 + 6x + 6$

**Critical thinking questions:**

13) Write a polynomial function $f$ with the following properties:

(a) Zeros at 1, 2, and 3

(b) $f(x) \leq 0$ for all values of $x$

(c) Degree greater than 1

14) Write a polynomial function $g$ with degree greater than one that passes through the points (0, 1), (1, 1), and (2, 1).

---PAGE_BREAK---

Graphs of Polynomial Functions

For each function: (1) determine the real zeros and state the multiplicity of any repeated zeros, (2) list the x-intercepts where the graph crosses the x-axis and those where it does not cross the x-axis, and (3) sketch the graph.

1) $f(x) = -x^3$

Real zeros: {0 mult. 3}
x-int, crosses: 0
x-int, doesn't cross: None

2) $f(x) = 2x^3 - 3x^2$

Real zeros: {0 mult. 2, $\frac{3}{2}$}
x-int, crosses: $\frac{3}{2}$
x-int, doesn't cross: 0

3) $f(x) = x^4 + x^3 - 4x^2 - 4x$

Real zeros: {0, 2, -2, -1}
x-int, crosses: 0, 2, -2, -1
x-int, doesn't cross: None

4) $f(x) = x^4 + x^3$

Real zeros: {0 mult. 3, -1}
x-int, crosses: 0, -1
x-int, doesn't cross: None

---PAGE_BREAK---

5) $f(x) = -x^3 + 6x^2 - 12x + 8$

Real zeros: {2 mult. 3}
x-int, crosses: 2
x-int, doesn't cross: None

6) $f(x) = x^3 - 2x^2$

Real zeros: {0 mult. 2, 2}
x-int, crosses: 2
x-int, doesn't cross: 0

**Describe the end behavior of each function.**

7) $f(x) = -x^{5} + 2x^{3} - x + 1$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = -\infty$

8) $f(x) = 2x^2 - 4x - 3$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = \infty$

9) $f(x) = x^4 - 2x^2 - x + 1$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = \infty$

10) $f(x) = -x^3 - 9x^2 - 24x - 20$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = -\infty$

11) $f(x) = -x^{5} + 3x^{3} + 1$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = -\infty$

12) $f(x) = x^2 + 6x + 6$

$\lim_{x \to -\infty} f(x) = \infty$

$\lim_{x \to \infty} f(x) = \infty$

**Critical thinking questions:**

13) Write a polynomial function $f$ with the following properties:

(a) Zeros at 1, 2, and 3

(b) $f(x) \leq 0$ for all values of $x$

(c) Degree greater than 1

$f(x) = -(x-1)^2 \cdot (x-2)^2 \cdot (x-3)^2$

14) Write a polynomial function $g$ with degree greater than one that passes through the points (0, 1), (1, 1), and (2, 1).

$g(x) = x(x-1)(x-2) + 1$
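The answers to the two critical-thinking problems can be checked mechanically. A brief Python sketch (the names `f` and `g` simply label the answers to problems 13 and 14; they are not part of the worksheet):

```python
def f(x):
    # Problem 13 answer: -(x-1)^2 (x-2)^2 (x-3)^2
    # A product of squares is >= 0, so its negative is <= 0 everywhere,
    # with double (non-crossing) zeros at 1, 2, and 3.
    return -((x - 1) ** 2) * (x - 2) ** 2 * (x - 3) ** 2

def g(x):
    # Problem 14 answer: x(x-1)(x-2) + 1
    # The cubic part vanishes at x = 0, 1, 2, so g takes the value 1 there.
    return x * (x - 1) * (x - 2) + 1
```

Evaluating `f` at 1, 2, 3 gives 0, and `g` at 0, 1, 2 gives 1, confirming the stated properties.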
samples/texts_merged/93120.md
ADDED
---PAGE_BREAK---

Entanglement-enhanced quantum rectification

Kasper Poulsen¹,*, Alan C. Santos², Lasse B. Kristensen¹ and Nikolaj T. Zinner¹,³

¹Department of Physics and Astronomy, Aarhus University, Ny munkegade 120, 8000 Aarhus C, Denmark

²Departamento de Física, Universidade Federal de São Carlos, Rodovia Washington Luís, km 235 - SP-310, 13565-905 São Carlos, SP, Brazil

³Aarhus Institute of Advanced Studies, Aarhus University, Høegh-Guldbergs Gade 6B, 8000 Aarhus C, Denmark

Quantum mechanics dictates the band structure of materials that is essential for functional electronic components. With increased miniaturization of devices it becomes possible to exploit the full potential of quantum mechanics through the principles of superposition and entanglement. We propose a new class of quantum rectifiers that can leverage entanglement to dramatically increase performance by coupling two small spin chains through an effective double-slit interface. Simulations show that rectification is enhanced by several orders of magnitude even in small systems and should be realizable using several of the quantum technology platforms currently available.

Classical electronic components such as transistors and diodes are based on the band structure of materials [1], and their integration into circuits and chips constitutes the first quantum revolution. Presently, increased miniaturization requires us to deal with the quantum nature of the information carriers themselves. This is particularly important as we push towards the new paradigm of quantum computing, and a new toolbox of quantum components needs to be developed.

Transport properties are among the most basic and essential features of versatile components, and it is hoped that not only charge but also magnetic (spin) [2, 3] and thermal (phonon) [4–7] currents can be leveraged in future technologies. A key component is a current rectifier, well known in electronics as the diode, which features an asymmetry in its forward and reverse transport ability. Important steps towards high-quality acoustic [8–11] and thermal diodes [12–17] have been reported recently. A particularly promising platform for rectification is quantum spin chains coupled to thermal baths [18–20], which can realize components like minimal motors [21], thermal transistors [22], thermal diodes [23–26], and spin current diodes [27–29]. The common mechanism of most of these components is a mismatch of energetics in the vibrational spectra [6], or in the electronic [1] or the magnetic (spin excitation) band gap [27].

Here we introduce a new class of rectifiers that utilize the quintessential quantum mechanical property of entanglement. By coupling two segments of a quantum spin chain through a two-way junction that entangles the interface spins, we demonstrate boosts of spin and thermal current rectification factors of at least three orders of magnitude even for few-spin systems, which implies that components based on entanglement may vastly outperform those based on band structure. To illustrate the mechanism, we concentrate on the few-spin example shown in Fig. 1. It consists of six spin-1/2 particles in a two-segment chain connected by a 'double-slit' interface and described by an XXZ Heisenberg Hamiltonian of the form

$$
\begin{split}
\hat{H}/J = \hat{X}_{12} &+ (1+\delta)\hat{X}_{23} + \hat{X}_{24} + (J_{34}/J)\,\hat{X}_{34} \\
&+ \hat{X}_{35} + \hat{X}_{45} + \hat{X}_{56} + \Delta \hat{Z}_{12},
\end{split}
\tag{1}
$$

where $\hat{X}_{ij} = \hat{\sigma}_x^{(i)} \hat{\sigma}_x^{(j)} + \hat{\sigma}_y^{(i)} \hat{\sigma}_y^{(j)}$ is the XX spin-exchange operator, while $\hat{Z}_{ij} = \hat{\sigma}_z^{(i)} \hat{\sigma}_z^{(j)}$ is the Z coupling that induces relative energy shifts. The Pauli matrices for the $i$th spin are denoted $\hat{\sigma}_\alpha^{(i)}$ for $\alpha = x,y,z$, and we use units where $\hbar = k_B = 1$. The exchange coupling $J$ sets the overall scale of the problem, while the exchange between the interface spins is $J_{34}$. A prerequisite of rectification is a breaking of left-right symmetry, which we implement by a non-zero Z coupling parametrized by $\Delta$, although we note that this could equally well be provided by local magnetic fields applied to spins 1 and 2 [23, 24]. Due to the interface, we also have to consider up-down symmetry, i.e. the symmetry between the upper and lower part of the interface, and we parametrize its breaking by adding $\delta$ to the exchange between spins 2 and 3 in Fig. 1.
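As a concrete illustration, the six-spin Hamiltonian of Eq. (1) can be assembled from Kronecker products of Pauli matrices. The following is a minimal NumPy sketch (not the authors' code; the default parameter values are illustrative only):

```python
import numpy as np

# Single-spin Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(single, site, n=6):
    """Embed a single-spin operator at `site` (0-based) in an n-spin chain."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def X(i, j):
    """XX exchange operator X_ij (spins labeled 1..6 as in the text)."""
    i, j = i - 1, j - 1
    return embed(sx, i) @ embed(sx, j) + embed(sy, i) @ embed(sy, j)

def Z(i, j):
    """Z coupling Z_ij = sigma_z^(i) sigma_z^(j)."""
    return embed(sz, i - 1) @ embed(sz, j - 1)

def H(J=1.0, J34=-6.3, Delta=5.0, delta=0.01):
    """Eq. (1): two chain segments joined by the spin-3/spin-4 interface."""
    return J * (X(1, 2) + (1 + delta) * X(2, 3) + X(2, 4) + (J34 / J) * X(3, 4)
                + X(3, 5) + X(4, 5) + X(5, 6) + Delta * Z(1, 2))
```

Since every term either exchanges or preserves spin projections, $\hat{H}$ commutes with the total magnetization $\sum_i \hat{\sigma}_z^{(i)}$, which serves as a quick consistency check on the construction.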
To study rectification of currents in the system, we couple it to thermal baths on the left and right, see Fig. 1. One bath is cold and forces the adjacent spin into its ground state, while the other is hot and forces the adjacent spin into a statistical mixture of up and down. The presence of the baths means we have an open (non-unitary) quantum system, which we describe using the density operator $\hat{\rho}$ and the corresponding Lindblad master equation formalism. The evolution of $\hat{\rho}$ is determined by the Lindblad master equation [30, 31]

$$
\frac{\partial \hat{\rho}}{\partial t} = \mathcal{L}[\hat{\rho}] = -i[\hat{H}, \hat{\rho}] + \mathcal{D}_1[\hat{\rho}] + \mathcal{D}_6[\hat{\rho}],
\tag{2}
$$

where $[\bullet,\bullet]$ is the commutator, $\mathcal{L}[\hat{\rho}]$ is the Lindblad superoperator and $\mathcal{D}_n[\hat{\rho}]$ is a dissipative term describing the action of

* poulsen@phys.au.dk
---PAGE_BREAK---

Figure 1. **Entanglement-enhanced quantum spin diode.** Illustration of a few-spin model of a quantum rectification device consisting of two segments, an XXZ chain on the left and an XX chain on the right, connected by a 'two-way' interface. The device is connected to thermal baths at each end, one at low (blue) and one at high (red) temperature. The exchange coupling is $J$, while the two spins in the interface are coupled with an exchange coupling $J_{34}$. The Z coupling (anisotropy) is $\Delta J$ and controls the left-right asymmetry of the diode. The dimensionless $\delta$ measures the up-down symmetry-breaking in the system. During operation in reverse bias as shown here, the central interface spins are in the maximally-entangled Bell state illustrated in the top left-hand corner. In the bottom right-hand corner the numbering of the spins is shown.

the baths:

$$
\mathcal{D}_n[\hat{\rho}] = \gamma \left[ \lambda_n \left( \hat{\sigma}_+^{(n)} \hat{\rho}\, \hat{\sigma}_-^{(n)} - \frac{1}{2} \left\{ \hat{\sigma}_-^{(n)} \hat{\sigma}_+^{(n)}, \hat{\rho} \right\} \right) + (1 - \lambda_n) \left( \hat{\sigma}_-^{(n)} \hat{\rho}\, \hat{\sigma}_+^{(n)} - \frac{1}{2} \left\{ \hat{\sigma}_+^{(n)} \hat{\sigma}_-^{(n)}, \hat{\rho} \right\} \right) \right],
$$

where $\hat{\sigma}_{+}^{(n)} = (\hat{\sigma}_{-}^{(n)})^{\dagger} = (\hat{\sigma}_{x}^{(n)} + i\hat{\sigma}_{y}^{(n)})/2$ and $\{\bullet, \bullet\}$ denotes the anticommutator. Here $\gamma$ is the strength of the interaction with the baths, which we set to $\gamma = J$ unless otherwise stated; how the rectification is affected by this coupling strength is studied in Appendix A13. The nature of the interaction is determined by $\lambda_n$, and we focus on $\lambda_1$ and $\lambda_6$ set to either 0 or 0.5. If $\lambda_n = 0$, the bath forces the spin down ($|\downarrow\rangle_n\langle\downarrow|$), corresponding to a low-temperature bath, and if $\lambda_n = 0.5$, the bath forces the spin into a statistical mixture of up and down ($(|\downarrow\rangle_n\langle\downarrow| + |\uparrow\rangle_n\langle\uparrow|)/2$), corresponding to a high-temperature bath. The baths induce currents, and the system is generally in a non-equilibrium state. However, after sufficient time it reaches a steady state (ss) satisfying $\partial_t \hat{\rho}_{ss} = \mathcal{L}[\hat{\rho}_{ss}] = 0$. It is this steady state that determines the rectification properties. For $\delta \neq 0$, the steady state is unique and independent of the initial state (see Appendix A1 for further details). We define the steady-state spin current [23] $\mathcal{J} = \text{tr}(\hat{j}_{12}\hat{\rho}_{ss}) = \text{tr}(\hat{j}_{56}\hat{\rho}_{ss})$ as the expectation value of the operator $\hat{j}_{ij} = 2J(\hat{\sigma}_x^{(i)}\hat{\sigma}_y^{(j)} - \hat{\sigma}_y^{(i)}\hat{\sigma}_x^{(j)})$ in the steady state. By forward bias, we denote the situation where the hot bath interacts with spin 1, while the cold bath interacts with spin 6 and a current $\mathcal{J}_f$ flows from left to right. In reverse bias the cold bath is at spin 1 and the hot bath at spin 6, with a (generally negative) current $\mathcal{J}_r$ flowing from right to left in Fig. 1. To obtain a well-functioning diode, we must demand that
1. no spin current is allowed to flow in reverse bias, $\mathcal{J}_r \sim 0$;

2. an appreciable spin current can flow in forward bias, $\mathcal{J}_f \gg -\mathcal{J}_r$.

A measure of quality that contains both requirements is the rectification

$$
\mathcal{R} = -\frac{\mathcal{J}_f}{\mathcal{J}_r},
$$

which tends to $\mathcal{R} = 1$ when transport is symmetric, while a good diode needs $\mathcal{R} \gg 1$. An alternative quality measure is the contrast, defined as

$$
C = \left| \frac{\mathcal{J}_f + \mathcal{J}_r}{\mathcal{J}_f - \mathcal{J}_r} \right|,
$$

such that $C = 0$ is equivalent to $\mathcal{R} = 1$, while $C = 1$ for $\mathcal{J}_r \to 0$, i.e. for the perfect diode.
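These quantities follow directly from the steady state of Eq. (2). The sketch below (a hedged illustration, not the authors' code) vectorizes the Lindbladian for a small toy chain, three spins with a Z coupling on the first bond, and evaluates $\mathcal{J}_f$, $\mathcal{J}_r$, $\mathcal{R}$ and $C$; the six-spin model of Eq. (1) differs only in the Hamiltonian and bath sites:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = (sx + 1j * sy) / 2          # sigma_+
sm = sp.conj().T                 # sigma_-
I2 = np.eye(2, dtype=complex)

def embed(a, site, n):
    mats = [I2] * n
    mats[site] = a
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def liouvillian(H, jumps):
    """Matrix form of Eq. (2) acting on row-major vectorized density matrices."""
    d = H.shape[0]
    I = np.eye(d, dtype=complex)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in jumps:
        LdL = Lk.conj().T @ Lk
        L += np.kron(Lk, Lk.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)
    return L

def steady_state(H, jumps):
    """Right null vector of the Liouvillian, normalized to unit trace."""
    d = H.shape[0]
    vh = np.linalg.svd(liouvillian(H, jumps))[2]
    rho = vh[-1].conj().reshape(d, d)
    return rho / np.trace(rho)

# Toy model: 3-spin chain, H = X_12 + X_23 + Delta Z_12 (units of J = 1)
n, Delta, gamma = 3, 2.0, 1.0
Xop = lambda i, j: embed(sx, i, n) @ embed(sx, j, n) + embed(sy, i, n) @ embed(sy, j, n)
H = Xop(0, 1) + Xop(1, 2) + Delta * embed(sz, 0, n) @ embed(sz, 1, n)
jcur = lambda i, j: 2 * (embed(sx, i, n) @ embed(sy, j, n) - embed(sy, i, n) @ embed(sx, j, n))

def bath(site, lam):
    """Jump operators of D_n: rates gamma*lam (raising) and gamma*(1-lam) (lowering)."""
    return [np.sqrt(gamma * lam) * embed(sp, site, n),
            np.sqrt(gamma * (1 - lam)) * embed(sm, site, n)]

rho_f = steady_state(H, bath(0, 0.5) + bath(2, 0.0))   # forward: hot left, cold right
rho_r = steady_state(H, bath(0, 0.0) + bath(2, 0.5))   # reverse: cold left, hot right
Jf = np.trace(jcur(0, 1) @ rho_f).real
Jr = np.trace(jcur(0, 1) @ rho_r).real
R = -Jf / Jr
C = abs((Jf + Jr) / (Jf - Jr))
```

In the steady state the current is the same on every bond (no dissipation acts on the middle spin), a useful sanity check; note also the identity $C = |\mathcal{R}-1|/|\mathcal{R}+1|$ linking the two quality measures.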
The rectification results for the six-spin implementation of Fig. 1 are shown in Fig. 2 as a function of the relevant parameters of the model. The contour plot in Fig. 2A shows $\mathcal{R}$ for a small up-down symmetry-breaking of $\delta = 0.01$ as a function of $J_{34}$ and $\Delta$. Our key discovery is the region in the bottom right-hand corner where values of $\mathcal{R} > 10^5$ are reached. These occur approximately along a critical line $J_{34}^c(\Delta) = -(\Delta + 1.3)J$ (see Appendix A2). Fig. 2B demonstrates the dependence on $\delta$ along this line, showing that $\delta \ll 1$ gives higher $\mathcal{R}$ as a function of $\Delta$. This may be advantageous for experimental realization, as a small asymmetry in the up-down symmetry is likely to occur and is a useful control parameter. We also confirm that large rectifications are mainly due to suppression of $\mathcal{J}_r$, see Fig. 2C.

In previous studies [23, 29], it has been shown that significant rectification can occur in linear two-segment chains as a function of $\Delta$ due to the band gap induced by the Z coupling in one segment. For comparison, the dashed line in Fig. 2B shows the rectification when spin 3 in Fig. 1 is removed, effectively yielding a linear chain. The increase in rectification, and hence diode quality, of the two-way design that includes spin 3 is seen to be three orders of magnitude or more, and
---PAGE_BREAK---

Figure 2. **Rectification magnitudes.** (A) $\mathcal{R}$ as a function of $\Delta$ and $J_{34}$ for $\delta = 0.01$. (B) $\mathcal{R}$ as a function of $\Delta$ for different values of $\delta$ and $J_{34} = J_{34}^c(\Delta)$ (solid lines). The dashed line displays $\mathcal{R}$ for a linear chain with spin 3 removed. (C) Steady-state currents $\mathcal{J}_f$ and $\mathcal{J}_r$ for $\delta = 0.01$ and $J_{34} = J_{34}^c(\Delta)$.

is our main finding. Moreover, it is clearly not a result of the band gap between the segments, as it already occurs in very small systems that should display far lower $\mathcal{R}$.

To elucidate the mechanism behind the large rectifications in our design, we focus on reverse bias and the associated steady state $\hat{\rho}_{ss,r}$. Compared to a linear chain, we have spins 3 and 4 in the two-way interface, and to study their behavior we take a partial trace over all the other spins, i.e. $\hat{\rho}_{ss,r}^{(34)} = \mathrm{tr}_{1,2,5,6}[\hat{\rho}_{ss,r}]$. Furthermore, we define the maximally entangled two-particle Bell state $|\Psi_-\rangle = (|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)/\sqrt{2}$. To quantify the entanglement between spins 3 and 4, we consider the fidelity (overlap) of the density matrix and the Bell state, i.e.

$$
F(\hat{\rho}_{ss,r}^{(34)}, |\Psi_-\rangle) = \langle\Psi_-|\hat{\rho}_{ss,r}^{(34)}|\Psi_-\rangle,
$$

which is unity if and only if $\hat{\rho}_{ss,r}^{(34)} = |\Psi_-\rangle\langle\Psi_-|$. This fidelity is shown in Fig. 3A along with the contrast $C$, and we see very clearly that $C \sim 1$ is associated with spins 3 and 4 being in the maximally-entangled state $|\Psi_-\rangle$. Hence, our numerical results lead us to conclude that the large rectification is a direct result of entanglement at the interface.
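The reduced interface state and the Bell fidelity can be obtained with a generic qubit partial trace. The sketch below (illustrative, not the paper's code) verifies the extreme case $\hat{\rho}^{(34)} = |\Psi_-\rangle\langle\Psi_-|$ for the pure four-spin state $|\!\downarrow\downarrow\rangle|\Psi_-\rangle$ used in the analysis that follows:

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)   # |Psi_->
psi = np.kron(np.kron(dn, dn), bell)                      # |down down>|Psi_->
rho = np.outer(psi, psi.conj())                           # 4-spin density matrix

def ptrace_keep(rho, keep, n):
    """Trace out every qubit of an n-qubit density matrix except those in `keep` (0-based)."""
    r = rho.reshape([2] * (2 * n))
    removed = 0
    for q in range(n):
        if q in keep:
            continue
        a = q - removed                          # current ket axis of qubit q
        r = np.trace(r, axis1=a, axis2=a + n - removed)  # pair with its bra axis
        removed += 1
    d = 2 ** len(keep)
    return r.reshape(d, d)

rho34 = ptrace_keep(rho, keep={2, 3}, n=4)       # reduced state of spins 3 and 4
F = (bell.conj() @ rho34 @ bell).real            # Bell-state fidelity, here exactly 1
```

For the steady state of the full six-spin model one would trace out spins 1, 2, 5 and 6 in the same way (`keep={2, 3}, n=6`).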
The entanglement-enhanced rectification can be explained analytically in the following way. For $\mathcal{J}_r \sim 0$, spins 1 and 2 are well described by $|\!\downarrow\rangle$ in reverse bias due to the cold bath (see Appendix A3). This implies a factorization of the steady-state density operator according to $\hat{\rho}_{ss,r} = |\psi\rangle\langle\psi| \otimes \hat{\rho}_{ss,r}^{(56)}$, where $|\psi\rangle = |\!\downarrow\downarrow\rangle|\Psi_-\rangle$ and $\hat{\rho}_{ss,r}^{(56)}$ describes spins 5 and 6. The Hamiltonian acting on spins 1 through 4 yields

$$
\hat{H}_{1\text{-}4}/J\,|\psi\rangle = \sqrt{2}\,\delta\,|\!\downarrow\uparrow\downarrow\downarrow\rangle + (\Delta - 2J_{34}/J)\,|\psi\rangle.
$$

Hence, $|\psi\rangle$ is an eigenstate of $\hat{H}_{1\text{-}4}$ for $\delta \ll 1$. Furthermore, a spin excitation at spin 3 or 4 will not be able to propagate

Figure 3. **Rectification mechanism, sensitivity and heat rectification.** (A) Contrast $C$ and fidelity $F(\hat{\rho}_{ss,r}^{(34)}, |\Psi_-\rangle)$ (see text) as a function of $\Delta$, where $\delta = 0.01$. (B) $\mathcal{R}$ as a function of $h_3$ (for $h_4 = \delta' = 0$), $h_4$ (for $h_3 = \delta' = 0$) and $\delta'$ (for $h_3 = h_4 = 0$) with $\delta = 0.03$ and $\Delta = 5$. (C) Rectification $\mathcal{R}$ as a function of the coherence time $T$ for $\delta = 0.1$ and $\Delta = 5$. This is done without error correction (solid black), with error correction (solid red) and for the linear chain with spin 3 removed (dashed black). (D) $\mathcal{R}_Q$ as a function of $h$ for different values of $\delta$ and $J_{34} = -J_{34}^c(h/J)$ (solid lines). The dashed line displays $\mathcal{R}_Q$ for a linear chain with spin 3 removed. For plots A-C the parametrization $J_{34} = J_{34}^c(\Delta)$ was used.

to spin 2, since the exchange terms $\hat{X}_{23}$ and $\hat{X}_{24}$ destructively interfere. The same is true for excitations at spin 5 or 6, see Appendix A4. This explains why the $|\Psi_-\rangle$ state of spins 3 and 4 naturally leads to $\mathcal{J}_r \sim 0$ for $\delta \ll 1$, as seen in Fig. 2B. While this seems to indicate that $\delta = 0$ constitutes the perfect diode, that system has similar currents in both forward and reverse bias, and therefore a small but non-zero $\delta$ is required (see Appendix A5 for further details).
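The interference argument can be checked numerically on the four leftmost spins. The sketch below builds only the terms of Eq. (1) that act on spins 1-4 (the parameter values are illustrative; $J = 1$) and confirms that $|\psi\rangle = |\!\downarrow\downarrow\rangle|\Psi_-\rangle$ is an exact eigenstate with eigenvalue $\Delta - 2J_{34}/J$ when $\delta = 0$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(a, site, n=4):
    mats = [I2] * n
    mats[site] = a
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def X(i, j):
    return embed(sx, i) @ embed(sx, j) + embed(sy, i) @ embed(sy, j)

Delta, J34, delta = 5.0, -6.3, 0.0            # illustrative values, J = 1
# Terms of Eq. (1) restricted to spins 1-4 (0-based sites 0-3)
H14 = (X(0, 1) + (1 + delta) * X(1, 2) + X(1, 3) + J34 * X(2, 3)
       + Delta * embed(sz, 0) @ embed(sz, 1))

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
psi = np.kron(np.kron(dn, dn), bell)          # |down down>|Psi_->

# For delta = 0, X_23 and X_24 interfere destructively, leaving an eigenstate
Hpsi = H14 @ psi
E = Delta - 2 * J34                           # predicted eigenvalue (units of J)
```

Repeating this with a small non-zero `delta` leaves the residual $\sqrt{2}\,\delta\,|\!\downarrow\uparrow\downarrow\downarrow\rangle$ quoted in the text.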
Next, we study the sensitivity of the rectification to local magnetic fields, coupling-strength perturbations and finite coherence times. We expect spins 3 and 4 to be most sensitive to magnetic fields, while the coupling of spins 4 and 5 should be the more sensitive coupling parameter due to the symmetry of the system (see Appendix A6). Hence, we add to Eq. 1 perturbations of the form $\hat{H}' = h_3\hat{\sigma}_z^{(3)} + h_4\hat{\sigma}_z^{(4)} + \delta'J\hat{X}_{45}$. Fig. 3B shows $\mathcal{R}$ as a function of $h_3$ ($h_4 = \delta' = 0$), $h_4$ ($h_3 = \delta' = 0$) and $\delta'$ ($h_3 = h_4 = 0$) for $\delta = 0.03$. The largest $\mathcal{R}$ requires magnetic fields of less than 20% of $J$, which is within experimental precision for, e.g., superconducting circuits [32]. Fig. 3B also shows $\mathcal{R}$ as a function of $\delta'$ and indicates that $\delta' < \delta$ is the region of large rectification. The rapid decrease in $\mathcal{R}$ could be used to detect variations in couplings in the system. Since the functionality of the diode is based on purely quantum mechanical effects, decoherence of especially spins 3 and 4 is expected to reduce the rectification. In Fig. 3C the rectification for both the diode and the reduced linear system is plotted for a setup of spins with finite coherence times $T = T_1 = T_2$ for both decay $T_1$ and dephasing $T_2$.

---PAGE_BREAK---

Current quantum technologies have an estimated $TJ \sim 4 \cdot 10^4$ for superconducting circuits [33] and $TJ \sim 10^4$ for trapped ions [34, 35], for which the entanglement-enhanced diode performs better (see Appendix A7). For near-term devices we provide an autonomous error-correction scheme that may enhance the rectification, as seen in Fig. 3C; the details are given in Appendix A7. With improving coherence times in future devices, the rectification, and thus the benefit of the diode, increases essentially linearly.
|
| 128 |
+
|
| 129 |
+
Finally, we generalize to heat currents. For this the relevant Hamiltonian is
|
| 130 |
+
|
| 131 |
+
$$ \hat{H}_Q = \hat{H}(\Delta = 0) + h(\hat{\sigma}_z^{(1)} + \hat{\sigma}_z^{(2)}) + \omega \sum_{i=1}^{6} \hat{\sigma}_z^{(i)}. $$
|
| 132 |
+
|
| 133 |
+
To study rectification of heat currents in the system, we couple the system to thermal baths at finite temperature. One can define the heat current $K_{f/r}$ as the heat exchanged between the system and one of the baths. As before, the heat rectification is defined as $R_Q = -K_f/K_r$. For $\omega \gg h, J_{34}, J$ the system behaves like the original model, and rectification values similar to those seen in Fig. 2B are observed (see Appendix A8). For $\omega = 0$ the rectification of the heat diode is shown in Fig. 3D for a cold bath of temperature $0.1J$ and a hot bath of temperature $10.1J$. For comparison, the rectification of the reduced system where spin 3 is removed is shown with a dashed line in Fig. 3D. The proposed diode thus generalizes very well to heat currents, where rectifications of $\gtrsim 10^8$ can be reached. For full details on the model, see Appendix A9.
The entanglement-enhanced rectification diode proposed here has many variations (see Appendix A10) and generalizes straightforwardly to larger systems (see Appendix A11). It is built within a generic model with no particular implementation in mind and could be realized with several current quantum technology platforms, including surface chains of atoms [36, 37], trapped ions [38, 39], semiconductor structures, doped silicon systems, quantum dots and NV centers [40, 41], Rydberg atoms [42] and superconducting circuits [33]. Finally, we note that the rectification found here is due to a loss in conductance from the addition of, or increase in, the exchange between spins 3 and 4. Interestingly, this unintuitive behavior has a classical analog known as Braess's paradox [43] and is seen to a lesser extent in traffic, mechanical, electrical, and microfluidic networks [44–46].
We thank Philip Hofmann and Jill Miwa for feedback on the text, as well as Kristen Kaasbjerg and Antti-Pekka Jauho for discussions. We are particularly grateful to Sai Vinjanampathy, Suddhasatta Mahapatra, and Bhaskaran Muralidharan for careful feedback and discussions on the setup and technical details. K.P. and N.T.Z. acknowledge funding from the Independent Research Fund Denmark DFF-FNU. A.C.S. acknowledges financial support from the São Paulo Research Foundation (FAPESP) (Grant No. 2019/22685-1). L.B.K. acknowledges financial support from the Carlsberg Foundation.
[1] N. W. Ashcroft, N. D. Mermin, *Solid State Physics* (Harcourt College Publishers, 1976).

[2] I. Žutić, J. Fabian, S. Das Sarma, Spintronics: Fundamentals and applications. *Rev. Mod. Phys.* **76**, 323 (2004).

[3] S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, D. M. Treger, Spintronics: a spin-based electronics vision for the future. *Science* **294**, 1488 (2001).

[4] F. Giazotto, T. T. Heikkilä, A. Luukanen, A. M. Savin, J. P. Pekola, Opportunities for mesoscopics in thermometry and refrigeration: Physics and applications. *Rev. Mod. Phys.* **78**, 217 (2006).

[5] N. A. Roberts, D. G. Walker, A review of thermal rectification observations and models in solid materials. *International Journal of Thermal Sciences* **50**, 648 (2011).

[6] N. Li, J. Ren, L. Wang, G. Zhang, P. Hänggi, B. Li, Colloquium: Phononics: Manipulating heat flow with electronic analogs and beyond. *Rev. Mod. Phys.* **84**, 1045 (2012).

[7] G. Benenti, G. Casati, K. Saito, R. S. Whitney, Fundamental aspects of steady-state conversion of heat to work at the nanoscale. *Physics Reports* **694**, 1 (2017).

[8] B. Liang, B. Yuan, J.-C. Cheng, Acoustic Diode: Rectification of Acoustic Energy Flux in One-Dimensional Systems. *Phys. Rev. Lett.* **103**, 104301 (2009).

[9] B. Liang, X. S. Guo, J. Tu, D. Zhang, J. C. Cheng, An acoustic rectifier. *Nature Materials* **9**, 989 (2010).

[10] R. Fleury, D. L. Sounas, C. F. Sieck, M. R. Haberman, A. Alù, Sound isolation and giant linear nonreciprocity in a compact acoustic circulator. *Science* **343**, 516 (2014).

[11] T. Nomura, X. X. Zhang, S. Zherlitsyn, J. Wosnitza, Y. Tokura, N. Nagaosa, S. Seki, Phonon Magnetochiral Effect. *Phys. Rev. Lett.* **122**, 145901 (2019).

[12] M. Terraneo, M. Peyrard, G. Casati, Controlling the Energy Flow in Nonlinear Lattices: A Model for a Thermal Rectifier. *Phys. Rev. Lett.* **88**, 094302 (2002).

[13] B. Li, L. Wang, G. Casati, Thermal Diode: Rectification of Heat Flux. *Phys. Rev. Lett.* **93**, 184301 (2004).

[14] B. Li, L. Wang, G. Casati, Negative differential thermal resistance and thermal transistor. *Applied Physics Letters* **88**, 143501 (2006).

[15] C. W. Chang, D. Okawa, A. Majumdar, A. Zettl, Solid-state thermal rectifier. *Science* **314**, 1121 (2006).

[16] M. J. Martínez-Pérez, A. Fornieri, F. Giazotto, Rectification of electronic heat current by a hybrid thermal diode. *Nature Nanotechnology* **10**, 303 (2015).

[17] H. Wang, S. Hu, K. Takahashi, X. Zhang, H. Takamatsu, J. Chen, Experimental study of thermal rectification in suspended monolayer graphene. *Nature Communications* **8**, 15843 (2017).

[18] G. Benenti, G. Casati, T. Prosen, D. Rossini, Negative differential conductivity in far-from-equilibrium quantum spin chains. *EPL (Europhysics Letters)* **85**, 37001 (2009).

[19] T. Prosen, Open XXZ Spin Chain: Nonequilibrium Steady State and a Strict Bound on Ballistic Transport. *Phys. Rev. Lett.* **106**, 217206 (2011).

[20] D. Karevski, V. Popkov, G. M. Schütz, Exact Matrix Product Solution for the Boundary-Driven Lindblad XXZ Chain. *Phys. Rev. Lett.* **110**, 047201 (2013).
[21] U. Bissbort, C. Teo, C. Guo, G. Casati, G. Benenti, D. Poletti, Minimal motor for powering particle motion from spin imbalance. *Phys. Rev. E* **95**, 062143 (2017).

[22] K. Joulain, J. Drevillon, Y. Ezzahri, J. Ordonez-Miranda, Quantum Thermal Transistor. *Phys. Rev. Lett.* **116**, 200601 (2016).

[23] Y. Yan, C. Q. Wu, B. Li, Control of heat transport in quantum spin systems. *Phys. Rev. B* **79**, 014207 (2009).

[24] L. Zhang, Y. Yan, C. Q. Wu, J. S. Wang, B. Li, Reversal of thermal rectification in quantum systems. *Phys. Rev. B* **80**, 172301 (2009).

[25] P. H. Guimarães, G. T. Landi, M. J. de Oliveira, Thermal rectification in anharmonic chains under an energy-conserving noise. *Phys. Rev. E* **92**, 062120 (2015).

[26] V. Balachandran, G. Benenti, E. Pereira, G. Casati, D. Poletti, Heat current rectification in segmented XXZ chains. *Phys. Rev. E* **99**, 032136 (2019).

[27] G. T. Landi, E. Novais, M. J. de Oliveira, D. Karevski, Flux rectification in the quantum XXZ chain. *Phys. Rev. E* **90**, 042142 (2014).

[28] K. A. van Hoogdalem, D. Loss, Rectification of spin currents in spin chains. *Phys. Rev. B* **84**, 024402 (2011).

[29] V. Balachandran, G. Benenti, E. Pereira, G. Casati, D. Poletti, Perfect Diode in Quantum Spin Chains. *Phys. Rev. Lett.* **120**, 200603 (2018).

[30] G. Lindblad, On the generators of quantum dynamical semigroups. *Communications in Mathematical Physics* **48**, 119 (1976).

[31] H. P. Breuer, F. Petruccione, *The Theory of Open Quantum Systems* (Oxford University Press, 2002).

[32] Y. Chen, C. Neill, P. Roushan, N. Leung, M. Fang, R. Barends, J. Kelly, B. Campbell, Z. Chen, B. Chiaro, A. Dunsworth, E. Jeffrey, A. Megrant, J. Y. Mutus, P. J. J. O'Malley, C. M. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, M. R. Geller, A. N. Cleland, J. M. Martinis, Qubit Architecture with High Coherence and Fast Tunable Coupling. *Phys. Rev. Lett.* **113**, 220502 (2014).

[33] M. H. Devoret, R. J. Schoelkopf, Superconducting circuits for quantum information: an outlook. *Science* **339**, 1169 (2013).

[34] H. Häffner, C. F. Roos, R. Blatt, Quantum computing with trapped ions. *Physics Reports* **469**, 155 (2008).

[35] M. Johanning, A. F. Varón, C. Wunderlich, Quantum simulations with cold trapped ions. *Journal of Physics B: Atomic, Molecular and Optical Physics* **42**, 154009 (2009).

[36] C. F. Hirjibehedin, C. P. Lutz, A. J. Heinrich, Spin coupling in engineered atomic structures. *Science* **312**, 1021 (2006).

[37] A. A. Khajetoorians, J. Wiebe, B. Chilian, R. Wiesendanger, Realizing all-spin-based logic operations atom by atom. *Science* **332**, 1062 (2011).

[38] D. Porras, J. I. Cirac, Effective quantum spin systems with trapped ions. *Phys. Rev. Lett.* **92**, 207901 (2004).

[39] R. Blatt, C. F. Roos, Quantum simulations with trapped ions. *Nature Physics* **8**, 277 (2012).

[40] J. J. L. Morton, D. R. McCamey, M. A. Eriksson, S. A. Lyon, Embracing the quantum limit in silicon computing. *Nature* **479**, 345 (2011).

[41] D. D. Awschalom, L. C. Bassett, A. S. Dzurak, E. L. Hu, J. R. Petta, Quantum Spintronics: Engineering and Manipulating Atom-Like Spins in Semiconductors. *Science* **339**, 1174 (2013).

[42] M. Saffman, T. G. Walker, K. Mølmer, Quantum information with Rydberg atoms. *Rev. Mod. Phys.* **82**, 2313 (2010).

[43] D. Braess, A. Nagurney, T. Wakolbinger, On a Paradox of Traffic Planning. *Transportation Science* **39**, 446 (2005).

[44] J. E. Cohen, P. Horowitz, Paradoxical behaviour of mechanical and electrical networks. *Nature* **352**, 699 (1991).

[45] L. S. Nagurney, A. Nagurney, Physical proof of the occurrence of the Braess Paradox in electrical circuits. *EPL (Europhysics Letters)* **115**, 28004 (2016).

[46] D. J. Case, Y. Liu, I. Z. Kiss, J.-R. Angilella, A. E. Motter, Braess's paradox and programmable behaviour in microfluidic networks. *Nature* **574**, 647 (2019).
# Supplementary Material
## Appendix A1. Uniqueness of the Steady State
In this section, we focus on the question: *Is the steady state dependent on the initial state?* To answer this question, we define the super-operator $\mathcal{L}$, which describes the evolution of the density matrix $\hat{\rho}$ of the diode through
$$ \frac{\partial \hat{\rho}}{\partial t} = \mathcal{L}[\hat{\rho}] = -i[\hat{H}, \hat{\rho}(t)] + \mathcal{D}_1[\hat{\rho}(t)] + \mathcal{D}_6[\hat{\rho}(t)] \quad (3) $$
as defined in Eq. (2). $\mathcal{D}_n[\hat{\rho}]$ is another super-operator describing the action of the environment on our system and is defined by
$$ \mathcal{D}_n[\hat{\rho}] = \gamma \left[ \lambda_n \left( \hat{\sigma}_+^{(n)} \hat{\rho} \hat{\sigma}_-^{(n)} - \frac{1}{2} \{ \hat{\sigma}_-^{(n)} \hat{\sigma}_+^{(n)}, \hat{\rho} \} \right) + (1-\lambda_n) \left( \hat{\sigma}_-^{(n)} \hat{\rho} \hat{\sigma}_+^{(n)} - \frac{1}{2} \{ \hat{\sigma}_+^{(n)} \hat{\sigma}_-^{(n)}, \hat{\rho} \} \right) \right]. $$
Thus it contains operators acting from both the left and the right, making Eq. (3) difficult to solve in its current form. Therefore, one can define the operation $|\rho\rangle\rangle = \text{vec}(\hat{\rho})$ that stacks the columns of $\hat{\rho}$ on top of each other, resulting in a vector of length $D^2 = 2^{2\cdot 6}$. For example one would get
$$ \text{vec} \begin{pmatrix} \rho_{1,1} & \rho_{1,2} \\ \rho_{2,1} & \rho_{2,2} \end{pmatrix} = \begin{pmatrix} \rho_{1,1} \\ \rho_{2,1} \\ \rho_{1,2} \\ \rho_{2,2} \end{pmatrix}. $$
Using this operation one can show that
$$ \text{vec}(\hat{A}\hat{\rho}\hat{C}) = (\hat{C}^T \otimes \hat{A})\text{vec}(\hat{\rho}). $$
With this identity we can write Eq. (3) as
$$ \frac{\partial}{\partial t} |\rho\rangle\rangle = \hat{\mathcal{L}} |\rho\rangle\rangle, \quad (4) $$
where $\hat{\mathcal{L}}$ is now a $D^2 \times D^2$ matrix, with $D = 2^6$, that acts on $|\rho\rangle\rangle$ only from the left. It can be written as
$$
\begin{aligned}
\hat{\mathcal{L}} &= -i (\mathbb{I} \otimes \hat{H} - \hat{H}^T \otimes \mathbb{I}) + \hat{\mathcal{D}}_1 + \hat{\mathcal{D}}_6 \\
\hat{\mathcal{D}}_n &= \gamma \Biggl[ \lambda_n \Biggl( \hat{\sigma}_+^{(n)} \otimes \hat{\sigma}_+^{(n)} - \frac{1}{2} \bigl(\mathbb{I} \otimes \hat{\sigma}_-^{(n)} \hat{\sigma}_+^{(n)} + \hat{\sigma}_-^{(n)} \hat{\sigma}_+^{(n)} \otimes \mathbb{I}\bigr) \Biggr) \\
&\qquad + (1-\lambda_n) \Biggl( \hat{\sigma}_-^{(n)} \otimes \hat{\sigma}_-^{(n)} - \frac{1}{2} \bigl(\mathbb{I} \otimes \hat{\sigma}_+^{(n)} \hat{\sigma}_-^{(n)} + \hat{\sigma}_+^{(n)} \hat{\sigma}_-^{(n)} \otimes \mathbb{I}\bigr) \Biggr) \Biggr].
\end{aligned}
$$
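The column-stacking identity and the vectorized Lindbladian are easy to check numerically. The sketch below is our own toy example, using a single qubit with pure decay instead of the 6-spin chain; all names and the toy Hamiltonian are choices made for illustration:

```python
import numpy as np

# Single-qubit toy model: basis |up>, |down>; sigma_+ raises |down> -> |up>
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                  # sigma_-
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def vec(rho):
    # column stacking: the columns of rho on top of each other
    return rho.flatten(order="F")

# 1) the identity vec(A rho C) = (C^T kron A) vec(rho)
rng = np.random.default_rng(1)
rho = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
assert np.allclose(vec(sp @ rho @ sm), np.kron(sm.T, sp) @ vec(rho))

# 2) vectorize a Lindbladian with Hamiltonian H and pure decay (gamma = 1)
H = 0.5 * sz

def rhs(rho):
    """dρ/dt evaluated directly in matrix form."""
    comm = -1j * (H @ rho - rho @ H)
    diss = sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm)
    return comm + diss

L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
     + np.kron(sp.T, sm)
     - 0.5 * (np.kron(I2, sp @ sm) + np.kron((sp @ sm).T, I2)))

# the matrix L acts on |rho>> from the left only
assert np.allclose(L @ vec(rho), vec(rhs(rho)))
```

The same construction scales to the full chain, where $L$ becomes the $4096 \times 4096$ matrix discussed above.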
Since $\hat{\mathcal{L}}$ is not Hermitian, it is not necessarily diagonalizable; but if we assume that it is, we can write the initial state as an expansion in the right eigenvectors $|e_n\rangle\rangle$ of $\hat{\mathcal{L}}$,
$$ |\rho(0)\rangle\rangle = \sum_{n=1}^{D^2} c_n |e_n\rangle\rangle. $$
Using this the differential equation (4) can easily be solved
$$ |\rho(t)\rangle\rangle = \sum_{n=1}^{D^2} c_n e^{v_n t} |e_n\rangle\rangle. $$
Here $v_n$ are eigenvalues of $\hat{\mathcal{L}}$ such that $\hat{\mathcal{L}}|e_n\rangle\rangle = v_n|e_n\rangle\rangle$. The eigenvalues $v_n$ are generally complex. The imaginary part of $v_n$ gives a time-dependent phase, while the real part ($\text{Re}(v_n) \le 0$) gives an exponential decay of the corresponding eigenvector, until after sufficient time only eigenvectors with eigenvalue $v_n = 0$ are left. Therefore, the steady state is a null eigenvector of $\hat{\mathcal{L}}$,
$$ \hat{\mathcal{L}} |\rho_{ss}\rangle\rangle = 0. \quad (5) $$
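Condition (5) can be solved with an off-the-shelf kernel routine. A minimal sketch for the same single-qubit decay toy model (our own example, not the full $4096 \times 4096$ Lindbladian of the chain):

```python
import numpy as np
from scipy.linalg import null_space

# Single-qubit decay Lindbladian in vectorized form
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                  # sigma_-
I2 = np.eye(2, dtype=complex)
H = 0.5 * np.diag([1.0, -1.0]).astype(complex)

L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
     + np.kron(sp.T, sm)
     - 0.5 * (np.kron(I2, sp @ sm) + np.kron((sp @ sm).T, I2)))

# every eigenvalue satisfies Re(v) <= 0: all other modes decay away
assert np.all(np.linalg.eigvals(L).real <= 1e-12)

# the kernel of L is one-dimensional -> the steady state is unique
kernel = null_space(L)
assert kernel.shape[1] == 1

rho_ss = kernel[:, 0].reshape(2, 2, order="F")
rho_ss = rho_ss / np.trace(rho_ss)        # fix the normalization tr(rho) = 1
assert np.allclose(rho_ss, np.diag([0.0, 1.0]))   # pure decay -> ground state
```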
Figure 4. **Uniqueness of steady state.** Eigenvalues $v_n$ of the Lindbladian $\mathcal{L}$ plotted for forward (**A**) and reverse bias (**B**). Below, the fidelity $F_{f/r}(t)$ between the density matrix $\hat{\rho}(t)$ and the steady state $\hat{\rho}_{ss,f/r}$ is plotted for each of the 10 initial states, in forward bias (**C**) and reverse bias (**D**). The values $\delta = 0.1$, $\Delta = 5$, $J_{34} = J_{34}^c(\Delta = 5)$ and $\gamma = J$ were used.
If only one such vector exists, all initial states will eventually decay to this vector, and consequently the steady state is unique. If more than one null vector exists, the steady state depends on the specific expansion coefficients $c_n$ of the initial state. Usually only one null vector exists [47], but this is not a given. For $\delta = 0$ the system has a symmetry, discussed in Appendix A5, resulting in a conserved quantity, and therefore multiple steady states can be found. As an illustration we have plotted the $D^2 = 2^{2\cdot 6}$ eigenvalues of $\mathcal{L}$ in forward and reverse bias in Figs. 4A-B, respectively. Even though it is difficult to see from the figure, it is easily verified from the underlying data that there is in fact only one null eigenvector and thus one unique steady state. The discussion here assumes that $\mathcal{L}$ can be diagonalized; a more rigorous approach can be used in general [48–51], but the above is sufficient for the present problem.
To further emphasize that the system discussed here indeed exhibits only one steady state for $\delta \neq 0$, we consider a number of different initial states. The initial states considered are
$$|\psi_1\rangle = |\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\rangle, \quad |\psi_2\rangle = |\downarrow\downarrow\downarrow\downarrow\downarrow\downarrow\rangle, \qquad (6a)$$

$$|\psi_3\rangle = (|\psi_1\rangle + |\psi_2\rangle)/\sqrt{2}, \quad |\psi_4\rangle = |{+}{+}{+}{+}{+}{+}\rangle, \qquad (6b)$$

$$|\psi_5\rangle = |{-}{-}{-}{-}{-}{-}\rangle, \quad |\psi_6\rangle = |\uparrow\downarrow\uparrow\downarrow\uparrow\downarrow\rangle, \qquad (6c)$$

$$|\psi_7\rangle = |\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow\rangle, \quad |\psi_8\rangle = (|\psi_6\rangle + |\psi_7\rangle)/\sqrt{2}, \qquad (6d)$$
where $|\pm\rangle = (|\uparrow\rangle \pm |\downarrow\rangle)/\sqrt{2}$ and the state $|\psi_3\rangle$ is the maximally entangled GHZ (Greenberger–Horne–Zeilinger) state for 6 spins. Furthermore, we also consider starting from the steady state in forward bias $\hat{\rho}_{ss,f}$ and reverse bias $\hat{\rho}_{ss,r}$. First, we compute the steady state $\hat{\rho}_{ss}$ by solving the eigenvalue problem in Eq. (5). We then numerically evolve each state in Eq. (6) in time to obtain the density operator $\hat{\rho}(t)$ at later times, and compute its distance to $\hat{\rho}_{ss}$ using the fidelity
$$ F_{f/r}(t) = \left( \text{tr} \sqrt{ \sqrt{\hat{\rho}_{ss,f/r}}\, \hat{\rho}(t) \sqrt{\hat{\rho}_{ss,f/r}} } \right)^2 . $$
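This Uhlmann fidelity is straightforward to implement for general density matrices. A sketch (the function name and the single-qubit test states are our own choices):

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

# sanity checks on single-qubit density matrices
up = np.diag([1.0, 0.0]).astype(complex)      # |up><up|
mixed = 0.5 * np.eye(2, dtype=complex)        # maximally mixed state

assert np.isclose(fidelity(up, up), 1.0)      # identical states
assert np.isclose(fidelity(mixed, mixed), 1.0)
assert np.isclose(fidelity(up, mixed), 0.5)   # overlap with the mixed state
```

For a pure reference state $|\Psi\rangle$ this reduces to $\langle\Psi|\hat{\rho}|\Psi\rangle$, the form used in Appendix A2.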
The results in Figs. 4C-D show that, for each of the initial states in Eq. (6), the evolved state $\hat{\rho}(t)$ is close to $\hat{\rho}_{ss}$ after sufficient time.
## Appendix A2. Rectification Dependence on $J_{34}$ and $\Delta$
From Fig. 2C, we see that the current in forward bias changes very little under a change in $J_{34}$ and $\Delta$. Therefore, we focus on understanding the insulating behavior in reverse bias. To do this, we split the system into three segments of two spins each: the left two spins are denoted *L*, the middle two gate spins *M*, and the right two spins *R*. We use the basis $|\downarrow\downarrow\rangle$, $|\Psi_\pm\rangle$ and $|\uparrow\uparrow\rangle$, where $|\Psi_\pm\rangle = (|\uparrow\downarrow\rangle \pm |\downarrow\uparrow\rangle)/\sqrt{2}$, for each pair of spins. The diagonal part of the Hamiltonian, $\hat{H}_0$, then becomes
$$
\begin{align*}
\hat{H} &= \hat{H}_0 + \hat{H}_{int} \\
\hat{H}_0/J &= \Delta |\downarrow\downarrow\rangle_L \langle\downarrow\downarrow| + \Delta |\uparrow\uparrow\rangle_L \langle\uparrow\uparrow| - (\Delta-2)|\Psi_+\rangle_L \langle\Psi_+| - (\Delta+2)|\Psi_-\rangle_L \langle\Psi_-| \\
&\quad + 2J_{34}/J\, |\Psi_+\rangle_M \langle\Psi_+| - 2J_{34}/J\, |\Psi_-\rangle_M \langle\Psi_-| + 2|\Psi_+\rangle_R \langle\Psi_+| - 2|\Psi_-\rangle_R \langle\Psi_-|,
\end{align*}
$$
where $|\downarrow\downarrow\rangle_L \langle\downarrow\downarrow|$ is an operator acting only on the left two spins. In Fig. 2A, we see that large $\mathcal{R}$ is associated with the two features $J_{34} \sim -(\Delta \pm 1)J$ and $\Delta \gg 1$. Therefore we set $J_{34} = -(\Delta + 1)J$ and change to the interaction picture with respect to $\hat{H}_0$,
$$
\begin{aligned}
\hat{H}_I/J = -(2+\delta)&(|\Psi_-\downarrow\downarrow\rangle_{LM} \langle\downarrow\downarrow\Psi_+| + |\downarrow\downarrow\Psi_+\rangle_{LM} \langle\Psi_-\downarrow\downarrow|) \\
&+ \delta(|\Psi_-\Psi_-\rangle_{LM} \langle\downarrow\downarrow\uparrow\uparrow| + |\downarrow\downarrow\uparrow\uparrow\rangle_{LM} \langle\Psi_-\Psi_-| + |\uparrow\uparrow\downarrow\downarrow\rangle_{LM} \langle\Psi_-\Psi_-| + |\Psi_-\Psi_-\rangle_{LM} \langle\uparrow\uparrow\downarrow\downarrow|) + R.T.,
\end{aligned}
$$
where R.T. contains all terms carrying a time-dependent phase, and the subscript *LM* denotes an operator acting on spins 1-4. The transitions contained in the Hamiltonian above are the ones obeying energy conservation. Here, the most interesting transition is
$$
|\downarrow\downarrow\uparrow\uparrow\rangle_{LM} \leftrightarrow |\Psi_{-}\Psi_{-}\rangle_{LM} \leftrightarrow |\uparrow\uparrow\downarrow\downarrow\rangle_{LM}.
$$
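The degeneracy underlying this transition can be checked with a few lines of arithmetic, reading the energies off the diagonal Hamiltonian $\hat{H}_0$ above at the resonance $J_{34} = -(\Delta + 1)J$. The value $\Delta = 5$ below is an arbitrary example choice:

```python
# Diagonal energies (in units of J) of the L and M pair states, taken
# from H_0 above, at the resonance J34 = -(Delta + 1) J.
Delta = 5.0                    # arbitrary example value
J34 = -(Delta + 1)             # J34 / J

E_L = {"dd": Delta, "uu": Delta, "Psi+": -(Delta - 2), "Psi-": -(Delta + 2)}
E_M = {"dd": 0.0, "uu": 0.0, "Psi+": 2 * J34, "Psi-": -2 * J34}

# The three states connected by the resonant transition
E_dduu = E_L["dd"] + E_M["uu"]         # |dd>_L |uu>_M
E_PsiPsi = E_L["Psi-"] + E_M["Psi-"]   # |Psi->_L |Psi->_M
E_uudd = E_L["uu"] + E_M["dd"]         # |uu>_L |dd>_M

# degeneracy -> the transition conserves energy under H_0
assert E_dduu == E_PsiPsi == E_uudd == Delta
```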
However, due to the cold bath the transition will instead become
$$
|\downarrow\downarrow\uparrow\uparrow\rangle_{LM} \leftrightarrow |\Psi_{-}\Psi_{-}\rangle_{LM} \rightarrow |\downarrow\downarrow\Psi_{-}\rangle_{LM}.
$$
This transition leaves the gate spins in the entangled Bell state, which closes the diode. To see that this is the mechanism, the diode is initialized in the state $|\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\rangle$, and the fidelities with the states $|\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\rangle$, $|\Psi_{-}\Psi_{-}\downarrow\downarrow\rangle$ and $|\downarrow\downarrow\Psi_{-}\downarrow\downarrow\rangle$ are plotted in Fig. 5A. The fidelity between a density matrix and a pure state is defined as
$$
F(\hat{\rho}(t), |\Psi\rangle) = \langle\Psi|\hat{\rho}(t)|\Psi\rangle.
$$
For a very large $\Delta = 100$ it is clearly seen how the above transition causes the gate spins to end up in the entangled Bell state. This explains why the parametrization $J_{34} \sim -(\Delta + 1)J$ leads to a small current in reverse bias. A similar calculation can be done for $J_{34} \sim -(\Delta - 1)J$, where the transition becomes
$$
|\downarrow\downarrow\uparrow\uparrow\rangle_{LM} \leftrightarrow |\Psi_+\Psi_-\rangle_{LM} \rightarrow |\downarrow\downarrow\Psi_-\rangle_{LM}.
$$
## Appendix A3. Magnetization Profile
To understand the rectification better, Figs. 5B-C show the magnetization profiles, $\langle\hat{\sigma}_z^{(n)}\rangle$, in forward and reverse bias respectively. The near-linear slope of magnetization in Fig. 5B signifies (diffusive) forward transport [52, 53] while the very abrupt change seen in Fig. 5C shows a strongly insulating behavior in reverse bias.
## Appendix A4. Interference and Entanglement in Reverse Bias
Here we focus on describing the diamond part of the diode (spins 2, 3, 4 and 5), which in reverse bias will be in one of two states, $|\downarrow\Psi_{-}\downarrow\rangle$ or $|\downarrow\Psi_{-}\uparrow\rangle$, where $|\Psi_{-}\rangle = \frac{1}{\sqrt{2}}(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)$ is the Bell state that closes the diode. If we use the Hamiltonian for these four spins,
$$
\hat{H}/J = (1 + \delta)\hat{X}_{23} + \hat{X}_{24} + J_{34}/J\, \hat{X}_{34} + \hat{X}_{45} + \hat{X}_{35},
$$
on the first state, we get
$$
\hat{H}/J\, |\downarrow \Psi_{-} \downarrow\rangle = -2 J_{34}/J\, |\downarrow \Psi_{-} \downarrow\rangle + \sqrt{2}\,\delta\, |\uparrow\downarrow\downarrow\downarrow\rangle.
$$
So for $\delta \to 0$, the single spin excitation cannot propagate, since the terms from $\hat{X}_{23}$ and $\hat{X}_{24}$ interfere destructively, and likewise for the terms coming from $\hat{X}_{45}$ and $\hat{X}_{35}$. For the other state, we get
$$
\begin{align*}
\hat{H}/J\, |\downarrow\Psi_{-}\uparrow\rangle &= -2J_{34}/J\, |\downarrow\Psi_{-}\uparrow\rangle + \sqrt{2}\,\delta\,|\uparrow\downarrow\downarrow\uparrow\rangle + (\hat{X}_{45} + \hat{X}_{35})\frac{1}{\sqrt{2}}(|\downarrow\uparrow\downarrow\uparrow\rangle - |\downarrow\downarrow\uparrow\uparrow\rangle) \\
&= -2J_{34}/J\, |\downarrow\Psi_{-}\uparrow\rangle + \sqrt{2}\,\delta\,|\uparrow\downarrow\downarrow\uparrow\rangle.
\end{align*}
$$
Figure 5. **Rectification mechanism, magnetization profile, sensitivity and decoherence.** (A) The fidelity between the density matrix $\hat{\rho}(t)$ and three different states as a function of time for an initial state $|\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\rangle$, $\delta = 0.1$, $\Delta = 100$ and $J_{34} = -(\Delta + 1)J$. (B) Magnetization profile $\langle\hat{\sigma}_z^{(n)}\rangle$ for the forward-bias steady state with $\delta = 0.01$. (C) Same as B for reverse bias. (D) Rectification $\mathcal{R}$ as a function of $h_i$ (where $h_j = 0$ for $j \neq i$) for $\delta = 0.03$. (E) Rectification $\mathcal{R}$ as a function of $\Delta$ for different values of $T$, where $\delta = 0.1$. Solid lines denote the rectification for a model without error correction, while dashed lines denote a model with error correction. For all plots $\Delta = 5$ (except A and E), $J_{34} = J_{34}^c(\Delta)$ (except A) and $\gamma = J$.
As before, the spin excitation on spins 3 and 4 cannot propagate. However, the spin excitation at spin 5 cannot propagate either. This is due to the destructive interference in the term
$$ (\hat{X}_{45} + \hat{X}_{35}) |\downarrow\Psi_{-}\uparrow\rangle = 0 $$
which is the main mechanism behind the diode.
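This cancellation is easy to verify numerically on the four diamond spins alone. The sketch below assumes $\hat{X}_{ij} = \hat{\sigma}_x^{(i)}\hat{\sigma}_x^{(j)} + \hat{\sigma}_y^{(i)}\hat{\sigma}_y^{(j)}$, consistent with $\hat{X}_{34}|\Psi_-\rangle = -2|\Psi_-\rangle$ used above; helper names are our own:

```python
import numpy as np

# Pauli matrices; basis |up>, |down> per site (a convention choice)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

def kron_all(*ms):
    out = ms[0]
    for m in ms[1:]:
        out = np.kron(out, m)
    return out

def op(p, site, n=4):
    """Single-site operator p acting on `site` of an n-spin register."""
    mats = [I2] * n
    mats[site] = p
    return kron_all(*mats)

def X(i, j, n=4):
    # exchange coupling X_ij = sx_i sx_j + sy_i sy_j
    return op(sx, i, n) @ op(sx, j, n) + op(sy, i, n) @ op(sy, j, n)

# |down, Psi-, up> on spins (2,3,4,5), mapped to local indices (0,1,2,3)
state = (kron_all(dn, up, dn, up) - kron_all(dn, dn, up, up)) / np.sqrt(2)

# X_34 (local indices 1,2) gives the singlet eigenvalue -2
assert np.allclose(X(1, 2) @ state, -2 * state)

# X_45 + X_35 (local indices (2,3) and (1,3)) interfere destructively
assert np.allclose((X(2, 3) + X(1, 3)) @ state, 0)
```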
## Appendix A5. Vertical Symmetry for $\delta = 0$
In the main article, we found that if $\delta \sim 0$, the state
$$ |\Psi_{-}\rangle = \frac{1}{\sqrt{2}} (|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle) $$
on spins 3 and 4 closes the diode so that no spin current is observed. To study the importance of $\delta$, we define a permutation operator $\hat{P}$ that permutes the Hilbert spaces of spins 3 and 4, such that a general operator acting on the system satisfies
$$ \hat{P}^{\dagger}(\hat{A} \otimes \hat{B} \otimes \hat{C} \otimes \hat{D} \otimes \hat{E} \otimes \hat{F})\hat{P} = \hat{A} \otimes \hat{B} \otimes \hat{D} \otimes \hat{C} \otimes \hat{E} \otimes \hat{F} $$
or $\hat{P}^{\dagger}\hat{S}^{(3)}\hat{P} = \hat{S}^{(4)}$. That is, the operator swaps spins 3 and 4. Now consider $\delta = 0$. In this case $[\hat{H}, \hat{P}] = [\hat{\sigma}_{\pm}^{(1)}, \hat{P}] = [\hat{\sigma}_{\pm}^{(6)}, \hat{P}] = 0$. Therefore, $\hat{P}$ defines a conserved quantity [54], meaning that $\langle\hat{P}\rangle$ is conserved. If spins 3 and 4 are in the entangled state $|\Psi_{-}\rangle$, so that the density operator for the system satisfies $\hat{\rho}^{(34)} = \text{tr}_{1,2,5,6}[\hat{\rho}] = |\Psi_{-}\rangle\langle\Psi_{-}|$, then we have $\langle\hat{P}\rangle = -1$. Spins 3 and 4 being in any of the other Bell states,
$$ |\Psi_+\rangle = \frac{1}{\sqrt{2}} (|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle), $$

$$ |\Phi_+\rangle = \frac{1}{\sqrt{2}} (|\downarrow\downarrow\rangle + |\uparrow\uparrow\rangle), $$

$$ |\Phi_-\rangle = \frac{1}{\sqrt{2}} (|\downarrow\downarrow\rangle - |\uparrow\uparrow\rangle), $$
will result in $\langle\hat{P}\rangle = 1$. Since $\langle\hat{P}\rangle$ is conserved, if the system starts in $\hat{\rho}^{(34)} = |\Psi_{-}\rangle\langle\Psi_{-}|$, it stays in this state at all later times. The state $\hat{\rho}^{(34)} = |\Psi_{-}\rangle\langle\Psi_{-}|$ is associated with zero spin current, while any other state results in a nonzero current. Since a diode requires a vanishing current in one bias and a finite current in the other, this symmetry needs to be broken. One way of achieving $[\hat{H}, \hat{P}] \neq 0$ is to have $\delta > 0$.
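The symmetry-breaking role of $\delta$ can be checked directly, here restricted for brevity to the four diamond spins of Appendix A4 (the value $J_{34} = -6$ is an arbitrary example; operator names are our own):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def op(p, site, n=4):
    mats = [I2] * n
    mats[site] = p
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def X(i, j):
    return op(sx, i) @ op(sx, j) + op(sy, i) @ op(sy, j)

def H(delta, J34=-6.0):
    # diamond Hamiltonian on spins (2,3,4,5) -> local indices (0,1,2,3)
    return ((1 + delta) * X(0, 1) + X(0, 2) + J34 * X(1, 2)
            + X(2, 3) + X(1, 3))

P = np.kron(I2, np.kron(SWAP, I2))   # permutes spins 3 and 4

assert np.allclose(P @ H(0.0), H(0.0) @ P)       # delta = 0: [H, P] = 0
assert not np.allclose(P @ H(0.1), H(0.1) @ P)   # delta > 0 breaks the symmetry
```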
## Appendix A6. Sensitivity to External Magnetic Fields
To study the sensitivity of the rectification to magnetic fields, we look at the new Hamiltonian
$$ \hat{H}' = \hat{H} + \sum_{n=1}^{6} h_n \hat{\sigma}_z^{(n)} $$
where $\hat{H}$ is the original Hamiltonian from Eq. (1). The effect of $h_3$ and $h_4$ has already been studied in the main text, so here we focus on the other fields. In Fig. 5D the rectification $\mathcal{R}$ is plotted as a function of $h_1$, $h_2$, $h_5$ and $h_6$, where one field is varied while all the others are set to zero. The system is clearly less sensitive to a magnetic field on spin 1, 2, 5 or 6 than on spins 3 and 4.
## Appendix A7. Decoherence and Protection of the Entangled State
To study how the limited lifetime of the spins affects the rectification, we add both decay and dephasing on all spins. This is done by letting the density matrix evolve as
$$ \frac{\partial \hat{\rho}}{\partial t} = \mathcal{L}[\hat{\rho}] + \frac{1}{T} \sum_{n=1}^{6} \left( \hat{\sigma}_{-}^{(n)} \hat{\rho} \hat{\sigma}_{+}^{(n)} - \frac{1}{2} \left\{ \hat{\sigma}_{+}^{(n)} \hat{\sigma}_{-}^{(n)}, \hat{\rho} \right\} \right) + \frac{1}{4T} \sum_{n=1}^{6} \left( \hat{\sigma}_{z}^{(n)} \hat{\rho} \hat{\sigma}_{z}^{(n)} - \frac{1}{2} \left\{ \hat{\sigma}_{z}^{(n)} \hat{\sigma}_{z}^{(n)}, \hat{\rho} \right\} \right) $$
This ensures that, if $\mathcal{L}[\hat{\rho}] = 0$, the lifetime of all spins for decay ($T_1$) and dephasing ($T_2$) is $T = T_1 = T_2$. The rectification as a function of $\Delta$ is plotted in Fig. 5E for different values of $TJ$. To put this plot into perspective, superconducting circuits have $T_1 \sim T_2 \sim 100\,\mu\text{s}$ and $J/2\pi \approx 50\,\text{MHz}$ [33], resulting in $TJ \sim 3 \cdot 10^4$. Ion-trap based quantum computers have $T_1 \sim T_2 \sim 1\,\text{s}$ and $J/2\pi \sim 1\,\text{kHz}$ [34, 35], resulting in $TJ \sim 10^4$. However, as technology improves and coherence times increase, the rectification and thus the benefit of the diode also increases linearly.
The drop in rectification is mainly due to decoherence of the entangled Bell state $|\Psi_-\rangle$. To protect against this we can employ error correction by forcing the transition $|\downarrow\downarrow\rangle \to |\Psi_-\rangle$. This is done by adding a shadow qubit with a driving that allows the transition. We further add an excitation energy $\Omega$ to all spins, since this will be present in most experimental setups:
$$ \hat{H}' = \hat{H} + A \hat{\sigma}_x^{(3)} \hat{\sigma}_x^{(S)} \cos\{2(\Omega + \omega)t\} + \frac{\Omega}{2} \sum_k \hat{\sigma}_z^{(k)} $$
where the sum is over all spins and $\Omega \gg \omega$. Moving into the interaction picture with respect to the Hamiltonian $\hat{H}_0 = \frac{1}{2}(\Omega + 2\omega)\hat{\sigma}_z^{(S)} + \frac{\Omega}{2}\sum_{k=1}^6 \hat{\sigma}_z^{(k)}$ and performing the rotating-wave approximation on terms rotating with angular frequency $\sim \Omega$, we get
$$
\begin{aligned}
\hat{H}'_t &= e^{i\hat{H}_0 t} (\hat{H}' - \hat{H}_0) e^{-i\hat{H}_0 t} \\
&\approx \hat{H} + A (\hat{\sigma}_+^{(3)} \hat{\sigma}_+^{(S)} + \hat{\sigma}_-^{(3)} \hat{\sigma}_-^{(S)}) - \omega \hat{\sigma}_z^{(S)}.
\end{aligned}
$$
Likewise, the Lindblad equation and the spin current operator can be transformed. We let the shadow qubit decay with rate $\gamma_S = J$ and the coupling be weak, $A = 0.1 J$. If we let $\omega = -J_{34}$, the two gate spins and the shadow qubit undergo the transition
$$ |\downarrow\downarrow\rangle|\downarrow\rangle_S \leftrightarrow |\Psi_-\rangle|\uparrow\rangle_S \rightarrow |\Psi_-\rangle|\downarrow\rangle_S. $$
Numerical simulations show that the best result is achieved for $\omega = (\Delta + 1.2)J$. The rectification with error correction is plotted in Fig. 5E for different coherence times $T$. The error correction works best for short coherence times, where it yields about twice the rectification.

## Appendix A8. Validity of the Master Equation

In this section we want to motivate the master equation (2) used for modeling the diode. Eq. (2) is a local master equation, which is valid if the internal couplings of the system are small compared to the energy levels of spins 1 and 6 [55]. The Hamiltonian (1) does not contain an excitation energy for the individual spins, and the local master equation is not directly applicable for thermal baths. However, we could instead consider the Hamiltonian

$$
\begin{aligned}
\hat{H}' &= \hat{H} + \hat{H}_0, \\
\hat{H}_0 &= \omega \sum_{n=1}^{6} \hat{\sigma}_z^{(n)},
\end{aligned}
$$

where $\hat{H}$ is defined in (1). Since all terms in $\hat{H}$ are energy conserving with respect to $\hat{H}_0$, if we go into the interaction picture with respect to $\hat{H}_0$, the Hamiltonian $\hat{H}$ will remain unchanged. Likewise, we can transform the spin current and master equation, after which we find the same model considered in the main text. If $\omega \gg J, J_{34}, \Delta J$, then the local approach is valid and the results in the main article can be used directly. If $\omega \sim 0$, then the global master equation has to be used instead, which is considered in Appendix A9.
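
The energy-conservation claim can be verified directly: each coupling term commutes with the uniform $\hat{\sigma}_z$ field. A minimal numerical check (the two-spin operators below are illustrative stand-ins for the full six-spin chain):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Two-spin building blocks of the chain Hamiltonian:
# XX coupling X_ij and ZZ coupling Z_ij
X12 = np.kron(sx, sx) + np.kron(sy, sy)
Z12 = np.kron(sz, sz)
# Uniform field term H_0 (with omega = 1) on the two-spin space
H0 = np.kron(sz, I2) + np.kron(I2, sz)

def comm(a, b):
    return a @ b - b @ a
```

Both couplings conserve the total $\hat{\sigma}_z$ excitation number, so both commutators vanish identically.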

## Example model

Here we consider an example where the global master equation reduces to the local master equation. To this end, we consider the diode as part of a bigger system, namely one interacting with two heat baths: one at spin 1 and one at spin 6. We consider reverse bias, where spin 1 interacts with a cold bath and spin 6 with a hot bath. The total system of diode and environment is considered closed, and the Hamiltonian of the three combined systems is

$$
\begin{align*}
\hat{H} &= \hat{H}_0 + \hat{H}_I \\
&= \hat{H}_C \otimes I_D \otimes I_H + I_C \otimes \hat{H}_D \otimes I_H + I_C \otimes I_D \otimes \hat{H}_H + \hat{H}_{CD} \otimes I_H + I_C \otimes \hat{H}_{DH}.
\end{align*}
$$

$\hat{H}_C$ and $\hat{H}_H$ are the Hamiltonians of the cold and hot bath, respectively; their exact form is not important for now. $\hat{H}_D$ is the Hamiltonian of the diode, which together with $\hat{H}_C$ and $\hat{H}_H$ makes up $\hat{H}_0$. $I_C$, $I_D$ and $I_H$ are the identity operators for the cold bath, diode and hot bath, respectively. $\hat{H}_{CD}$ and $\hat{H}_{DH}$ are combined into $\hat{H}_I$ and describe the interaction between the diode and the baths. We assume that each bath contains a large number of spins interacting through the respective bath Hamiltonian: the cold bath contains $N_C$ spins and the hot bath $N_H$. Only a few of these spins interact with the diode through the interaction Hamiltonians, but to keep the treatment general we include all of them:

$$
\begin{align*}
\hat{H}_{CD} &= \sum_{i=1}^{N_C} J_{Ci} (\hat{\sigma}_x^{(Ci)} \hat{\sigma}_x^{(1)} + \hat{\sigma}_y^{(Ci)} \hat{\sigma}_y^{(1)}), \\
\hat{H}_{DH} &= \sum_{i=1}^{N_H} J_{Hi} (\hat{\sigma}_x^{(Hi)} \hat{\sigma}_x^{(6)} + \hat{\sigma}_y^{(Hi)} \hat{\sigma}_y^{(6)}).
\end{align*}
$$

Here $\hat{\sigma}_\alpha^{(Ci)}$ and $\hat{\sigma}_\alpha^{(Hi)}$ for $\alpha = x, y, z$ are spin operators for the spins in the cold and hot bath, respectively, and $\hat{\sigma}_\alpha^{(1)}$ and $\hat{\sigma}_\alpha^{(6)}$ are spin operators for the 1st and 6th spin in the diode. Since we are not interested in the dynamics of the baths, which have many degrees of freedom and are thus very hard to simulate, we trace over the environment to obtain the density matrix of just the diode.

If we assume that the total system starts in a product state $\hat{\rho}(0) = \hat{\rho}_C \otimes \hat{\rho}_D(0) \otimes \hat{\rho}_H$, the system will evolve as

$$
\hat{\rho}_D(t) = \text{tr}_{C,H}\{\hat{U}(t,0)(\hat{\rho}_C \otimes \hat{\rho}_D(0) \otimes \hat{\rho}_H)\hat{U}(t,0)^{\dagger}\}, \quad (7)
$$

where $\text{tr}_{C,H}\{\cdot\}$ is a partial trace over the baths, and $\hat{U}(t,0)$ is the time-evolution operator. Next, we assume that the cold bath has undergone a phase transition such that all its spins point down, $\hat{\rho}_C = |{\downarrow}\cdots{\downarrow}\rangle\langle{\downarrow}\cdots{\downarrow}| \equiv |\downarrow\rangle\langle\downarrow|$. This could describe a cold bath, but depending on the nature of the cold bath Hamiltonian $\hat{H}_C$ this phase transition could also occur at higher temperatures. We also assume that the hot bath is described by a thermal state

$$
\hat{\rho}_{H}=\frac{\exp (-\beta \hat{H}_{H})}{\operatorname{tr}_{H}\left[\exp (-\beta \hat{H}_{H})\right]},
$$

where $\beta = 1/T$ is the inverse temperature of the hot bath ($k_B = 1$). For $T \gg 1$, we see that $\hat{\rho}_H \approx 2^{-N_H} I_H$. If we neglect all correlations between the cold and hot bath and omit the Lamb and Stark shift contributions, one version of the Redfield equation takes the form [31]

$$
\frac{d\hat{\rho}_D}{dt} = -i[\hat{H}_D, \hat{\rho}_D] + \mathcal{D}_C[\hat{\rho}_D] + \mathcal{D}_H[\hat{\rho}_D], \quad (8)
$$

where

$$
\mathcal{D}_{C/H}[\hat{\rho}_D] = \frac{1}{2} \sum_{\omega,\omega'} \sum_{\alpha,\beta \in \{x,y\}} \gamma_{\alpha\beta}^{C/H}(\omega) (\hat{A}_{\alpha}^{(1)/(6)}(\omega) \hat{\rho}_D \hat{A}_{\beta}^{(1)/(6)\dagger}(\omega') - \hat{A}_{\beta}^{(1)/(6)\dagger}(\omega') \hat{A}_{\alpha}^{(1)/(6)}(\omega) \hat{\rho}_D) + \text{h.c.}
$$

Here h.c. denotes the Hermitian conjugate of the expression. The operators $\hat{A}_{\alpha}^{(1)/(6)}(\omega)$ are eigen-operators of $\hat{H}_D$ defined as

$$ \hat{A}_{\alpha}^{(1)/(6)}(\omega) = \sum_{\epsilon' - \epsilon = \omega} \hat{\Pi}(\epsilon) \hat{\sigma}_{\alpha}^{(1)/(6)} \hat{\Pi}(\epsilon') $$

with $\hat{\Pi}(\epsilon)$ being the projection operator onto the space of eigenstates of $\hat{H}_D$ with eigenenergy $\epsilon$. The coupling strengths are given by

$$ \gamma_{\alpha\beta}^{C}(\omega) = \sum_{i,j=1}^{N_C} J_{Ci}J_{Cj} \int_{-\infty}^{\infty} ds\, e^{i\omega s}\, \text{tr}_C\{\hat{\sigma}_{\beta}^{(Ci)}(s)\hat{\sigma}_{\alpha}^{(Cj)}(0)\hat{\rho}_C\}, $$

$$ \gamma_{\alpha\beta}^{H}(\omega) = \sum_{i,j=1}^{N_H} J_{Hi}J_{Hj} \int_{-\infty}^{\infty} ds\, e^{i\omega s}\, \text{tr}_H\{\hat{\sigma}_{\beta}^{(Hi)}(s)\hat{\sigma}_{\alpha}^{(Hj)}(0)\hat{\rho}_H\}. $$

The spin operators are here written in the interaction picture with respect to $\hat{H}_0$,

$$
\begin{aligned}
\hat{\sigma}_{\alpha}^{(Ci/Hi)}(t) &= e^{i\hat{H}_0 t}\hat{\sigma}_{\alpha}^{(Ci/Hi)}e^{-i\hat{H}_0 t} \\
&= e^{i\hat{H}_{C/H} t}\hat{\sigma}_{\alpha}^{(Ci/Hi)}e^{-i\hat{H}_{C/H} t}.
\end{aligned}
$$

The notation $C/H$ is used as shorthand for two equations, one with $C$ and one with $H$. This version of the Redfield equation is valid within the Born-Markov approximation, where it is assumed that the bath correlation functions decay quickly compared to the relaxation time of the diode. First, we look at the dissipator $\mathcal{D}_C[\hat{\rho}_D]$ of the cold bath. If $\hat{H}_C$ is spin preserving, then $\hat{\rho}_C = |\downarrow\rangle\langle\downarrow|$ must be an eigenstate of $\hat{H}_C$ with some eigenenergy $\epsilon_\downarrow$. Using this we note that

$$ \text{tr}_C\{\hat{\sigma}_{\beta}^{(Ci)}(s)\hat{\sigma}_{\alpha}^{(Cj)}(0)\hat{\rho}_C\} = \langle\downarrow|\, \hat{\sigma}_{\beta}^{(Ci)} e^{-i(\hat{H}_C - \epsilon_\downarrow)s}\, \hat{\sigma}_{\alpha}^{(Cj)} |\downarrow\rangle. $$

The spin operators satisfy the property $\hat{\sigma}_y^{(Ci)}|\downarrow\rangle = -i\hat{\sigma}_x^{(Ci)}|\downarrow\rangle$, such that all coupling strengths can be written in terms of $\gamma_{xx}^C(\omega)$:

$$ \gamma_{yy}^C(\omega) = \gamma_{xx}^C(\omega), \quad \gamma_{xy}^C(\omega) = \gamma_{yx}^{C*}(\omega) = i\gamma_{xx}^C(\omega). $$

This can be summarized as $\gamma_{\alpha\beta}^C(\omega) = c_{\alpha\beta}\gamma_{xx}^C(\omega)$ for an appropriate matrix $c_{\alpha\beta}$. We further assume that $\gamma_{xx}^C(\omega)$ is constant over the sum carried out in $\mathcal{D}_C[\hat{\rho}_D]$, and therefore we omit the $\omega$ dependence and write $\gamma_{xx}^C$. The dissipator thus becomes

$$
\begin{align*}
\mathcal{D}_C[\hat{\rho}_D] &= \frac{1}{2}\gamma_{xx}^C \sum_{\alpha,\beta \in \{x,y\}} c_{\alpha\beta} \sum_{\omega,\omega'} (\hat{A}_{\alpha}^{(1)}(\omega)\hat{\rho}_D \hat{A}_{\beta}^{(1)\dagger}(\omega') - \hat{A}_{\beta}^{(1)\dagger}(\omega')\hat{A}_{\alpha}^{(1)}(\omega)\hat{\rho}_D) + \text{h.c.} \\
&= \frac{1}{2}\gamma_{xx}^C \sum_{\alpha,\beta \in \{x,y\}} c_{\alpha\beta} (\hat{\sigma}_{\alpha}^{(1)}\hat{\rho}_D \hat{\sigma}_{\beta}^{(1)} - \hat{\sigma}_{\beta}^{(1)}\hat{\sigma}_{\alpha}^{(1)}\hat{\rho}_D) + \text{h.c.} \\
&= \gamma_{xx}^C \sum_{\alpha,\beta \in \{x,y\}} c_{\alpha\beta} \left( \hat{\sigma}_{\alpha}^{(1)}\hat{\rho}_D \hat{\sigma}_{\beta}^{(1)} - \frac{1}{2}\{\hat{\sigma}_{\beta}^{(1)}\hat{\sigma}_{\alpha}^{(1)}, \hat{\rho}_D\} \right),
\end{align*}
$$

where we have used the completeness relation $\sum_\epsilon \hat{\Pi}(\epsilon) = I_D$ to write

$$
\begin{align*}
\sum_\omega \hat{A}_\alpha^{(1)}(\omega) &= \sum_\omega \sum_{\epsilon' - \epsilon = \omega} \hat{\Pi}(\epsilon) \hat{\sigma}_\alpha^{(1)} \hat{\Pi}(\epsilon') \\
&= \sum_{\epsilon', \epsilon} \hat{\Pi}(\epsilon) \hat{\sigma}_\alpha^{(1)} \hat{\Pi}(\epsilon') \\
&= \hat{\sigma}_\alpha^{(1)}.
\end{align*}
$$

Through diagonalization of $c_{\alpha\beta}$ we get the form

$$ \mathcal{D}_C[\hat{\rho}_D] = 4 \gamma_{xx}^C \left( \hat{\sigma}_{-}^{(1)} \hat{\rho}_D \hat{\sigma}_{+}^{(1)} - \frac{1}{2} \left\{ \hat{\sigma}_{+}^{(1)} \hat{\sigma}_{-}^{(1)}, \hat{\rho}_D \right\} \right). $$
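
As an illustration, this decay dissipator can be applied numerically to a single qubit; trace preservation and the expected dark state $|\downarrow\rangle$ follow immediately (a minimal sketch, with the prefactor $4\gamma_{xx}^C$ set to 1):

```python
import numpy as np

# Basis: |up> = (1, 0), |down> = (0, 1)
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_+
sm = sp.conj().T                                 # sigma_-

def dissipator(rho, L, rate=1.0):
    """Lindblad dissipator D[rho] = rate*(L rho L^+ - 1/2 {L^+ L, rho})."""
    LdL = L.conj().T @ L
    return rate * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # some valid state
drho = dissipator(rho, sm)                               # cold-bath decay channel
down = np.array([[0, 0], [0, 1]], dtype=complex)         # |down><down|
```

The channel is trace preserving for any state, and $|\downarrow\rangle\langle\downarrow|$ is its fixed point, consistent with the cold bath driving spin 1 toward its ground state.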

Next, we look at the dissipator $\mathcal{D}_H[\hat{\rho}_D]$ of the hot bath. Remembering that the state of the hot bath is $\hat{\rho}_H = 2^{-N_H} I_H$, we see that

$$ \text{tr}_H\{\hat{\sigma}_{\beta}^{(Hi)}(s)\hat{\sigma}_{\alpha}^{(Hj)}(0)\hat{\rho}_H\} = 2^{-N_H}\text{tr}_H\{e^{i\hat{H}_H s}\hat{\sigma}_{\beta}^{(Hi)}e^{-i\hat{H}_H s}\hat{\sigma}_{\alpha}^{(Hj)}\}. $$

We have already assumed that the correlation functions are sharply peaked around $s = 0$. Since the cross correlations vanish at $s = 0$, $\text{tr}_H\{\hat{\sigma}_x^{(Hi)} \hat{\sigma}_y^{(Hi)}\} = 0$, we can expect them to be zero at all later times as well. Thus we conclude that $\gamma_{xy}^H(\omega) \approx \gamma_{yx}^H(\omega) \approx 0$, and these terms are neglected. Once again we assume $\gamma_{xx}^H(\omega)$ and $\gamma_{yy}^H(\omega)$ to be constant over the sum carried out in $\mathcal{D}_H[\hat{\rho}_D]$, and we thus omit the $\omega$ dependence. The dissipator then becomes

$$
\begin{align*}
\mathcal{D}_H[\hat{\rho}_D] &= \frac{1}{2} \sum_{\alpha \in \{x,y\}} \gamma_{\alpha\alpha}^H \sum_{\omega, \omega'} (\hat{A}_{\alpha}^{(6)}(\omega) \hat{\rho}_D \hat{A}_{\alpha}^{(6)\dagger}(\omega') - \hat{A}_{\alpha}^{(6)\dagger}(\omega') \hat{A}_{\alpha}^{(6)}(\omega) \hat{\rho}_D) + \text{h.c.} \\
&= \frac{1}{2} \sum_{\alpha \in \{x,y\}} \gamma_{\alpha\alpha}^H (\hat{\sigma}_{\alpha}^{(6)} \hat{\rho}_D \hat{\sigma}_{\alpha}^{(6)} - \hat{\sigma}_{\alpha}^{(6)} \hat{\sigma}_{\alpha}^{(6)} \hat{\rho}_D) + \text{h.c.} \\
&= \sum_{\alpha \in \{x,y\}} \gamma_{\alpha\alpha}^H \left( \hat{\sigma}_{\alpha}^{(6)} \hat{\rho}_D \hat{\sigma}_{\alpha}^{(6)} - \frac{1}{2} \{\hat{\sigma}_{\alpha}^{(6)} \hat{\sigma}_{\alpha}^{(6)}, \hat{\rho}_D\} \right).
\end{align*}
$$

Again we can write this in terms of $\hat{\sigma}_+^{(6)}$ and $\hat{\sigma}_-^{(6)}$ instead:

$$
\begin{align*}
\mathcal{D}_H[\hat{\rho}_D] ={}& (\gamma_{xx}^H - \gamma_{yy}^H) \sum_{\alpha \in \{+, -\}} \left( \hat{\sigma}_\alpha^{(6)} \hat{\rho}_D \hat{\sigma}_\alpha^{(6)} - \frac{1}{2} \{\hat{\sigma}_\alpha^{(6)} \hat{\sigma}_\alpha^{(6)}, \hat{\rho}_D \} \right) \\
& + (\gamma_{xx}^H + \gamma_{yy}^H) \left( \hat{\sigma}_+^{(6)} \hat{\rho}_D \hat{\sigma}_-^{(6)} - \frac{1}{2} \{\hat{\sigma}_-^{(6)} \hat{\sigma}_+^{(6)}, \hat{\rho}_D \} + \hat{\sigma}_-^{(6)} \hat{\rho}_D \hat{\sigma}_+^{(6)} - \frac{1}{2} \{\hat{\sigma}_+^{(6)} \hat{\sigma}_-^{(6)}, \hat{\rho}_D \} \right).
\end{align*}
$$

Thus we recover the master equation (2) in the special case where $\gamma = 4\gamma_{xx}^C = 4\gamma_{xx}^H = 4\gamma_{yy}^H$.
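
For $\gamma_{xx}^H = \gamma_{yy}^H$, the first bracket vanishes and the hot-bath channel pumps excitation and decay at equal rates; on a single qubit its fixed point is the maximally mixed state, as expected for an infinite-temperature bath. A quick numerical check (rates set to 1 for illustration):

```python
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_+
sm = sp.conj().T                                 # sigma_-

def lindblad_term(rho, L):
    """D[rho] = L rho L^+ - 1/2 {L^+ L, rho} for one jump operator L."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def hot_bath(rho):
    # equal-rate excitation and decay, as for the infinite-temperature bath
    return lindblad_term(rho, sp) + lindblad_term(rho, sm)

mixed = 0.5 * np.eye(2, dtype=complex)  # maximally mixed qubit state
```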

## Alternative approach (strong coupling)

A more straightforward approach is to start at equation (7) and make the Markov approximation. First, we define a dynamical map $V(t, 0)$ analogous to the time-evolution operator:

$$
\begin{align*}
\hat{\rho}_D(t) &= V(t, 0)\hat{\rho}_D(0) \\
&= \operatorname{tr}_{C,H}\{\hat{U}(t, 0)(\hat{\rho}_C \otimes \hat{\rho}_D(0) \otimes \hat{\rho}_H)\hat{U}(t, 0)^{\dagger}\}.
\end{align*}
$$

The Markov approximation then takes the form $V(t_3, t_1) = V(t_3, t_2)V(t_2, t_1)$. This means that the total state will at all times be a product state $\hat{\rho}(t) = \hat{\rho}_C \otimes \hat{\rho}_D(t) \otimes \hat{\rho}_H$ and that the baths are unaffected by the interaction with the diode [31]. Using this, the master equation can be found through

$$
\frac{d}{dt}\hat{\rho}_D(t) = \lim_{\Delta t \to 0} \frac{V(t+\Delta t, t)\hat{\rho}_D(t) - \hat{\rho}_D(t)}{\Delta t}.
$$

Expanding $V(t + \Delta t, t)$ to order $\Delta t^2$ and using the bath states from before, $\hat{\rho}_C = |\downarrow\rangle\langle\downarrow|$ and $\hat{\rho}_H = 2^{-N_H}I_H$, we get the master equation

$$
\begin{align*}
\frac{d}{dt}\hat{\rho}_D(t) &= -i[\hat{H}_D, \hat{\rho}_D(t)] + \mathcal{D}_C[\hat{\rho}_D] + \mathcal{D}_H[\hat{\rho}_D], \\
\mathcal{D}_C[\hat{\rho}_D] &= \gamma\left(\hat{\sigma}_-^{(1)}\hat{\rho}_D\hat{\sigma}_+^{(1)} - \frac{1}{2}\{\hat{\sigma}_+^{(1)}\hat{\sigma}_-^{(1)}, \hat{\rho}_D\}\right), \\
\mathcal{D}_H[\hat{\rho}_D] &= \frac{\gamma}{2}\left(\hat{\sigma}_+^{(6)}\hat{\rho}_D\hat{\sigma}_-^{(6)} - \frac{1}{2}\{\hat{\sigma}_-^{(6)}\hat{\sigma}_+^{(6)}, \hat{\rho}_D\} + \hat{\sigma}_-^{(6)}\hat{\rho}_D\hat{\sigma}_+^{(6)} - \frac{1}{2}\{\hat{\sigma}_+^{(6)}\hat{\sigma}_-^{(6)}, \hat{\rho}_D\}\right),
\end{align*}
$$

where this time we get the coupling strength

$$
\gamma = 4 \lim_{\Delta t \to 0} \Delta t \sum_{i=1}^{N_C} J_{Ci}^2 = 4 \lim_{\Delta t \to 0} \Delta t \sum_{i=1}^{N_H} J_{Hi}^2.
$$

Thus for $\gamma$ to be non-zero, we need singular coupling strengths $J_{Ci}$ and $J_{Hi}$.

Figure 6. **Heat diode rectification magnitudes.** (A) $\mathcal{R}_Q$ as a function of $h$ and $J_{34}$ for $\delta = 0.01$. (B) $\mathcal{R}_Q$ as a function of $h$ for different values of $\delta$ and $J_{34} = J_{34}^Q(h)$ (solid lines). The dashed line displays $\mathcal{R}_Q$ for a linear chain with spin 3 removed. (C) Steady-state currents $\mathcal{K}_f$ and $\mathcal{K}_r$ for $\delta = 0.01$ and $J_{34} = J_{34}^Q(h)$. For all three plots $\gamma = J$, $T_C = 0.1J$ and $T_H = 10.1J$.

## Appendix A9. Heat Diode Using the Global Master Equation

In this section we explore how the diode proposed in the main text can also be used as a heat rectifier, using a global master equation. First, we change the diode Hamiltonian (1) slightly such that we still have the energy gap created by the ZZ coupling but break the spin-flip symmetry. This is done by defining the new Hamiltonian

$$ \hat{H}_Q/J = \hat{X}_{12} + (1+\delta)\hat{X}_{23} + \hat{X}_{24} + (J_{34}/J)\hat{X}_{34} + \hat{X}_{35} + \hat{X}_{45} + \hat{X}_{56} + (h/J)(\hat{\sigma}_z^{(1)} + \hat{\sigma}_z^{(2)}), $$

such that the energy gap is now created by a local magnetic field on spins 1 and 2, described by $h$. This Hamiltonian is the same as $\hat{H}_1$ explored in Appendix A10. However, in this section we couple spins 1 and 6 to two thermal baths and use the global master equation, where the baths address the eigenstates of the total system instead of just those of spins 1 and 6. Here it does not make sense to define a spin current, so instead we examine how heat is transferred through the diode. This is again done through the master equation [31]

$$ \frac{\partial \hat{\rho}}{\partial t} = -i[\hat{H}_Q, \hat{\rho}] + \mathcal{D}_1[\hat{\rho}] + \mathcal{D}_6[\hat{\rho}], $$

where the dissipators are now defined as

$$ \mathcal{D}_n[\hat{\rho}] = \frac{1}{2} \sum_{\omega, \omega'}^{|\omega-\omega'| \ll \tau_R^{-1}} \gamma_n(\omega) (\hat{A}_n(\omega)\hat{\rho}\hat{A}_n^\dagger(\omega') - \hat{A}_n^\dagger(\omega')\hat{A}_n(\omega)\hat{\rho}) + \text{h.c.} $$

for $n \in \{1, 6\}$. Here the sum runs over all pairs of frequencies for which $|\omega - \omega'|$ is not much greater than the inverse relaxation time $\tau_R^{-1}$ of the diode. This is due to the secular approximation, which essentially comes from a rotating wave approximation. The operators $\hat{A}_n(\omega)$ are eigen-operators of $\hat{H}_Q$ defined as

$$ \hat{A}_n(\omega) = \sum_{\epsilon' - \epsilon = \omega} \hat{\Pi}(\epsilon)\hat{\sigma}_x^{(n)}\hat{\Pi}(\epsilon') $$

with $\hat{\Pi}(\epsilon)$ being the projection operator onto the space of eigenstates of $\hat{H}_Q$ with eigenenergy $\epsilon$. The sum is carried out over all pairs of projection operators $\hat{\Pi}(\epsilon)$ and $\hat{\Pi}(\epsilon')$ with energy difference $\omega = \epsilon' - \epsilon$. These operators describe the transitions induced by the baths, with coupling strength

$$ \gamma_n(\omega) = \begin{cases} J(\omega)(1+N_n(\omega)) & \omega \ge 0 \\ J(\omega)N_n(\omega) & \omega < 0 \end{cases}. $$

Figure 7. **Heat diode bath parameters.** The rectification $R_Q$ plotted for different cold bath temperatures $T_C$ with hot bath temperature $T_H = T_C + \Delta T$. First $h$ is varied keeping $\Delta T = 10J$ (A), and next $\Delta T$ is varied keeping $h = 5J$ (B). For both plots $J_{34} = J_{34}^Q(h)$, $\gamma = J$ and $\delta = 0.01$.

$N_n(\omega) = (\exp(|\omega|/T_n) - 1)^{-1}$ is the Bose-Einstein distribution describing the mean number of phonons in the bath mode with frequency $\omega$, and $J(\omega)$ is the spectral function. Here we consider an ohmic bath, for which $J(\omega) = \gamma|\omega|$. We let the cold bath have temperature $T_C$ and the hot bath temperature $T_H$. Like before, we denote $T_H = T_1 > T_6 = T_C$ as forward bias and $T_C = T_1 < T_6 = T_H$ as reverse bias.
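
These rates are simple to implement, and detailed balance, $\gamma_n(\omega)/\gamma_n(-\omega) = e^{\omega/T_n}$, which is what biases the steady state toward thermal occupation, follows immediately (a minimal sketch, with the overall prefactor `g` as a free parameter):

```python
import math

def bose_einstein(omega, T):
    """Mean phonon number N(omega) at temperature T (k_B = 1)."""
    return 1.0 / (math.exp(abs(omega) / T) - 1.0)

def gamma_n(omega, T, g=1.0):
    """Bath rate for an ohmic spectral function J(omega) = g*|omega|."""
    J = g * abs(omega)
    N = bose_einstein(omega, T)
    return J * (1.0 + N) if omega >= 0 else J * N

# Detailed balance: gamma(omega)/gamma(-omega) = exp(omega/T)
w, T = 2.0, 0.7
ratio = gamma_n(w, T) / gamma_n(-w, T)
```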

The total change in the mean energy of the diode is given by

$$ \frac{dE}{dt} = \frac{d}{dt} \mathrm{tr}\{\hat{H}_Q \hat{\rho}\} = \left\langle \frac{d\hat{H}_Q}{dt} \right\rangle + \mathrm{tr}\left\{ \hat{H}_Q \frac{d\hat{\rho}}{dt} \right\}. $$

The first part is interpreted as the work done on the diode; since the Hamiltonian is constant, this is zero. The second part is interpreted as the total heat going into the system. In the steady state $d\hat{\rho}_{ss}/dt = 0$, and therefore the total heat exchanged between the diode and the baths is zero. However, by noting that

$$ \begin{aligned} 0 &= \operatorname{tr} \left\{ \hat{H}_Q \frac{d\hat{\rho}_{ss}}{dt} \right\} \\ &= \operatorname{tr}\{\hat{H}_Q \mathcal{D}_1 [\hat{\rho}_{ss}] \} + \operatorname{tr}\{\hat{H}_Q \mathcal{D}_6 [\hat{\rho}_{ss}] \}, \end{aligned} $$

we can define the heat current as the heat exchanged between the diode and the bath interacting with spin 1:

$$ K = \operatorname{tr}\{\hat{H}_Q \mathcal{D}_1 [\hat{\rho}_{ss}]\} = -\operatorname{tr}\{\hat{H}_Q \mathcal{D}_6 [\hat{\rho}_{ss}]\}. $$
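
For intuition, this sign convention and the steady-state balance can be reproduced on a toy model: a single qubit between a cold and a hot thermal bath (the occupations `N1`, `N2` and unit rates below are illustrative choices, not the diode's parameters):

```python
import numpy as np

w0 = 1.0                                   # qubit splitting
sz = np.diag([1.0, -1.0]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
H = 0.5 * w0 * sz

def D(rho, L, rate):
    LdL = L.conj().T @ L
    return rate * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))

def bath(rho, N, g=1.0):
    # thermal bath: decay at rate g(1+N), excitation at rate g N
    return D(rho, sm, g * (1 + N)) + D(rho, sp, g * N)

N1, N2 = 0.1, 2.0                          # cold and hot bath occupations
# The steady state is diagonal (it commutes with H), with excited
# population set by balancing total excitation against total decay:
u, d = N1 + N2, (1 + N1) + (1 + N2)
p = u / (u + d)
rho_ss = np.diag([p, 1 - p]).astype(complex)

K1 = np.trace(H @ bath(rho_ss, N1)).real   # heat current via cold bath
K2 = np.trace(H @ bath(rho_ss, N2)).real   # heat current via hot bath
```

In the steady state the two currents cancel, $K_1 = -K_2$, with heat flowing in from the hot bath ($K_2 > 0$) and out through the cold bath ($K_1 < 0$).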

Again, we denote the heat current in forward bias by $K_f$ and in reverse bias by $K_r$. Likewise, we define the rectification as

$$ R_Q = -\frac{K_f}{K_r}. $$

The heat current rectification of the diode is shown in Fig. 6 as a function of the relevant parameters. The contour plot in Fig. 6A shows $R_Q$ for a small vertical symmetry breaking of $\delta = 0.01$ as a function of $J_{34}$ and $h$. Unlike in the spin current case, we clearly see multiple resonances that make the plot chaotic for small $J_{34}$ and $h$. However, in the upper right corner many of the resonances merge and create thicker, more stable lines of large rectification ($> 10^8$). We note that the region of largest $R_Q$ follows the same parametrization as before, which we call $J_{34}^Q(h) = h + 1.3J$. Fig. 6B demonstrates the dependence on $\delta$ along these lines, showing once again that $\delta \ll 1$ gives the highest $R_Q$. For comparison, the dashed line in Fig. 6B shows the rectification when spin 3 in Fig. 1 is removed. In Fig. 6C we confirm that the rectification is indeed due to a drop in the reverse bias heat current. In Fig. 7 the rectification is studied as a function of the bath parameters. In Fig. 7A it can be seen that the largest rectification is achieved for $T_C < J$. Since $J$ sets the energy scale of the diode, for $T_C < J$ the cold bath will predominantly induce decay, while the hot bath will induce both decay and excitation in the energy levels. Therefore, we expect a better diode for smaller $T_C$. In Fig. 7B we see that the rectification is stable over the first order of magnitude in $\Delta T$ but decreases slightly for very large $\Delta T$. Note that if a large excitation energy is added, $\hat{H}_Q \rightarrow \hat{H}_Q + \omega \sum_{i=1}^6 \hat{\sigma}_z^{(i)}$ with $\omega \gg h, J_{34}, J$, then $K \approx J\omega$ and similar rectification values are achieved for both the heat and spin current.

## Appendix A10. Alternative Versions of the Diode

So far we have explored one configuration of the setup that exhibits large rectification, given by the Hamiltonian (1). However, many other sets of parameters work just as well, some of which lend themselves more readily to different physical implementations. Here we explore some of these alternative versions.

Figure 8. **Alternative versions and scalability.** **(A)** $\mathcal{R}$ for the two alternative versions of the diode defined by $\hat{H}_1$ and $\hat{H}_2$. For $\hat{H}_1$ we vary $h$ with $J_{34} = h + 1.3J$, while for $\hat{H}_2$ we vary $\Delta$ with $J_{34} = (\Delta + 1.3)J$. **(B)** Contrast $C$, fidelity $F(\hat{\rho}_{ss,r}^{(34)}, \hat{\rho}_+)$ and concurrence $\mathcal{T}(\hat{\rho}_{ss,r}^{(34)})$ (see text) as a function of $\Delta$ for the Hamiltonian $\hat{H}_2$ with the parametrization $J_{34} = (\Delta + 1.3)J$. **(C)** Rectification $\mathcal{R}$ as a function of $\Delta$ for the two extensions corresponding to the Hamiltonians $\hat{H}_{-XX}$ and $\hat{H}_{XX-}$, where $J_{34} = J_{34}^C(\Delta)$. **(D)** Rectification $\mathcal{R}$ as a function of $\Delta$ and $J_{34}$ for the Hamiltonian $\hat{H}_{XXZ-}$. For all plots $\delta = 0.01$ and $\gamma = J$.

So far we have chosen $\Delta > 0$, which led to the critical value $J_{34} = -(\Delta + 1.3)J$. Alternatively, we could have chosen $\Delta < 0$, in which case we would have obtained the critical value $J_{34} = (-\Delta + 1.3)J$. This corresponds to the transformation $(\Delta, J_{34}) \rightarrow (-\Delta, -J_{34})$. A more general parametrization would therefore be

$$
J_{34} = \begin{cases} (-\Delta + 1.3)J & \Delta < 0 \\ -(\Delta + 1.3)J & \Delta > 0 \end{cases}.
$$

From Fig. 2A it is seen that even more parametrizations give large rectification. However, for simplicity we will stop here.

The purpose of the Z-coupling between spins 1 and 2 is to create an energy gap between the state where both spins are down and the state where one spin excitation is present. This energy gap can instead be created with local magnetic fields. Thus we may define the new Hamiltonian

$$
\hat{H}_1/J = \hat{X}_{12} + (1+\delta)\hat{X}_{23} + \hat{X}_{24} + (J_{34}/J)\hat{X}_{34} + \hat{X}_{35} + \hat{X}_{45} + \hat{X}_{56} + (h/J)(\hat{\sigma}_z^{(1)} + \hat{\sigma}_z^{(2)}).
$$

With this Hamiltonian the parametrization becomes

$$
J_{34} = \begin{cases} -(-h + 1.3J) & h < 0 \\ h + 1.3J & h > 0 \end{cases}.
$$

The sign difference between this and the previous parametrization is due to how a Z-coupling and a magnetic field create the energy gap. If two spins are coupled through a Z-coupling of strength $\Delta J$, then the energy gap between the state $|\downarrow\downarrow\rangle$ and the state $|\downarrow\uparrow\rangle$ is $-2\Delta J$. If these two spins are instead coupled to a magnetic field of strength $h$, then the energy gap between the same two states is $2h$. Thus, going from a Z-coupling to a local magnetic field, we need to set $h = -\Delta J$. The rectification for this model is plotted in Fig. 8A (black line), showing that it gives rectification of the same order of magnitude as the original model.

Another, more subtle, alternative version is defined by the Hamiltonian

$$
\hat{H}_2 / J = \hat{X}_{12} - (1 + \delta)\hat{X}_{23} + \hat{X}_{24} + (J_{34}/J)\hat{X}_{34} - \hat{X}_{35} + \hat{X}_{45} + \hat{X}_{56} + \Delta\hat{Z}_{12}.
$$

For this Hamiltonian the parametrization becomes

$$
J_{34} = \begin{cases} -(-\Delta + 1.3)J & \Delta < 0 \\ (\Delta + 1.3)J & \Delta > 0 \end{cases}.
$$

To explain this, we note that the state $|\Psi_-\rangle$ no longer closes the diode. Instead, one can go through the steps of Appendix A4 to show that the state $|\Psi_+\rangle$ now causes the diode to close. Therefore, the roles of $|\Psi_-\rangle$ and $|\Psi_+\rangle$ are interchanged, such that $|\Psi_-\rangle$ now needs to be in resonance with the rest of the diode. This is ensured by letting $J_{34} \rightarrow -J_{34}$, which leads to the above parametrization. To illustrate this, we have plotted $\mathcal{R}$ for this version in Fig. 8A (red line). Furthermore, the contrast $C$ (see main text) and the fidelity between $\hat{\rho}_{+} = |\Psi_{+}\rangle\langle\Psi_{+}|$ and the reduced steady-state density matrix of spins 3 and 4 in reverse bias, $\hat{\rho}_{ss,r}^{(34)} = \operatorname{tr}_{1,2,5,6}[\hat{\rho}_{ss,r}]$, are plotted in Fig. 8B. The entanglement between spins 3 and 4 can be quantified more generally using the concurrence [56]

$$ \mathcal{T}(\hat{\rho}_{ss,r}^{(34)}) = \max(0, \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4), $$

where $\lambda_1, ..., \lambda_4$ are the square roots of the eigenvalues, in decreasing order, of the non-Hermitian matrix

$$ \hat{\rho}_{ss,r}^{(34)} (\hat{\sigma}_y^{(3)} \hat{\sigma}_y^{(4)}) \hat{\rho}_{ss,r}^{(34)*} (\hat{\sigma}_y^{(3)} \hat{\sigma}_y^{(4)}). $$

This is a widely used measure of entanglement, which equals 1 only for a maximally entangled state. The concurrence is also plotted in Fig. 8B. This model does indeed exhibit large rectification, and Fig. 8B shows that this is now due to the maximally entangled state $|\Psi_+\rangle$ building up in the junction composed of spins 3 and 4. This model would of course also work with a magnetic field instead of the Z-coupling.
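
The concurrence is straightforward to implement and check against the two limiting cases (a minimal sketch; the steady-state matrix $\hat{\rho}_{ss,r}^{(34)}$ itself would come from the full diode simulation, so a Bell state and a product state stand in for it here):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
syy = np.kron(sy, sy)  # sigma_y (x) sigma_y spin-flip operator

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    rho_tilde = syy @ rho.conj() @ syy            # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
    lam = np.sort(lam.real)[::-1]                 # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |Psi_+> = (|01> + |10>)/sqrt(2): maximally entangled
psi_plus = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
rho_plus = np.outer(psi_plus, psi_plus.conj())
# product state |00>: unentangled
rho_prod = np.zeros((4, 4), dtype=complex)
rho_prod[0, 0] = 1.0
```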

## Appendix A11. Scalability of the Diode

To test the scalability of the diode proposed in the main text, we can look at possible extensions with 7 spins. There are three obvious choices. First, one could add a 7th spin at the end of the chain with an XX coupling of strength $J$ between the 6th and 7th spin. The new Hamiltonian is then given in terms of the original Hamiltonian $\hat{H}$ defined in Eq. (1):

$$ \hat{H}_{-XX}/J = \hat{H}/J + \hat{X}_{67}. $$

This new system is coupled to the heat baths through spins 1 and 7, replacing $\mathcal{D}_6[\hat{\rho}]$ with $\mathcal{D}_7[\hat{\rho}]$ in Eq. (2). Alternatively, one could put a 0th spin at the beginning of the chain with an XX coupling of strength $J$ between the 0th and 1st spin. Thus we define the Hamiltonian

$$ \hat{H}_{XX-}/J = \hat{H}/J + \hat{X}_{01}. $$

The last obvious extension is again an extra 0th spin at the beginning of the chain, but this time with an XXZ coupling, giving the Hamiltonian

$$ \hat{H}_{XXZ-}/J = \hat{H}/J + \hat{X}_{01} + \Delta\hat{Z}_{01}. $$

The last two systems are coupled to the heat baths through spins 0 and 6, replacing $\mathcal{D}_1[\hat{\rho}]$ with $\mathcal{D}_0[\hat{\rho}]$ in Eq. (2). The rectifications found for the first two versions can be seen in Fig. 8C. These two versions have almost the same rectification as the original chain (seen in Fig. 2B). This suggests that the spin chain diode discussed in the main text can be part of a bigger spin network. We even see that the system described by $\hat{H}_{XX-}$ performs better than the original 6-spin chain. For the rectification in the case of a spin in front of the chain coupled through an XXZ coupling (obeying the Hamiltonian $\hat{H}_{XXZ-}$), we cannot expect the usual parametrization to hold. Therefore, the rectification is plotted as a function of both $\Delta$ and $J_{34}$ in Fig. 8D. We see that the rectification is significantly higher than for the six-spin diode: already at $\Delta \sim 4$ (and $J_{34} \sim -4.5$) we achieve a rectification of $\mathcal{R} > 10^6$. Note that this model has a resonance between a spin excitation on spins 0, 1 and 2 and the Bell state $|\Psi_+\rangle$ between spins 3 and 4 if $J_{34} \sim -\Delta J$, similar to what is studied in Appendix A2, explaining why we get large $\mathcal{R}$ around this line.
## Appendix A12. $\delta = \delta'$ Symmetry
In this section we study the case where a small perturbation is added to the coupling between spin 4 and 5 such that the diode is now described by the Hamiltonian
$$ \hat{H}/J = \hat{X}_{12} + (1+\delta)\hat{X}_{23} + \hat{X}_{24} + J_{34}/J\hat{X}_{34} + \hat{X}_{35} + (1+\delta')\hat{X}_{45} + \hat{X}_{56} + \Delta\hat{Z}_{12}. $$
The dependence of the rectification on $\delta'$ alone was studied in the main text, so here we focus on $\delta = \delta'$. From the results in the main text, we expect the rectification to be lower than for $\delta' = 0$. However, with $\delta = \delta'$ the system exhibits a symmetry that might be preferable for certain implementations. In Fig. 9A the rectification is plotted as a function of $J_{34}$ for different values of $\delta = \delta'$. Although the rectification drops by an order of magnitude, it remains above $10^4$ for $\delta = \delta' = 0.01$.
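As an illustration, the perturbed Hamiltonian above can be assembled numerically from Pauli operators, using the identity $\hat{X}_{mn} = 2(\hat{\sigma}_+^{(m)}\hat{\sigma}_-^{(n)} + \hat{\sigma}_-^{(m)}\hat{\sigma}_+^{(n)}) = \hat{\sigma}_x^{(m)}\hat{\sigma}_x^{(n)} + \hat{\sigma}_y^{(m)}\hat{\sigma}_y^{(n)}$ and $\hat{Z}_{12} = \hat{\sigma}_z^{(1)}\hat{\sigma}_z^{(2)}$. The sketch below is not part of the original analysis; parameter values and function names are ours:

```python
import numpy as np

# Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

N = 6  # six-spin diode, sites labeled 1..6

def pair_op(op, m, n):
    """Tensor product with `op` at sites m and n (1-indexed) and identity elsewhere."""
    out = np.eye(1, dtype=complex)
    for k in range(1, N + 1):
        out = np.kron(out, op if k in (m, n) else I2)
    return out

def X(m, n):
    # X_mn = 2(s+_m s-_n + s-_m s+_n) = sx_m sx_n + sy_m sy_n
    return pair_op(sx, m, n) + pair_op(sy, m, n)

def H(delta, J34, Delta):
    """H/J of the delta = delta' diode (couplings in units of J; values illustrative)."""
    return (X(1, 2) + (1 + delta) * X(2, 3) + X(2, 4) + J34 * X(3, 4)
            + X(3, 5) + (1 + delta) * X(4, 5) + X(5, 6)
            + Delta * pair_op(sz, 1, 2))

Hmat = H(delta=0.01, J34=-4.0, Delta=5.0)   # 64 x 64 Hermitian matrix
```

Inserting such a matrix into the Lindblad equation, Eq. (2), with baths attached at spins 1 and 6 would reproduce the setup behind Fig. 9A.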
Figure 9. $\delta = \delta'$ symmetry, interaction strength and Jordan-Wigner transform. (A) Rectification as a function of $J_{34}$ for different values of $\delta = \delta'$ and $\Delta = 5$. (B) Rectification $\mathcal{R}$ as a function of $\Delta$ for different interaction strengths $\gamma$ between the system and the bath, where $\delta = 0.01$ was used. (C) Rectification $\mathcal{R}$ as a function of $\Delta$ for the Jordan-Wigner transformed system for different values of $\delta$. For plots B and C the parametrization $J_{34} = J_{34}^c(\Delta)$ was used.
## Appendix A13. Interaction Strength Between Diode and Baths
Here we study the effect of changing the interaction strength $\gamma$ between the baths and the system, as defined in the main article. The results are shown in Fig. 9B, where $\mathcal{R}$ is plotted for different interaction strengths. We see that the general behavior of the rectification is preserved; however, the rectification becomes more sensitive to the inner structure of the system. Generally, the rectification increases slightly for weaker interaction strengths. Interference in these types of systems is known to often disappear at stronger interaction. An example is molecular junctions, which can be tuned such that interference effects cause the current in one bias direction to vanish to lowest order in the applied voltage $V$ [57, 58]. To second order in $V$, however, the effect is broken, and rectification can be difficult to achieve [59, 60]. In Fig. 9B we see that large rectification is achieved for couplings $\gamma$ well beyond $1/J$.
## Appendix A14. The Jordan-Wigner Transformation
To perform the Jordan-Wigner transformation of the diode Hamiltonian, we first let $\hat{\sigma}_+^{(n)}$ and $\hat{\sigma}_-^{(n)}$ be the spin raising and lowering operators, respectively. It is then possible to define a mapping between these operators and fermionic operators $\hat{a}_n^\dagger$ and $\hat{a}_n$ through
$$ \hat{\sigma}_+^{(n)} = \hat{a}_n^\dagger e^{i\pi \sum_{k=1}^{n-1} \hat{n}_k} \quad (9) $$
with $\hat{n}_k = \hat{a}_k^\dagger \hat{a}_k$ [61]. These operators describe the creation and annihilation of spinless fermions on the $n$'th site of a six-site linear chain. Now, let us consider the Hamiltonian (1) from above given by
$$ \hat{H}/J = \hat{X}_{12} + (1+\delta)\hat{X}_{23} + \hat{X}_{24} + J_{34}/J\hat{X}_{34} + \hat{X}_{35} + \hat{X}_{45} + \hat{X}_{56} + \Delta\hat{\sigma}_z^{(1)}\hat{\sigma}_z^{(2)} $$
in which $\hat{X}_{mn} = 2(\hat{\sigma}_+^{(m)}\hat{\sigma}_-^{(n)} + \hat{\sigma}_-^{(m)}\hat{\sigma}_+^{(n)})$. Thus, by using the transformation given in Eq. (9), we get
$$ \hat{X}_{m(m+1)} = 2 (\hat{a}_m^\dagger e^{-i\pi \hat{n}_m} \hat{a}_{m+1} + \hat{a}_m e^{i\pi \hat{n}_m} \hat{a}_{m+1}^\dagger), \quad (10) $$
where we have used the anticommutation relations
$$ \begin{aligned} \{\hat{a}_n, \hat{a}_k^\dagger\} &= \delta_{n,k}, \\ \{\hat{a}_n, \hat{a}_k\} &= \{\hat{a}_n^\dagger, \hat{a}_k^\dagger\} = 0, \\ \{e^{\pm i\pi \sum_{k=1}^{m} \hat{n}_k}, \hat{a}_m\} &= \{e^{\pm i\pi \sum_{k=1}^{m} \hat{n}_k}, \hat{a}_m^\dagger\} = 0. \end{aligned} $$
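The fermionic algebra of the mapped operators can be verified numerically by constructing the string operators of Eq. (9) explicitly for a small chain. In the sketch below (our own check, not part of the original work), $e^{i\pi \hat{n}_k}$ is represented as $\mathrm{diag}(1,-1)$ in the (empty, occupied) single-site basis:

```python
import numpy as np

N = 3  # a small chain is enough to check the operator algebra
I2 = np.eye(2)
sp = np.array([[0., 0.], [1., 0.]])   # sigma_+ in the (empty, occupied) basis
phase = np.diag([1., -1.])            # e^{i pi n} = (-1)^n

def site_op(op, j):
    """Embed a single-site operator at site j (0-indexed) into the N-site space."""
    out = np.eye(1)
    for k in range(N):
        out = np.kron(out, op if k == j else I2)
    return out

def a_dag(n):
    """JW creation operator: sigma_+^(n) dressed with the string over sites k < n."""
    out = site_op(sp, n)
    for k in range(n):
        out = out @ site_op(phase, k)
    return out

adag = [a_dag(n) for n in range(N)]
for m in range(N):
    for n in range(N):
        # {a_m, a_n^dag} = delta_mn and {a_m^dag, a_n^dag} = 0
        anti = adag[m].T @ adag[n] + adag[n] @ adag[m].T
        assert np.allclose(anti, np.eye(2**N) if m == n else 0)
        assert np.allclose(adag[m] @ adag[n] + adag[n] @ adag[m], 0)

# the single-site string factor anticommutes with the same site's raising operator
assert np.allclose(phase @ sp + sp @ phase, 0)
```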
By noting that at most one particle can occupy each site, so that $\hat{a}_m e^{i\pi\hat{n}_m} = -\hat{a}_m$ and similarly for $\hat{a}_m^\dagger$, we can simplify the expression to
$$ \hat{X}_{m(m+1)} = 2(\hat{a}_m^\dagger \hat{a}_{m+1} + \hat{a}_{m+1}^\dagger \hat{a}_m). $$
By similar arguments one obtains
$$ \hat{X}_{m(m+2)} = 2(-1)^{\hat{n}_{m+1}} (\hat{a}_m^\dagger \hat{a}_{m+2} + \hat{a}_{m+2}^\dagger \hat{a}_m). $$
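Both transformed coupling operators can be checked numerically on a three-site chain: the nearest-neighbor coupling maps to a plain hopping term, while the next-nearest-neighbor coupling picks up the string factor $(-1)^{\hat{n}_{m+1}}$. A minimal sketch (our own verification, with names and conventions that are ours):

```python
import numpy as np

I2 = np.eye(2)
sp = np.array([[0., 0.], [1., 0.]])   # sigma_+ in the (empty, occupied) basis
sm = sp.T                             # sigma_-
phase = np.diag([1., -1.])            # e^{i pi n} = (-1)^n

def embed(ops, N=3):
    """Place single-site operators (dict: 1-indexed site -> 2x2 op) in an N-site chain."""
    out = np.eye(1)
    for k in range(1, N + 1):
        out = np.kron(out, ops.get(k, I2))
    return out

def a_dag(n):
    """Jordan-Wigner creation operator with its string of e^{i pi n_k}, k < n."""
    d = {k: phase for k in range(1, n)}
    d[n] = sp
    return embed(d)

# nearest-neighbor coupling: X_12 = 2(a1^dag a2 + a2^dag a1)
X12_spin = 2 * (embed({1: sp, 2: sm}) + embed({1: sm, 2: sp}))
assert np.allclose(X12_spin, 2 * (a_dag(1) @ a_dag(2).T + a_dag(2) @ a_dag(1).T))

# next-nearest coupling picks up the string: X_13 = 2(-1)^{n_2}(a1^dag a3 + a3^dag a1)
X13_spin = 2 * (embed({1: sp, 3: sm}) + embed({1: sm, 3: sp}))
string = embed({2: phase})
assert np.allclose(X13_spin, 2 * string @ (a_dag(1) @ a_dag(3).T + a_dag(3) @ a_dag(1).T))
```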
Now, we need to transform the term $\hat{\sigma}_z^{(1)}\hat{\sigma}_z^{(2)}$, where we can use $\hat{\sigma}_z^{(k)} = 2\hat{n}_k - 1$ to write
$$ \hat{\sigma}_z^{(1)} \hat{\sigma}_z^{(2)} = -2(\hat{n}_1 - \hat{n}_2)^2 + 1. $$
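This identity can be checked by expanding both sides and using $\hat{n}_k^2 = \hat{n}_k$ for fermionic number operators:

$$ \begin{aligned} (2\hat{n}_1 - 1)(2\hat{n}_2 - 1) &= 4\hat{n}_1\hat{n}_2 - 2\hat{n}_1 - 2\hat{n}_2 + 1, \\ -2(\hat{n}_1 - \hat{n}_2)^2 + 1 &= -2\hat{n}_1^2 + 4\hat{n}_1\hat{n}_2 - 2\hat{n}_2^2 + 1 = 4\hat{n}_1\hat{n}_2 - 2\hat{n}_1 - 2\hat{n}_2 + 1. \end{aligned} $$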
Discarding constant terms, the transformed Hamiltonian becomes
$$ \hat{H}/J = \hat{K}_{12} + (1+\delta)\hat{K}_{23} + (-1)^{\hat{n}_3}\hat{K}_{24} + J_{34}/J\hat{K}_{34} + (-1)^{\hat{n}_4}\hat{K}_{35} + \hat{K}_{45} + \hat{K}_{56} - 2\Delta(\hat{n}_1 - \hat{n}_2)^2, $$
where $\hat{K}_{mn} = 2(\hat{a}_m^\dagger\hat{a}_n + \hat{a}_n^\dagger\hat{a}_m)$ is the hopping operator. The terms $\hat{K}_{m(m+1)}$ represent a linear non-interacting spinless fermion system. The terms $(-1)^{\hat{n}_{m+1}} \hat{K}_{m(m+2)}$ describe a three-site interaction which comes from the next-nearest-neighbor interactions of the original spin-1/2 chain. The last term, $-2\Delta(\hat{n}_1 - \hat{n}_2)^2$, represents a combination of a chemical potential and a Coulomb repulsion between particles at sites 1 and 2. If this transformed system were coupled to baths as before through the transformed operators for $\hat{\sigma}_\pm^{(n)}$, we would recover the same result as in the main article. However, for the new system it makes more sense to couple to the two heat baths through the operators $\hat{a}$ and $\hat{a}^\dagger$. With the system described by the density matrix $\hat{\rho}$, its evolution is again governed by the Lindblad equation
$$ \frac{\partial \hat{\rho}}{\partial t} = -i[\hat{H}, \hat{\rho}] + \mathcal{D}_1[\hat{\rho}] + \mathcal{D}_6[\hat{\rho}], $$
where $[\bullet, \bullet]$ is the commutator and $\mathcal{D}_n[\hat{\rho}]$ is a dissipative term describing the action of the environment
$$ \mathcal{D}_n[\hat{\rho}] = \gamma \left[ \lambda_n \left( \hat{a}_n^\dagger \hat{\rho} \hat{a}_n - \frac{1}{2} \{\hat{a}_n \hat{a}_n^\dagger, \hat{\rho}\} \right) + (1-\lambda_n) \left( \hat{a}_n \hat{\rho} \hat{a}_n^\dagger - \frac{1}{2} \{\hat{a}_n^\dagger \hat{a}_n, \hat{\rho}\} \right) \right]. $$
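As a sketch of how this dissipator can be implemented numerically (our own illustrative code; the function name and the single-mode check are assumptions, not part of the original analysis):

```python
import numpy as np

def dissipator(rho, a, gamma, lam):
    """D_n[rho] as defined above: gain with weight lam, loss with weight 1 - lam."""
    def lind(Lop, r):
        Ld = Lop.conj().T
        return Lop @ r @ Ld - 0.5 * (Ld @ Lop @ r + r @ Ld @ Lop)
    return gamma * (lam * lind(a.conj().T, rho) + (1 - lam) * lind(a, rho))

# toy check on a single fermionic mode: D_n is trace-free and preserves Hermiticity
a = np.array([[0., 1.], [0., 0.]])                       # annihilation operator
rho = np.array([[0.7, 0.1 + 0.2j], [0.1 - 0.2j, 0.3]])   # a valid density matrix
for lam in (0.0, 0.5):
    D = dissipator(rho, a, gamma=1.0, lam=lam)
    assert abs(np.trace(D)) < 1e-12
    assert np.allclose(D, D.conj().T)
```

Trace preservation of the full Lindblad equation follows since the commutator term is also trace-free.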
Again we let $\lambda_n$ be either 0 or 0.5. The case $\lambda_n = 0$ describes the $n$'th site interacting with an environment of predominantly holes, whereas $\lambda_n = 0.5$ describes the $n$'th site interacting with an environment of equal numbers of holes and fermions. If $\lambda_1 = 0.5$ and $\lambda_6 = 0$, a net number of fermions is produced at site 1, travels through the chain, and is absorbed at site 6. As in the main article, we can define the current $J = \text{tr}(\hat{j}_{12}\hat{\rho}_{ss}) = \text{tr}(\hat{j}_{56}\hat{\rho}_{ss})$ as the expectation value of the operator $\hat{j}_{ij} = 2iJ(\hat{a}_i^\dagger\hat{a}_j - \hat{a}_j^\dagger\hat{a}_i)$, obtained through the continuity equation for $\hat{n}_i$. Here $\hat{\rho}_{ss}$ is again the steady-state density matrix. Performing the inverse Jordan-Wigner transform, we can find the rectification for this system, shown in Fig. 9C. The achieved rectification is exactly the same as the original rectification found in Fig. 2B in the main text.
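To illustrate how the steady state and the current are extracted in practice, the sketch below solves a toy two-site version of the boundary-driven chain by vectorizing the Lindblad generator and taking the kernel of the resulting Liouvillian. The two-site model, rates, and variable names are ours and purely illustrative; the results in Fig. 9C are for the full six-site system:

```python
import numpy as np

# toy two-site fermion chain: site 1 driven (lam_1 = 0.5), site 2 drained (lam_2 = 0)
I2, I4 = np.eye(2), np.eye(4)
a_loc = np.array([[0., 1.], [0., 0.]])        # local annihilation operator
phase = np.diag([1., -1.])                    # Jordan-Wigner string factor (-1)^n
a1 = np.kron(a_loc, I2)
a2 = np.kron(phase, a_loc)

J, gamma = 1.0, 1.0
H = J * 2 * (a1.conj().T @ a2 + a2.conj().T @ a1)   # H = J * K_12

def diss_super(C, g):
    """Column-stacking superoperator of one Lindblad dissipator with jump operator C."""
    CdC = C.conj().T @ C
    return g * (np.kron(C.conj(), C)
                - 0.5 * np.kron(I4, CdC)
                - 0.5 * np.kron(CdC.T, I4))

# Liouvillian: vec(d rho/dt) = L vec(rho), vec(.) = column stacking
L = -1j * (np.kron(I4, H) - np.kron(H.T, I4))
L = L + diss_super(a1.conj().T, gamma * 0.5)   # gain at site 1 (weight lam_1)
L = L + diss_super(a1, gamma * 0.5)            # loss at site 1 (weight 1 - lam_1)
L = L + diss_super(a2, gamma * 1.0)            # loss at site 2 (lam_2 = 0)

# steady state = kernel of L; normalize the trace and hermitize
w, v = np.linalg.eig(L)
rho_ss = v[:, np.argmin(np.abs(w))].reshape(4, 4, order='F')
rho_ss = rho_ss / np.trace(rho_ss)
rho_ss = 0.5 * (rho_ss + rho_ss.conj().T)

# current through the bond: tr(j_12 rho_ss) with j_12 = 2iJ(a1^dag a2 - a2^dag a1)
j12 = 2j * J * (a1.conj().T @ a2 - a2.conj().T @ a1)
current = np.trace(j12 @ rho_ss)
```

Reversing the bias (moving the gain term to site 2) and comparing the two current magnitudes would then give the rectification $\mathcal{R}$ for this toy model.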
[47] V. V. Albert, "Lindbladians with multiple steady states: theory and applications", thesis, Yale University (2017).
[48] R. Alicki and K. Lendi, *Quantum Dynamical Semigroups and Applications*, Lecture Notes in Physics No. 286 (Springer-Verlag, Berlin Heidelberg, 1987).
[49] M. S. Sarandy and D. A. Lidar, Adiabatic approximation in open quantum systems. *Phys. Rev. A* **71**, 012331 (2005).
[50] J. Jing, M. S. Sarandy, D. A. Lidar, D. W. Luo, and L. A. Wu, Eigenstate tracking in open quantum systems. *Phys. Rev. A* **94**, 042131 (2016).
[51] C. K. Hu, A. C. Santos, J. M. Cui, Y. F. Huang, D. O. Soares-Pinto, M. S. Sarandy, C. F. Li, and G. C. Guo, Adiabatic quantum dynamics under decoherence in a controllable trapped-ion setup. *Phys. Rev. A* **99**, 062320 (2019).
[52] M. Žnidarič, Spin Transport in a One-Dimensional Anisotropic Heisenberg Model. *Phys. Rev. Lett.* **106**, 220601 (2011).
[53] J. J. Mendoza-Arenas, S. Al-Assam, S. R. Clark, and D. Jaksch, Heat transport in the XXZ spin chain: from ballistic to diffusive regimes and dephasing enhancement. *Journal of Statistical Mechanics: Theory and Experiment* **2013**, P07007 (2013).
[54] V. V. Albert and L. Jiang, Symmetries and conserved quantities in Lindblad master equations. *Phys. Rev. A* **89**, 022118 (2014).
[55] P. P. Hofer, M. Perarnau-Llobet, L. D. M. Miranda, G. Haack, R. Silva, J. B. Brask, and N. Brunner, Markovian master equations for quantum thermal machines: local versus global approach. *New Journal of Physics* **19**, 123037 (2017).
[56] W. K. Wootters, Entanglement of Formation of an Arbitrary State of Two Qubits. *Phys. Rev. Lett.* **80**, 2245 (1998).
[57] K. G. L. Pedersen, M. Strange, M. Leijnse, P. Hedegård, G. C. Solomon, J. Paaske, Quantum interference in off-resonant transport through single molecules. *Phys. Rev. B* **90**, 125413 (2014).
[58] G. C. Solomon, D. Q. Andrews, R. P. Van Duyne, and M. A. Ratner, When Things Are Not as They Seem: Quantum Interference Turns Molecular Electron Transfer "Rules" Upside Down. *Journal of the American Chemical Society* **130**, 7788 (2008).
[59] A. Batra, J. S. Meisner, P. Darancet, Q. Chen, M. L. Steigerwald, C. Nuckolls, and L. Venkataraman, Molecular diodes enabled by quantum interference. *Faraday Discuss.* **174**, 79 (2014).
[60] M. Iwane, S. Fujii, and M. Kiguchi, Molecular Diode Studies Based on a Highly Sensitive Molecular Measurement Technique. *Sensors (Basel, Switzerland)* **17**, 956 (2017).
[61] M. Azzouz, Interchain-coupling effect on the one-dimensional spin-1/2 antiferromagnetic Heisenberg model. *Phys. Rev. B* **48**, 6136 (1993).
|