Add files using upload-large-folder tool
- samples_new/sample_metadata.jsonl +100 -0
- samples_new/texts_merged/1131204.md +426 -0
- samples_new/texts_merged/1223200.md +300 -0
- samples_new/texts_merged/1259736.md +0 -0
- samples_new/texts_merged/174916.md +469 -0
- samples_new/texts_merged/250922.md +0 -0
- samples_new/texts_merged/2515306.md +523 -0
- samples_new/texts_merged/2590883.md +504 -0
- samples_new/texts_merged/2763593.md +364 -0
- samples_new/texts_merged/276850.md +386 -0
- samples_new/texts_merged/2779026.md +595 -0
- samples_new/texts_merged/2918349.md +208 -0
- samples_new/texts_merged/305525.md +295 -0
- samples_new/texts_merged/3226827.md +194 -0
- samples_new/texts_merged/3251599.md +679 -0
- samples_new/texts_merged/3295535.md +0 -0
- samples_new/texts_merged/3438890.md +226 -0
- samples_new/texts_merged/3450399.md +67 -0
- samples_new/texts_merged/3461249.md +272 -0
- samples_new/texts_merged/3603622.md +245 -0
- samples_new/texts_merged/3723390.md +333 -0
- samples_new/texts_merged/4174805.md +578 -0
- samples_new/texts_merged/4364106.md +764 -0
- samples_new/texts_merged/4579765.md +623 -0
- samples_new/texts_merged/4808858.md +28 -0
- samples_new/texts_merged/4872902.md +230 -0
- samples_new/texts_merged/4994833.md +529 -0
- samples_new/texts_merged/503850.md +169 -0
- samples_new/texts_merged/5396754.md +251 -0
- samples_new/texts_merged/5647681.md +487 -0
- samples_new/texts_merged/565481.md +149 -0
- samples_new/texts_merged/5893423.md +0 -0
- samples_new/texts_merged/6026555.md +180 -0
- samples_new/texts_merged/6080891.md +760 -0
- samples_new/texts_merged/6324184.md +395 -0
- samples_new/texts_merged/6332297.md +449 -0
- samples_new/texts_merged/6376231.md +621 -0
- samples_new/texts_merged/6535016.md +0 -0
- samples_new/texts_merged/6697438.md +416 -0
- samples_new/texts_merged/6724971.md +641 -0
- samples_new/texts_merged/6772016.md +219 -0
- samples_new/texts_merged/6838080.md +0 -0
- samples_new/texts_merged/7100604.md +0 -0
- samples_new/texts_merged/7604074.md +475 -0
- samples_new/texts_merged/7618174.md +712 -0
- samples_new/texts_merged/7774888.md +807 -0
- samples_new/texts_merged/822209.md +738 -0
- samples_new/texts_merged/825446.md +377 -0
- samples_new/texts_merged/879988.md +435 -0
- samples_new/texts_merged/88513.md +161 -0
samples_new/sample_metadata.jsonl
ADDED
@@ -0,0 +1,100 @@
{"doc_id": "7569662", "mean_proba": 0.997152994076411, "num_pages": 6}
{"doc_id": "3327355", "mean_proba": 0.971603728334109, "num_pages": 60}
{"doc_id": "4971236", "mean_proba": 0.9744334369897842, "num_pages": 14}
{"doc_id": "904681", "mean_proba": 0.8895234366257985, "num_pages": 3}
{"doc_id": "1836869", "mean_proba": 0.8915040567517281, "num_pages": 8}
{"doc_id": "3884483", "mean_proba": 0.9886121082873572, "num_pages": 42}
{"doc_id": "7334540", "mean_proba": 0.9987283796072006, "num_pages": 2}
{"doc_id": "199837", "mean_proba": 0.972013454545628, "num_pages": 11}
{"doc_id": "1168240", "mean_proba": 0.997676532715559, "num_pages": 8}
{"doc_id": "6016935", "mean_proba": 0.9086370897643706, "num_pages": 34}
{"doc_id": "4523932", "mean_proba": 0.8853498250246048, "num_pages": 12}
{"doc_id": "1885128", "mean_proba": 0.8906278218093672, "num_pages": 19}
{"doc_id": "393503", "mean_proba": 0.9894506980975468, "num_pages": 6}
{"doc_id": "3193892", "mean_proba": 0.994115799665451, "num_pages": 6}
{"doc_id": "6813453", "mean_proba": 0.9944966547191144, "num_pages": 8}
{"doc_id": "6426180", "mean_proba": 0.9054293377058846, "num_pages": 7}
{"doc_id": "500594", "mean_proba": 0.8765622690320015, "num_pages": 20}
{"doc_id": "3495399", "mean_proba": 0.9484576561621256, "num_pages": 14}
{"doc_id": "6218816", "mean_proba": 0.9807607705394428, "num_pages": 12}
{"doc_id": "4239587", "mean_proba": 0.9929047502004182, "num_pages": 26}
{"doc_id": "7089754", "mean_proba": 0.9980307880676154, "num_pages": 33}
{"doc_id": "230879", "mean_proba": 0.9979761976462144, "num_pages": 13}
{"doc_id": "3148538", "mean_proba": 0.8522171427806219, "num_pages": 6}
{"doc_id": "2865847", "mean_proba": 0.9540428519248962, "num_pages": 2}
{"doc_id": "1772599", "mean_proba": 0.967163262458948, "num_pages": 26}
{"doc_id": "4579765", "mean_proba": 0.9975770957329694, "num_pages": 17}
{"doc_id": "7342615", "mean_proba": 0.9989697635173798, "num_pages": 13}
{"doc_id": "3224121", "mean_proba": 0.9953744477695888, "num_pages": 9}
{"doc_id": "2634535", "mean_proba": 0.8585290673531984, "num_pages": 19}
{"doc_id": "1259736", "mean_proba": 0.9411229211212004, "num_pages": 149}
{"doc_id": "4753802", "mean_proba": 0.9529886152595282, "num_pages": 16}
{"doc_id": "2092097", "mean_proba": 0.9999509155750276, "num_pages": 10}
{"doc_id": "7563909", "mean_proba": 0.9961217548166004, "num_pages": 35}
{"doc_id": "1973835", "mean_proba": 0.9965763115569164, "num_pages": 38}
{"doc_id": "3764397", "mean_proba": 0.9639263451099396, "num_pages": 12}
{"doc_id": "4174805", "mean_proba": 0.961553082746618, "num_pages": 17}
{"doc_id": "565481", "mean_proba": 0.9996201992034912, "num_pages": 4}
{"doc_id": "339686", "mean_proba": 0.99940624833107, "num_pages": 4}
{"doc_id": "3603622", "mean_proba": 0.986870400607586, "num_pages": 12}
{"doc_id": "1223200", "mean_proba": 0.921684911617866, "num_pages": 13}
{"doc_id": "6697438", "mean_proba": 0.8831472884524952, "num_pages": 22}
{"doc_id": "6293016", "mean_proba": 0.9937160038031064, "num_pages": 13}
{"doc_id": "2918349", "mean_proba": 0.9964490483204524, "num_pages": 6}
{"doc_id": "1808935", "mean_proba": 0.8179429352283478, "num_pages": 15}
{"doc_id": "3295535", "mean_proba": 0.9766881407962904, "num_pages": 36}
{"doc_id": "3723390", "mean_proba": 0.9646476159493128, "num_pages": 6}
{"doc_id": "3438890", "mean_proba": 0.9317811191082, "num_pages": 10}
{"doc_id": "3251599", "mean_proba": 0.9982280433177948, "num_pages": 7}
{"doc_id": "276850", "mean_proba": 0.998798830942674, "num_pages": 11}
{"doc_id": "4994833", "mean_proba": 0.998662695288658, "num_pages": 8}
{"doc_id": "6743834", "mean_proba": 0.9462647065520288, "num_pages": 4}
{"doc_id": "825446", "mean_proba": 0.911532184252372, "num_pages": 13}
{"doc_id": "6838080", "mean_proba": 0.9977947799488902, "num_pages": 32}
{"doc_id": "7604074", "mean_proba": 0.993289651779028, "num_pages": 13}
{"doc_id": "5647681", "mean_proba": 0.8188157608875861, "num_pages": 13}
{"doc_id": "6724971", "mean_proba": 0.9973200474466596, "num_pages": 21}
{"doc_id": "822209", "mean_proba": 0.9606187572846046, "num_pages": 26}
{"doc_id": "6470527", "mean_proba": 0.9523119360208512, "num_pages": 10}
{"doc_id": "305525", "mean_proba": 0.9593745129449028, "num_pages": 14}
{"doc_id": "4808858", "mean_proba": 0.9973472654819489, "num_pages": 1}
{"doc_id": "879988", "mean_proba": 0.9999016573031744, "num_pages": 6}
{"doc_id": "5893423", "mean_proba": 0.8851124197244644, "num_pages": 8}
{"doc_id": "2515306", "mean_proba": 0.9936049990355968, "num_pages": 8}
{"doc_id": "3450399", "mean_proba": 0.9164572358131408, "num_pages": 3}
{"doc_id": "503850", "mean_proba": 0.850948595574924, "num_pages": 14}
{"doc_id": "7618174", "mean_proba": 0.968082971572876, "num_pages": 25}
{"doc_id": "6332297", "mean_proba": 0.9995016634464264, "num_pages": 10}
{"doc_id": "1131204", "mean_proba": 0.9986574649810792, "num_pages": 9}
{"doc_id": "7774888", "mean_proba": 0.9932470734302814, "num_pages": 26}
{"doc_id": "3461249", "mean_proba": 0.965369338169694, "num_pages": 16}
{"doc_id": "6772016", "mean_proba": 0.9998548328876496, "num_pages": 6}
{"doc_id": "3147359", "mean_proba": 0.935382536866448, "num_pages": 22}
{"doc_id": "692782", "mean_proba": 0.9983158349990844, "num_pages": 5}
{"doc_id": "213815", "mean_proba": 0.9887832254171371, "num_pages": 6}
{"doc_id": "598288", "mean_proba": 0.9456826879366024, "num_pages": 370}
{"doc_id": "2763593", "mean_proba": 0.9346814155578612, "num_pages": 6}
{"doc_id": "7642017", "mean_proba": 0.9068743400275708, "num_pages": 16}
{"doc_id": "2909063", "mean_proba": 0.9436505238215128, "num_pages": 3}
{"doc_id": "2590883", "mean_proba": 0.9913453095489078, "num_pages": 18}
{"doc_id": "5718759", "mean_proba": 0.9894875958561896, "num_pages": 8}
{"doc_id": "250922", "mean_proba": 0.9653690908406232, "num_pages": 37}
{"doc_id": "6859646", "mean_proba": 0.8590865818893207, "num_pages": 34}
{"doc_id": "4872902", "mean_proba": 0.9977191276848316, "num_pages": 8}
{"doc_id": "6535016", "mean_proba": 0.8693619471675944, "num_pages": 53}
{"doc_id": "6080891", "mean_proba": 0.9882900759577752, "num_pages": 20}
{"doc_id": "1117773", "mean_proba": 0.8673347405024937, "num_pages": 7}
{"doc_id": "4409661", "mean_proba": 0.9995375604465092, "num_pages": 58}
{"doc_id": "450057", "mean_proba": 0.9242128431797028, "num_pages": 34}
{"doc_id": "6026555", "mean_proba": 0.9770366474986076, "num_pages": 4}
{"doc_id": "7081601", "mean_proba": 0.9810841436739322, "num_pages": 27}
{"doc_id": "6376231", "mean_proba": 0.9136003098554082, "num_pages": 36}
{"doc_id": "4364106", "mean_proba": 0.939790777862072, "num_pages": 16}
{"doc_id": "5396754", "mean_proba": 0.9896406365765466, "num_pages": 9}
{"doc_id": "3226827", "mean_proba": 0.991751770178477, "num_pages": 6}
{"doc_id": "2779026", "mean_proba": 0.9987509816884994, "num_pages": 20}
{"doc_id": "174916", "mean_proba": 0.9992432685998768, "num_pages": 13}
{"doc_id": "88513", "mean_proba": 0.9059492826461792, "num_pages": 5}
{"doc_id": "7100604", "mean_proba": 0.9811755003113496, "num_pages": 38}
{"doc_id": "6324184", "mean_proba": 0.9863000105727804, "num_pages": 11}
{"doc_id": "3594993", "mean_proba": 0.974035307765007, "num_pages": 8}
samples_new/texts_merged/1131204.md
ADDED
@@ -0,0 +1,426 @@
---PAGE_BREAK---

# Appendices

## A. Derivations and Additional Methodology

### A.1. Generalized PointConv Trick

The matrix notation becomes very cumbersome for manipulating these higher order n-dimensional arrays, so we will instead use index notation, with Latin indices $i, j, k$ indexing points, Greek indices $\alpha, \beta, \gamma$ indexing feature channels, and $c$ indexing the coordinate dimensions, of which there are $d = 3$ for PointConv and $d = \dim(G) + 2 \dim(Q)$ for LieConv.³ As the objects are not geometric tensors but simply n-dimensional arrays, we make no distinction between upper and lower indices. After expanding into indices, it should be assumed that all values are scalars and that any free indices range over all of their values.

Let $k_{ij}^{\alpha,\beta}$ be the output of the MLP $k_\theta$, which takes $\{a_{ij}^c\}$ as input and acts independently over the locations $i, j$. For PointConv the input is $a_{ij}^c = x_i^c - x_j^c$, and for LieConv the input is $a_{ij}^c = \text{Concat}([\log(v_j^{-1}u_i), q_i, q_j])^c$.

We wish to compute

$$h_i^\alpha = \sum_{j,\beta} k_{ij}^{\alpha,\beta} f_j^\beta. \quad (12)$$

In Wu et al. (2019), it was observed that since $k_{ij}^{\alpha,\beta}$ is the output of an MLP, $k_{ij}^{\alpha,\beta} = \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma$ for some final weight matrix $W$ and penultimate activations $s_{i,j}^\gamma$ ($s_{i,j}^\gamma$ is simply the result of the MLP after the last nonlinearity). With this in mind, we can rewrite (12) as

$$h_i^\alpha = \sum_{j,\beta} \left( \sum_\gamma W_\gamma^{\alpha,\beta} s_{i,j}^\gamma \right) f_j^\beta \quad (13)$$

$$= \sum_{\beta, \gamma} W_\gamma^{\alpha, \beta} \left( \sum_j s_{i,j}^\gamma f_j^\beta \right). \quad (14)$$

In practice, the intermediate number of channels is much smaller than the product of $c_{in}$ and $c_{out}$, $|\gamma| < |\alpha||\beta|$, so this reordering of the computation leads to a massive reduction in both memory and compute. Furthermore, $b_i^{\gamma,\beta} = \sum_j s_{i,j}^\gamma f_j^\beta$ can be implemented with regular matrix multiplication, and so can $h_i^\alpha = \sum_{\beta,\gamma} W_\gamma^{\alpha,\beta} b_i^{\gamma,\beta}$ by flattening $(\beta, \gamma)$ into a single axis $\varepsilon$: $h_i^\alpha = \sum_\varepsilon W^{\alpha,\varepsilon} b_i^\varepsilon$.

The sum over index $j$ can be restricted to a subset $j(i)$ (such as a chosen neighborhood) by computing $f_j^\beta$ at each of the required indices, padding to the size of the maximum subset with zeros, and computing $b_i^{\gamma,\beta} = \sum_j s_{i,j(i)}^\gamma f_{j(i)}^\beta$ using dense matrix multiplication. Masking out the values at indices $i$ and $j$ is also necessary when different examples in a minibatch have different numbers of points and are batched together using zero padding. The generalized PointConv trick can thus be applied in batch mode with a varying number of points per example and per neighborhood.
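As a concrete illustration, the reordering in equations (13)-(14) can be checked with `einsum`. This is a minimal NumPy sketch with made-up shapes, not the paper's (PyTorch) implementation:

```python
import numpy as np

# Made-up sizes: N points, c_in/c_out channels, |gamma| = mid intermediate channels
N, c_in, c_out, mid = 5, 4, 3, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(mid, c_out, c_in))   # W_gamma^{alpha,beta}
s = rng.normal(size=(N, N, mid))          # penultimate activations s_{ij}^gamma
f = rng.normal(size=(N, c_in))            # input features f_j^beta

# Naive path (eq. 13): materialize k_{ij}^{alpha,beta}, memory O(N^2 c_in c_out)
k = np.einsum('gab,ijg->ijab', W, s)
h_naive = np.einsum('ijab,jb->ia', k, f)

# PointConv trick (eq. 14): contract over j first, memory O(N^2 mid + N mid c_in)
b = np.einsum('ijg,jb->igb', s, f)        # b_i^{gamma,beta} = sum_j s_{ij}^gamma f_j^beta
h_trick = np.einsum('gab,igb->ia', W, b)

assert np.allclose(h_naive, h_trick)
```

Both paths compute the same $h_i^\alpha$; only the contraction order, and hence the cost, differs.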
### A.2. Abelian G and Coordinate Transforms

For Abelian groups that cover $\mathcal{X}$ in a single orbit, the computation is very similar to ordinary Euclidean convolution. Defining $a_i = \log(u_i)$, $b_j = \log(v_j)$, and using the fact that $e^{-b_j} e^{a_i} = e^{a_i - b_j}$ means that $\log(v_j^{-1} u_i) = (\log \circ \exp)(a_i - b_j)$. Defining $\tilde{f} = f \circ \exp$ and $\tilde{h} = h \circ \exp$, we get

$$\tilde{h}(a_i) = \frac{1}{n} \sum_{j \in \text{nbhd}(i)} (\tilde{k}_{\theta} \circ \text{proj})(a_i - b_j) \tilde{f}(b_j), \quad (15)$$

where $\text{proj} = \log \circ \exp$ projects to the image of the logarithm map. Apart from a projection and a change to logarithmic coordinates, this is equivalent to Euclidean convolution in a vector space with the dimensionality of the group. When the group is Abelian and $\mathcal{X}$ is a homogeneous space, the dimension of the group is the dimension of the input. In these cases we have a trivial stabilizer group $H$ and a single origin $o$, so we can view $f$ and $h$ as acting on the input $x_i = u_i o$.

This directly generalizes some of the existing coordinate transform methods for achieving equivariance from the literature, such as log polar coordinates for rotation and scaling equivariance (Esteves et al., 2017), and hyperbolic coordinates for squeeze and scaling equivariance.

**Log Polar Coordinates:** Consider the Abelian Lie group of positive scalings and rotations, $G = \mathbb{R}^* \times SO(2)$, acting on $\mathbb{R}^2$. Elements of the group $M \in G$ can be expressed as a $2 \times 2$ matrix

$$M(r, \theta) = \begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}$$

for $r \in \mathbb{R}^+$ and $\theta \in \mathbb{R}$. The matrix logarithm is⁴

$$\log\left(\begin{bmatrix} r \cos(\theta) & -r \sin(\theta) \\ r \sin(\theta) & r \cos(\theta) \end{bmatrix}\right) = \begin{bmatrix} \log(r) & -\theta \bmod 2\pi \\ \theta \bmod 2\pi & \log(r) \end{bmatrix},$$

or more compactly $\log(M(r, \theta)) = \log(r)I + (\theta \bmod 2\pi)J$, which is $[\log(r), \theta \bmod 2\pi]$ in the basis $[I, J]$ for the Lie algebra. It is clear that $\text{proj} = \log \circ \exp$ is simply mod $2\pi$ on the $J$ component.

As $\mathbb{R}^2$ is a homogeneous space of $G$, one can choose the global origin $o = [1, 0] \in \mathbb{R}^2$. A little algebra shows that lifting to the group yields the transformation $u_i = M(r_i, \theta_i)$ for each point $p_i = u_i o$, where $r = \sqrt{x^2 + y^2}$ and $\theta = \operatorname{atan2}(y, x)$ are the polar coordinates of the point $p_i$.

³ $\dim(Q)$ is the dimension of the space into which $Q$, the orbit identifiers, are embedded.

⁴ Here $\theta \bmod 2\pi$ is defined to mean $\theta + 2\pi n$ for the integer $n$ such that the value is in $(-\pi, \pi)$, consistent with the principal matrix logarithm; $(\theta + \pi)\,\%\,2\pi - \pi$ in programming notation.

---PAGE_BREAK---

Observe that the logarithm of $v_j^{-1} u_i$ has a simple expression highlighting the fact that it is invariant to scale and rotational transformations of the elements:

$$
\begin{align*}
\log(v_j^{-1} u_i) &= \log(M(r_j, \theta_j)^{-1} M(r_i, \theta_i)) \\
&= \log(r_i/r_j) I + (\theta_i - \theta_j \bmod 2\pi) J.
\end{align*}
$$

Now writing out our Monte Carlo estimate of the integral:

$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j), \theta_i - \theta_j \bmod 2\pi) f(p_j),$$

which is a discretization of the log polar convolution from Esteves et al. (2017). This can be trivially extended to encompass cylindrical coordinates with the group $T(1) \times \mathbb{R}^* \times SO(2)$.
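The claimed invariance of $\log(v_j^{-1}u_i)$ can be checked numerically. This is a standalone sketch with our own function names, not code from the paper:

```python
import numpy as np

def lift(p):
    # polar coordinates (log r, theta) of p, i.e. u = M(r, theta) with p = u o, o = [1, 0]
    x, y = p
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

def rel_coords(pi, pj):
    # log(v_j^{-1} u_i) in the Lie-algebra basis [I, J]; theta wrapped to (-pi, pi)
    (lri, ti), (lrj, tj) = lift(pi), lift(pj)
    return np.array([lri - lrj, (ti - tj + np.pi) % (2 * np.pi) - np.pi])

def act(r, theta, p):
    # apply M(r, theta) to a point p in R^2
    c, s = np.cos(theta), np.sin(theta)
    return r * np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])

pi_, pj_ = np.array([1.0, 2.0]), np.array([-0.5, 0.3])
a = rel_coords(pi_, pj_)
b = rel_coords(act(1.7, 0.9, pi_), act(1.7, 0.9, pj_))
assert np.allclose(a, b)  # invariant under a joint scale-rotation of both points
```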
**Hyperbolic Coordinates:** For another nontrivial example, consider the group of scalings and squeezes $G = \mathbb{R}^* \times \mathrm{SQ}$ acting on the positive orthant $\mathcal{X} = \{(x, y) \in \mathbb{R}^2 : x > 0, y > 0\}$. Elements of the group can be expressed as the product of a squeeze mapping and a scaling

$$M(r, s) = \begin{bmatrix} s & 0 \\ 0 & 1/s \end{bmatrix} \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} = \begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}$$

for any $r, s \in \mathbb{R}^{+}$. As the group is Abelian, the logarithm splits nicely in terms of the two generators $I$ and $A$:

$$\log\left(\begin{bmatrix} rs & 0 \\ 0 & r/s \end{bmatrix}\right) = (\log r)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + (\log s)\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$

Again $\mathcal{X}$ is a homogeneous space of $G$, and we choose a single origin $o = [1, 1]$. With a little algebra, it is clear that $M(r_i, s_i)\, o = p_i$, where $r = \sqrt{xy}$ and $s = \sqrt{x/y}$ are the hyperbolic coordinates of $p_i$.

Expressed in the basis $B = [I, A]$ for the Lie algebra above, we see that

$$\log(v_j^{-1} u_i) = \log(r_i / r_j) I + \log(s_i / s_j) A,$$

yielding the expression for the convolution

$$h(p_i) = \frac{1}{n} \sum_j \tilde{k}_\theta(\log(r_i/r_j), \log(s_i/s_j)) f(p_j),$$

which is equivariant to squeezes and scalings.
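The same kind of numerical check works for the hyperbolic case. Again a sketch with our own names, under the origin $o = [1, 1]$ from the text:

```python
import numpy as np

def hyperbolic(p):
    # hyperbolic coordinates (r, s) of a point in the positive orthant
    x, y = p
    return np.sqrt(x * y), np.sqrt(x / y)

def rel_coords(pi, pj):
    # log(v_j^{-1} u_i) in the Lie-algebra basis [I, A]
    (ri, si), (rj, sj) = hyperbolic(pi), hyperbolic(pj)
    return np.array([np.log(ri / rj), np.log(si / sj)])

def act(r, s, p):
    # apply M(r, s) = diag(rs, r/s) to a point
    return np.array([r * s * p[0], (r / s) * p[1]])

pi_, pj_ = np.array([2.0, 0.5]), np.array([1.5, 3.0])
ra = rel_coords(pi_, pj_)
rb = rel_coords(act(1.3, 0.6, pi_), act(1.3, 0.6, pj_))
assert np.allclose(ra, rb)  # invariant under a joint squeeze-and-scale
```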
As demonstrated, equivariance to Abelian groups that contain the input space in a single orbit can be achieved with a simple coordinate transform; however, our approach generalizes to groups that are both 'larger' and 'smaller' than the input space, including coordinate transform equivariance as a special case.

### A.3. Sufficient Conditions for Geodesic Distance

In general, the function $d(u, v) = \|\log(v^{-1}u)\|_F$, defined on the domain of $GL(d)$ covered by the exponential map, satisfies the first three conditions of a distance metric but not the triangle inequality, making it a semi-metric:

1. $d(u, v) \geq 0$

2. $d(u, v) = 0 \Leftrightarrow \log(u^{-1}v) = 0 \Leftrightarrow u = v$

3. $d(u, v) = \|\log(v^{-1}u)\| = \|-\log(u^{-1}v)\| = d(v, u).$

However, for certain subgroups of $GL(d)$ with additional structure, the triangle inequality holds and the function is the distance along geodesics connecting group elements $u$ and $v$ according to the metric tensor

$$\langle A, B\rangle_u := \mathrm{Tr}(A^T u^{-T} u^{-1} B), \quad (16)$$

where $u^{-T}$ denotes the inverse transpose.

Specifically, if the subgroup $G$ is in the image of the map $\exp : \mathfrak{g} \to G$ and each infinitesimal generator commutes with its transpose, $[A, A^T] = 0$ for all $A \in \mathfrak{g}$, then $d(u, v) = \|\log(v^{-1}u)\|_F$ is the geodesic distance between $u$ and $v$.

**Geodesic Equation:** Geodesics of (16) satisfying $\nabla_{\dot{\gamma}}\dot{\gamma} = 0$ can equivalently be derived by minimizing the energy functional

$$E[\gamma] = \int_{\gamma} \langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma}\, dt = \int_{0}^{1} \mathrm{Tr}(\dot{\gamma}^{T} \gamma^{-T} \gamma^{-1} \dot{\gamma})\, dt$$

using the calculus of variations. Minimizing curves $\gamma(t)$ connecting elements $u$ and $v$ in $G$ ($\gamma(0) = v$, $\gamma(1) = u$) satisfy

$$0 = \delta E = \delta \int_0^1 \mathrm{Tr}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \dot{\gamma})\, dt.$$

Noting that $\delta(\gamma^{-1}) = -\gamma^{-1}\delta\gamma\,\gamma^{-1}$ and the linearity of the trace,

$$2 \int_0^1 \operatorname{Tr}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \dot{\gamma}) - \operatorname{Tr}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1} \delta \gamma\, \gamma^{-1} \dot{\gamma})\, dt = 0.$$

Using the cyclic property of the trace and integrating by parts, we have that

$$-2 \int_0^1 \operatorname{Tr}\left( \left(\frac{d}{dt}(\dot{\gamma}^T \gamma^{-T} \gamma^{-1}) + \gamma^{-1} \dot{\gamma} (\dot{\gamma}^T \gamma^{-T} \gamma^{-1})^T\right) \delta\gamma \right) dt = 0,$$

where the boundary term $\operatorname{Tr}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}\delta\gamma)\big|_{0}^{1}$ vanishes since $(\delta\gamma)(0) = (\delta\gamma)(1) = 0.$

As $\delta\gamma$ may be chosen to vary arbitrarily along the path, $\gamma$ must satisfy the geodesic equation:

$$\frac{d}{dt}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}) + \gamma^{-1}\dot{\gamma}\dot{\gamma}^T\gamma^{-T}\gamma^{-1} = 0. \quad (17)$$

---PAGE_BREAK---

**Solutions:** When $A = \log(v^{-1}u)$ satisfies $[A, A^T] = 0$, the curve $\gamma(t) = v \exp(t \log(v^{-1}u))$ is a solution to the geodesic equation (17). Clearly $\gamma$ connects $u$ and $v$: $\gamma(0) = v$ and $\gamma(1) = u$. Plugging $\dot{\gamma} = \gamma A$ into the left hand side of equation (17), we have

$$
\begin{align*}
\frac{d}{dt}(\dot{\gamma}^T\gamma^{-T}\gamma^{-1}) + \gamma^{-1}\dot{\gamma}\dot{\gamma}^T\gamma^{-T}\gamma^{-1} &= \frac{d}{dt}(A^T \gamma^{-1}) + AA^T \gamma^{-1} \\
&= -A^T \gamma^{-1} \dot{\gamma} \gamma^{-1} + AA^T \gamma^{-1} \\
&= [A, A^T]\gamma^{-1} = 0.
\end{align*}
$$

**Length of $\gamma$:** The length of the curve $\gamma$ connecting $u$ and $v$ is $\|\log(v^{-1}u)\|_F$:

$$
\begin{align*}
L[\gamma] &= \int_{\gamma} \sqrt{\langle \dot{\gamma}, \dot{\gamma} \rangle_{\gamma}}\, dt = \int_{0}^{1} \sqrt{\operatorname{Tr}(\dot{\gamma}^{T}\gamma^{-T}\gamma^{-1}\dot{\gamma})}\, dt \\
&= \int_{0}^{1} \sqrt{\operatorname{Tr}(A^{T}A)}\, dt = \|A\|_{F} = \|\log(v^{-1}u)\|_{F}.
\end{align*}
$$

Of the Lie groups that we consider in this paper, all of which have a single connected component, the groups $G = T(d)$, $SO(d)$, $\mathbb{R}^* \times SO(d)$, and $\mathbb{R}^* \times \mathrm{SQ}$ satisfy the property $[\mathfrak{g}, \mathfrak{g}^T] = 0$; however, the $SE(d)$ groups do not.
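For $G = \mathbb{R}^* \times SO(2)$, which satisfies $[A, A^T] = 0$, the closed-form principal logarithm from Section A.2 makes the symmetry and left invariance of $d(u, v) = \|\log(v^{-1}u)\|_F$ easy to verify numerically. A sketch with our own function names (for generic matrices, a general matrix logarithm such as `scipy.linalg.logm` would be needed):

```python
import numpy as np

def M(r, theta):
    # rotation-scaling matrix in G = R* x SO(2)
    c, s = np.cos(theta), np.sin(theta)
    return r * np.array([[c, -s], [s, c]])

def log_G(u):
    # principal logarithm of u = M(r, theta): log(r) I + (theta mod 2pi) J
    r = np.sqrt(np.linalg.det(u))
    theta = np.arctan2(u[1, 0], u[0, 0])
    return np.array([[np.log(r), -theta], [theta, np.log(r)]])

def d(u, v):
    return np.linalg.norm(log_G(np.linalg.inv(v) @ u), 'fro')

u, v, w = M(2.0, 0.3), M(0.7, -1.1), M(1.4, 2.0)
assert np.isclose(d(u, v), d(v, u))          # symmetry
assert np.isclose(d(u, v), d(w @ u, w @ v))  # left invariance
```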
### A.4. Equivariant Subsampling

Even if all distances and neighborhoods are precomputed, the cost of computing equation (6) for $i = 1, \ldots, N$ is still quadratic, $O(nN) = O(N^2)$, because the number of points $n$ in each neighborhood grows linearly with $N$ as $f$ is more densely evaluated. So that our method can scale to handle a large number of points, we show two ways to equivariantly subsample the group elements, which we can use both for the locations at which we evaluate the convolution and for the locations that we use for the Monte Carlo estimator. Since the elements are spaced irregularly, we cannot readily use the coset pooling method described in Cohen and Welling (2016a); instead we can perform:

**Random Selection:** Randomly selecting a subset of $p$ points from the original $n$ preserves the original sampling distribution, so it can be used directly.

**Farthest Point Sampling:** Given a set of group elements $S = \{u_i\}_{i=1}^k \subset G$, we can select the subset $S_p^*$ of size $p$ that maximizes the minimum distance between any two elements in that subset,

$$ \mathrm{Sub}_p(S) := S_p^* = \arg \max_{S_p \subset S} \min_{u,v \in S_p: u \neq v} d(u,v), \quad (18) $$

i.e. farthest point sampling on the group. Acting on a set of elements, $\mathrm{Sub}_p : S \mapsto S_p^*$, farthest point subsampling is equivariant: $\mathrm{Sub}_p(wS) = w\,\mathrm{Sub}_p(S)$ for any $w \in G$. That is, applying a group element to each of the elements does not change the chosen indices in the subsampled set, because the distances are left invariant: $d(u_i, u_j) = d(wu_i, wu_j)$.

Now we can use either of these methods for $\mathrm{Sub}_p(\cdot)$ to equivariantly subsample the quadrature points in each neighborhood used to estimate the integral to a fixed number $p$,

$$ h_i = \frac{1}{p} \sum_{j \in \mathrm{Sub}_p(\mathrm{nbhd}(u_i))} k_\theta(v_j^{-1} u_i) f_j. \quad (19) $$

Doing so reduces the cost of estimating the convolution from $O(N^2)$ to $O(pN)$, ignoring the cost of computing $\mathrm{Sub}_p$ and $\{\mathrm{nbhd}(u_i)\}_{i=1}^N$.
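The exact argmax in equation (18) is combinatorial; a greedy farthest-point pass is the usual practical stand-in (this is our sketch, not the paper's code). Because the chosen indices depend only on pairwise distances, a left-invariant $d$ makes the greedy selection equivariant as well:

```python
import numpy as np

def greedy_fps(dist, p, start=0):
    # dist: (n, n) pairwise distances; greedily add the point farthest from the chosen set
    chosen = [start]
    for _ in range(p - 1):
        min_d = np.min(dist[:, chosen], axis=1)
        min_d[chosen] = -np.inf
        chosen.append(int(np.argmax(min_d)))
    return chosen

# On the translation group T(1), d(u, v) = |u - v| is left invariant, so shifting
# every element leaves the selected indices unchanged.
x = np.array([0.0, 0.1, 0.9, 2.0, 2.05, 5.0])
D = np.abs(x[:, None] - x[None, :])
Ds = np.abs((x + 3.0)[:, None] - (x + 3.0)[None, :])
assert greedy_fps(D, 3) == greedy_fps(Ds, 3)
```

(The greedy pass is deterministic given a start index; in practice the start is often chosen at random.)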
| 212 |
+
|
| 213 |
+
## A.5. Review and Implications of Noether's Theorem
|
| 214 |
+
|
| 215 |
+
In the Hamiltonian setting, Noether's theorem relates the continuous symmetries of the Hamiltonian of a system with conserved quantities, and has been deeply impactful in the understanding of classical physics. We give a review of Noether's theorem, loosely following Butterfield (2006).
|
| 216 |
+
|
| 217 |
+
### More on Hamiltonian Dynamics
|
| 218 |
+
|
| 219 |
+
As introduced earlier, the Hamiltonian is a function acting on the state $H(z) = H(q,p)$, (we will ignore time dependence for now) can be viewed more formally as a function on the cotangent bundle $(q,p) = z \in M = T^*C$ where $C$ is the coordinate configuration space, and this is the setting for Hamiltonian dynamics.
|
| 220 |
+
|
| 221 |
+
In general, on a manifold $\mathcal{M}$, a vector field $X$ can be viewed as an assignment of a directional derivative along $\mathcal{M}$ for each point $z \in \mathcal{M}$. It can be expanded in a basis using coordinate charts $X = \sum_{\alpha} X^{\alpha} \partial_{x^{\alpha}}$, where $\partial_{\alpha} = \frac{\partial}{\partial z^{\alpha}}$ and acts on functions $f$ by $X(f) = \sum_{\alpha} X^{\alpha} \partial_{x^{\alpha}} f$. In the chart, each of the components $X^{\alpha}$ are functions of $z$.
In Hamiltonian mechanics, for two functions on $\mathcal{M}$, there is the Poisson bracket⁵, which can be written in terms of the canonical coordinates $q_i, p_i$:
$$ \{f,g\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial g}{\partial p_i}. $$
The Poisson bracket can be used to associate each function $f$ to a vector field
$$ X_f = \{f, \cdot\} = \sum_i \frac{\partial f}{\partial p_i} \frac{\partial}{\partial q_i} - \frac{\partial f}{\partial q_i} \frac{\partial}{\partial p_i}, $$
which specifies, by its action on another function $g$, the directional derivative of $g$ along $X_f$: $X_f(g) = \{f,g\}$. Vector fields that can be written in this way are known as Hamiltonian vector fields, and the Hamiltonian dynamics of the
⁵Here we take the definition of the Poisson bracket to be negative of the usual definition in order to streamline notation.
---PAGE_BREAK---
system is a special example $X_H = \{H, \cdot\}$. This vector field in canonical coordinates $z = (p, q)$ is the vector field $X_H = F(z) = J\nabla_z H$ (i.e. the symplectic gradient, as discussed in Section 6.1). Making this connection clear, a given scalar quantity evolves through time as $\dot{f} = \{H, f\}$. But this bracket can be used to evaluate the rate of change of a scalar quantity along the flows of vector fields other than the dynamics, such as the flows of continuous symmetries.
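The symplectic-gradient dynamics can be made concrete numerically. Below is a minimal sketch, assuming for illustration a 1D harmonic oscillator with the convention $z = (q, p)$, integrating $\dot{z} = J\nabla_z H$ with RK4 and observing that $H$ is preserved along the flow:

```python
import numpy as np

def hamiltonian_dynamics(grad_H, z):
    # X_H = J grad H: for z = (q, p), dq/dt = dH/dp and dp/dt = -dH/dq
    n = len(z) // 2
    g = grad_H(z)
    return np.concatenate([g[n:], -g[:n]])

# harmonic oscillator H(q, p) = (q^2 + p^2)/2, so grad H = z
grad_H = lambda z: z
z = np.array([1.0, 0.0])            # initial state, H(z) = 0.5
dt = 1e-3
for _ in range(int(2 * np.pi / dt)):  # integrate roughly one period with RK4
    k1 = hamiltonian_dynamics(grad_H, z)
    k2 = hamiltonian_dynamics(grad_H, z + dt * k1 / 2)
    k3 = hamiltonian_dynamics(grad_H, z + dt * k2 / 2)
    k4 = hamiltonian_dynamics(grad_H, z + dt * k3)
    z = z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
energy = 0.5 * float(z @ z)         # stays at 0.5 up to integrator error
```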
## Noether's Theorem
The flow $\phi_{\lambda}^X$ by $\lambda \in \mathbb{R}$ of a vector field $X$ is its set of integral curves, the unique solution to the system of ODEs $\dot{z}^\alpha = X^\alpha$ with initial condition $z$, evaluated at parameter value $\lambda$, or more abstractly the iterated application of $X$: $\phi_{\lambda}^X = \exp(\lambda X)$. Continuous symmetry transformations are the transformations that can be written as the flow $\phi_{\lambda}^X$ of a vector field. The directional derivative characterizes how a function such as the Hamiltonian changes along the flow of $X$, and is a special case of the Lie Derivative $\mathcal{L}$:
$$ \mathcal{L}_X H = \frac{d}{d\lambda} (H \circ \phi_\lambda^X)|_{\lambda=0} = X(H) $$
A scalar function is invariant under the flow of a vector field if and only if the Lie Derivative is zero:
$$ H(\phi_{\lambda}^{X}(z)) = H(z) \Leftrightarrow \mathcal{L}_{X}H = 0. $$
For all transformations that respect the Poisson Bracket⁶, which we add as a requirement for a symmetry, the vector field $X$ is (locally) Hamiltonian and there exists a function $f$ such that $X = X_f = \{f, \cdot\}$. If $M$ is a contractible domain such as $\mathbb{R}^{2n}$, then $f$ is globally defined. For every continuous symmetry $\phi_{\lambda}^{X_f}$,
$$ \mathcal{L}_{X_f} H = X_f(H) = \{f, H\} = -\{H, f\} = -X_H(f), $$
by the antisymmetry of the Poisson bracket. So if $\phi_{\lambda}^X$ is a symmetry of $H$, then $X = X_f$ for some function $f$, and $H(\phi_{\lambda}^{X_f}(z)) = H(z)$ implies
$$ \mathcal{L}_{X_f} H = 0 \Leftrightarrow \mathcal{L}_{X_H} f = 0 \Leftrightarrow f(\phi_{\tau}^{X_H}(z)) = f(z) $$
or in other words $f(z(t+\tau)) = f(z(t))$ and $f$ is a conserved quantity of the dynamics.
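This correspondence is easy to check numerically: for a translation-invariant Hamiltonian, the dynamics conserve total linear momentum. A minimal sketch, assuming for illustration two particles on a line with unit masses and unit spring constant (with explicit Euler steps, total momentum is conserved exactly because the internal forces cancel pairwise):

```python
import numpy as np

# two particles on a line joined by a spring:
# H = p1^2/2 + p2^2/2 + (q1 - q2)^2/2, invariant under q_i -> q_i + lambda
q = np.array([0.0, 1.5])
p = np.array([0.3, -1.0])
P0 = p.sum()                                       # conserved quantity f = p1 + p2
dt = 1e-2
for _ in range(1000):
    force = -np.array([q[0] - q[1], q[1] - q[0]])  # -dV/dq; components sum to zero
    q, p = q + dt * p, p + dt * force
# p.sum() stays equal to P0 up to floating point roundoff
```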
⁶More precisely, the Poisson Bracket can be formulated in a coordinate-free manner in terms of a symplectic two-form $\omega$: $\{f,g\} = \omega(X_f, X_g)$. In canonical coordinates $\omega = \sum_i dp_i \wedge dq^i$, and in this coordinate basis $\omega$ is represented by the matrix $J$ from earlier. The dynamics $X_H$ are determined by $dH = \omega(X_H, \cdot) = \iota_{X_H}\omega$. Transformations which respect the Poisson Bracket are symplectic, $\mathcal{L}_{X}\omega = 0$. With Cartan's magic formula, this implies that $d(\iota_{X}\omega) = 0$. Because the form $\iota_{X}\omega$ is closed, Poincaré's Lemma implies that locally $\iota_{X}\omega = df$ for some function $f$, and hence $X = X_f$ is (locally) a Hamiltonian vector field. For more details see Butterfield (2006).
This implication goes both ways: if $f$ is conserved then $\phi_{\lambda}^{X_f}$ is necessarily a symmetry of the Hamiltonian, and if $\phi_{\lambda}^{X_f}$ is a symmetry of the Hamiltonian then $f$ is conserved.
## Hamiltonian vs Dynamical Symmetries

So far we have been discussing Hamiltonian symmetries, invariances of the Hamiltonian. But in the study of dynamical systems there is a related concept of dynamical symmetries: symmetries of the equations of motion. This notion is also captured by the Lie Derivative, but now between vector fields. A dynamical system $\dot{z} = F(z)$ has a continuous dynamical symmetry $\phi_{\lambda}^X$ if the flow along the dynamical system commutes with the symmetry:
$$ \phi_{\lambda}^{X}(\phi_{t}^{F}(z)) = \phi_{t}^{F}(\phi_{\lambda}^{X}(z)). \quad (20) $$
That is, applying the symmetry transformation to the state and then flowing along the dynamical system is equivalent to flowing first and then applying the symmetry transformation. Equation (20) is satisfied if and only if the Lie Derivative is zero:
$$ \mathcal{L}_X F = [X, F] = 0, $$
where $[\cdot,\cdot]$ is the Lie bracket on vector fields.⁷
For Hamiltonian systems, every Hamiltonian symmetry is also a dynamical symmetry. In fact, it is not hard to show that the Lie and Poisson brackets are related,
$$ [X_f, X_g] = X_{\{f,g\}} $$
and this directly shows the implication. If $X_f$ is a Hamiltonian symmetry, $\{f, H\} = 0$, and then
$$ [X_f, F] = [X_f, X_H] = X_{\{f,H\}} = 0. $$
However, the converse is not true: dynamical symmetries of a Hamiltonian system are not necessarily Hamiltonian symmetries and thus might not correspond to conserved quantities. Furthermore, even if the system has a dynamical symmetry which is the flow $\phi_{\lambda}^X$ along a Hamiltonian vector field $X = X_f = \{f, \cdot\}$, but the dynamics $F$ are not Hamiltonian, then the dynamics will not conserve $f$ in general. Both the symmetry and the dynamics must be Hamiltonian for the conservation laws to hold.
This fact is demonstrated by Figure 9, where the dynamics of the (non-Hamiltonian) equivariant LieConv-T(2) model has a T(2) dynamical symmetry with the generators $\partial_x, \partial_y$ which are Hamiltonian vector fields for $f = p_x, f = p_y$, and yet linear momentum is not conserved by the model.
⁷The Lie bracket on vector fields produces another vector field and is defined by how it acts on functions: for any smooth function $g$, $[X, F](g) = X(F(g)) - F(X(g))$.
---PAGE_BREAK---
Figure 9. Equivariance alone is not sufficient, for conservation we need both to model $\mathcal{H}$ and incorporate the given symmetry. For comparison, LieConv-T(2) is T(2)-equivariant but models $F$, and HLieConv-Trivial models $\mathcal{H}$ but is not T(2)-equivariant. Only HLieConv-T(2) conserves linear momentum.
## Conserving Linear and Angular Momentum
Consider a system of $N$ interacting particles described in Euclidean coordinates with position and momentum $q_{im}, p_{im}$, such as the multi-body spring problem. Here the first index $i = 1, 2, 3$ indexes the spatial coordinates and the second $m = 1, 2, ..., N$ indexes the particles. We will use the bolded notation $\mathbf{q}_m, \mathbf{p}_m$ to suppress the spatial indices, but still indexing the particles $m$ as in Section 6.1.
The total linear momentum along a given direction **n** is
$$ \mathbf{n} \cdot \mathbf{P} = \sum_{i,m} n_i p_{im} = \mathbf{n} \cdot (\sum_m \mathbf{p}_m). $$
Expanding the Poisson bracket, the corresponding Hamiltonian vector field is
$$ X_{nP} = \{\mathbf{n} \cdot \mathbf{P}, \cdot\} = \sum_{i,m} n_i \frac{\partial}{\partial q_{im}} = \mathbf{n} \cdot \sum_{m} \frac{\partial}{\partial \mathbf{q}_{m}} $$
which has the flow $\phi_{\lambda}^{X_{nP}}(\mathbf{q}_m, \mathbf{p}_m) = (\mathbf{q}_m + \lambda\mathbf{n}, \mathbf{p}_m)$, a translation of all particles by $\lambda\mathbf{n}$. So our model of the Hamiltonian conserves linear momentum if and only if it is invariant to a global translation of all particles (e.g. T(2) invariance for a 2D spring system).
The total angular momentum along a given axis **n** is
$$ \mathbf{n} \cdot \mathbf{L} = \mathbf{n} \cdot \sum_m \mathbf{q}_m \times \mathbf{p}_m = \sum_{i,j,k,m} \epsilon_{ijk} n_i q_{jm} p_{km} = \sum_m \mathbf{p}_m^T A \mathbf{q}_m $$
where $\epsilon_{ijk}$ is the Levi-Civita symbol and we have defined the antisymmetric matrix $A$ by $A_{kj} = \sum_i \epsilon_{ijk} n_i$.
$$ X_{nL} = \{\mathbf{n} \cdot \mathbf{L}, \cdot\} = \sum_{j,k,m} A_{kj} q_{jm} \frac{\partial}{\partial q_{km}} - A_{jk} p_{jm} \frac{\partial}{\partial p_{km}} $$
$$ X_{nL} = \sum_m (\mathbf{q}_m^T A^T \frac{\partial}{\partial \mathbf{q}_m} + \mathbf{p}_m^T A^T \frac{\partial}{\partial \mathbf{p}_m}) $$
where the second line follows from the antisymmetry of $A$. We can find the flow of $X_{nL}$ from the differential equations $\dot{\mathbf{q}}_m = A\mathbf{q}_m$, $\dot{\mathbf{p}}_m = A\mathbf{p}_m$, which have the solution
$$ \phi_{\theta}^{X_{nL}}(\mathbf{q}_m, \mathbf{p}_m) = (e^{\theta A}\mathbf{q}_m, e^{\theta A}\mathbf{p}_m) = (R_{\theta}\mathbf{q}_m, R_{\theta}\mathbf{p}_m), $$
where $R_\theta$ is a rotation about the axis **n** by the angle $\theta$, which follows from the Rodrigues rotation formula. Therefore, the flow of the Hamiltonian vector field of angular momentum along a given axis is a global rotation of the position and momentum of each particle about that axis. Again, the dynamics of a neural network modeling a Hamiltonian conserve total angular momentum if and only if the network is invariant to simultaneous rotation of all particle positions and momenta.
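The identity $e^{\theta A} = R_\theta$ can be verified directly. A small sketch, assuming a unit axis $\mathbf{n}$ and using the Rodrigues formula for the matrix exponential of a skew-symmetric matrix (function names are ours):

```python
import numpy as np

def skew(n):
    # A_{kj} = sum_i eps_{ijk} n_i, i.e. A v = n x v
    return np.array([[0.,   -n[2],  n[1]],
                     [n[2],  0.,   -n[0]],
                     [-n[1], n[0],  0.]])

def expm_rodrigues(A, theta):
    # Rodrigues: exp(theta A) = I + sin(theta) A + (1 - cos(theta)) A^2,
    # valid for A = skew(n) with |n| = 1
    return np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)

n = np.array([0., 0., 1.])
R = expm_rodrigues(skew(n), np.pi / 2)
# R is the rotation about the z-axis by 90 degrees, so it maps e_x to e_y
```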
# B. Additional Experiments

## B.1. Equivariance Demo
While (7) shows that the convolution estimator is equivariant, we also conducted the ablation study below, examining the equivariance of the network empirically. We trained LieConv (Trivial, T(3), SO(3), SE(3)) models on a limited subset of 20k training examples (out of 100k) of the HOMO task on QM9 without any data augmentation. We then evaluate these models on a series of modified test sets where each example has been randomly transformed by an element of the given group (the test translations in T(3) and SE(3) are sampled from a normal with stddev 0.5). In Table 4 the rows are the models configured with a given group equivariance, and the columns N/G denote no augmentation at training time and transformations from G applied to the test set.
<table><thead><tr><th>Model</th><th>N/N</th><th>N/T(3)</th><th>N/SO(3)</th><th>N/SE(3)</th></tr></thead><tbody><tr><td>Trivial</td><td><b>173</b></td><td>183</td><td>239</td><td>243</td></tr><tr><td>T(3)</td><td><b>113</b></td><td><b>113</b></td><td>133</td><td>133</td></tr><tr><td>SO(3)</td><td><b>159</b></td><td>238</td><td><b>160</b></td><td>240</td></tr><tr><td>SE(3)</td><td><b>62</b></td><td><b>62</b></td><td><b>63</b></td><td><b>62</b></td></tr></tbody></table>
Table 4. Test MAE (in meV) on HOMO test set randomly transformed by elements of $\mathcal{G}$. Despite no data augmentation (N), $\mathcal{G}$ equivariant models perform as well on $\mathcal{G}$ transformed test data.
Notably, the performance of the LieConv-G models does not degrade when random G transformations are applied to the test set. Also, in this low data regime, the added equivariances are especially important.
## B.2. RotMNIST Comparison
While the RotMNIST dataset consists of 12k rotated MNIST digits, it is standard to separate out 10k to be used for training and 2k for validation. However, in Ti-Pooling and E(2)-Steerable CNNs, it appears that after hyperparameters were tuned the validation set is folded back into the training set
---PAGE_BREAK---
to be used as additional training data, a common approach on other datasets. Although in Table 1 we only use 10k training points, in the table below we report the performance with and without augmentation trained on the full 12k examples.
<table><thead><tr><th>Aug</th><th>Trivial</th><th>T<sub>y</sub></th><th>T(2)</th><th>SO(2)</th><th>SO(2)×R<sup>*</sup></th><th>SE(2)</th></tr></thead><tbody><tr><td>SO(2)</td><td>1.44</td><td>1.35</td><td>1.32</td><td>1.27</td><td>1.13</td><td>1.13</td></tr><tr><td>None</td><td>1.60</td><td>2.64</td><td>2.34</td><td>1.26</td><td>1.25</td><td>1.15</td></tr></tbody></table>
Table 5. Classification Error (%) on the RotMNIST dataset for LieConv with different group equivariances, trained with and without SO(2) data augmentation.
## C. Implementation Details

### C.1. Practical Considerations
While the high-level summary of the lifting procedure (Algorithm 1) and the LieConv layer (Algorithm 2) provides a useful conceptual understanding of our method, there are some additional details that are important for a practical implementation.
1. According to Algorithm 2, $a_{ij}$ is computed in every LieConv layer, which is both highly redundant and costly. In practice, we precompute $a_{ij}$ once after lifting and feed it through the network with layers operating on the state $(\{a_{ij}\}_{i,j=1}^{N,N}, \{f_i\}_{i=1}^N)$ instead of $\{(u_i, q_i, f_i)\}_{i=1}^N$. Doing so requires fixing the group elements that will be used at each layer for a given forward pass.
2. In practice, only $p$ elements of $\mathrm{nbhd}(u_i)$ are sampled (randomly) for computing the Monte Carlo estimator, in order to limit the computational burden (see Appendix A.4).
3. We use the analytic forms for the exponential and logarithm maps of the various groups as described in Eade (2014).
### C.2. Sampling from the Haar Measure for Various Groups

When the lifting map from $\mathcal{X} \to G \times \mathcal{X}/G$ is multi-valued, we need to sample elements $u \in G$ that project down to $x$ ($uo = x$) in a way consistent with the Haar measure $\mu(\cdot)$. In other words, since the restriction $\mu(\cdot)|_{\text{nbhd}}$ is a distribution, we must sample from the conditional distribution $u \sim \mu(u|uo = x)|_{\text{nbhd}}$. In general this can be done by parametrizing the distribution $\mu$ as a collection of random variables that includes $x$, and then sampling the remaining variables.
In this paper, the groups we use for which the lifting map is multi-valued are SE(2), SO(3), and SE(3). The process is especially straightforward for SE(2) and SE(3), as these groups can be expressed as a semi-direct product of two groups $G = H \ltimes N$,
$$d\mu_G(h, n) = \delta(h)d\mu_H(h)d\mu_N(n), \quad (21)$$
where $\delta(h) = \frac{d\mu_N(n)}{d\mu_N(hnh^{-1})}$ (Willson, 2009). For $G = \text{SE}(d) = \text{SO}(d) \ltimes \text{T}(d)$, $\delta(h) = 1$ since the Lebesgue measure $d\mu_{\text{T}(d)}(x) = d\lambda(x) = dx$ is invariant to rotations. So simply $d\mu_{\text{SE}(d)}(R, x) = d\mu_{\text{SO}(d)}(R)dx$.
So lifts of a point $x \in \mathcal{X}$ to $\text{SE}(d)$ consistent with $\mu$ are just $T_x R$, the product of the translation by $x$ and a randomly sampled rotation $R \sim \mu_{\text{SO}(d)}(\cdot)$. There are multiple easy methods to sample uniformly from $\text{SO}(d)$ given in Kuffner (2004); for example, sampling uniformly from $\text{SO}(3)$ can be done by sampling a unit quaternion uniformly from the 3-sphere and identifying it with the corresponding rotation matrix.
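The quaternion construction can be sketched as follows; the quaternion-to-matrix formula is the standard one, and the function names are ours:

```python
import numpy as np

def quat_to_mat(q):
    # standard unit-quaternion (w, x, y, z) -> rotation matrix conversion
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def sample_so3(rng):
    q = rng.normal(size=4)      # isotropic Gaussian on R^4 ...
    q /= np.linalg.norm(q)      # ... normalized = uniform on the 3-sphere
    return quat_to_mat(q)

rng = np.random.default_rng(0)
R = sample_so3(rng)             # a Haar-uniform rotation matrix
```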
### C.3. Model Architecture
We employ a ResNet-style architecture (He et al., 2016), using bottleneck blocks (Zagoruyko and Komodakis, 2016), and replacing ReLUs with Swish activations (Ramachandran et al., 2017). The convolutional kernel $g_\theta$ internal to each LieConv layer is parametrized by a 3-layer MLP with 32 hidden units, batch norm, and Swish nonlinearities. Not only do the Swish activations improve performance slightly, but unlike ReLUs they are twice differentiable which is a requirement for backpropagating through the Hamiltonian dynamics. The stack of elementwise linear and bottleneck blocks is followed by a global pooling layer that computes the average over all elements, but not over channels. Like for regular image bottleneck blocks, the channels for the convolutional layer in the middle are smaller by a factor of 4 for increased parameter and computational efficiency.
**Downsampling:** As is traditional for image data, we increase the number of channels and the receptive field at every downsampling step. The downsampling is performed with the farthest point downsampling method described in Appendix A.4. For a downsampling by a factor of $s < 1$, the radius of the neighborhood is scaled up by $s^{-1/2}$ and the channels are scaled up by $s^{-1/2}$. When an image is downsampled with $s = (1/2)^2$ that is typical in a CNN, this results in 2x more channels and a radius or dilation of 2x. In the bottleneck block, the downsampling operation is fused with the LieConv layer, so that the convolution is only evaluated at the downsampled query locations. We perform downsampling only on the image datasets, which have more points.
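For concreteness, the per-block schedule implied by this scaling rule can be computed as follows; the initial channel count and radius here are illustrative placeholders, not values from the paper:

```python
# split a total downsampling factor S geometrically over B blocks;
# radius and channel count each grow by s^{-1/2} per block
S, B = 1 / 10, 6
k, r = 128.0, 2.0               # illustrative initial channels and radius
s = S ** (1 / B)                # per-block downsampling factor
schedule = []
for _ in range(B):
    k *= s ** -0.5
    r *= s ** -0.5
    schedule.append((round(k), round(r, 3)))
# after all B blocks: k = 128 * S^{-1/2} and r = 2 * S^{-1/2}
```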
---PAGE_BREAK---

**BatchNorm:** In order to handle the varied number of group elements per example and within each neighborhood, we use a modified batchnorm that computes statistics only over elements from a given mask. The batch norm is computed per channel, with statistics averaged over the batch size and each of the valid locations.
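A minimal sketch of such a masked batchnorm (forward pass only, without the learned scale/shift or running statistics of a full implementation):

```python
import numpy as np

def masked_batchnorm(x, mask, eps=1e-5):
    """x: (batch, n, c) features; mask: (batch, n) boolean of valid elements.
    Statistics are computed per channel over valid locations only."""
    m = mask[..., None]                           # (batch, n, 1)
    count = mask.sum()                            # number of valid elements
    mean = (x * m).sum(axis=(0, 1)) / count       # per-channel mean, shape (c,)
    var = (((x - mean) * m) ** 2).sum(axis=(0, 1)) / count
    # normalize valid entries; padded entries pass through unchanged
    return np.where(m, (x - mean) / np.sqrt(var + eps), x)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5, 3))
mask = np.array([[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]], dtype=bool)
y = masked_batchnorm(x, mask)
```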
### C.4. Details for Hamiltonian Models

**Model Symmetries:**
As the position vectors are mean centered in the model forward pass, $q_i' = q_i - \bar{q}$, HOGN and HLieConv-SO(2)* have additional T(2) invariance, yielding SE(2) invariance for HLieConv-SO(2)*. We also experimented with an HLieConv-SE(2) equivariant model, but found that the exponential map for SE(2) (involving Taylor expansions and masking) was not numerically stable enough for the second derivatives required for optimizing through the Hamiltonian dynamics. So instead we benchmark the HLieConv-SO(2) (without centering) and the HLieConv-SO(2)* (with centering) models separately. Layer equivariance is preferable for not prematurely discarding useful information and for better modeling performance, but invariance alone is sufficient for the conservation laws. Additionally, since we know a priori that the spring problem has Euclidean coordinates, we need not model the kinetic energy $K(\mathbf{p}, m) = \sum_{j=1}^n \|\mathbf{p}_j\|^2/m_j$ and instead focus on modeling the potential $V(q, k)$. We observe that this additional inductive bias of Euclidean coordinates improves model performance. Table 6 shows the invariance and equivariance properties of the relevant models and baselines. For Noether conservation, we need both to model the Hamiltonian and have the symmetry property.
**Dataset Generation:** To generate the spring dynamics datasets we generated $D$ systems, each with $N = 6$ particles connected by springs. The system parameters, mass and spring constant, are set by sampling $\{m_1^{(i)}, \dots, m_6^{(i)}, k_1^{(i)}, \dots, k_6^{(i)}\}_{i=1}^D$, with $m_j^{(i)} \sim U(0.1, 3.1)$ and $k_j^{(i)} \sim U(0, 5)$. Following Sanchez-Gonzalez et al. (2019), we set the spring constants as $k_{ij} = k_i k_j$. For each system
<table><thead><tr><th>Model</th><th>F(z, t)</th><th>H(z, t)</th><th>T(2)</th><th>SO(2)</th></tr></thead><tbody><tr><td>FC</td><td>•</td><td></td><td></td><td></td></tr><tr><td>OGN</td><td>•</td><td></td><td></td><td></td></tr><tr><td>HOGN</td><td></td><td>•</td><td>⋆</td><td></td></tr><tr><td>LieConv-T(2)</td><td>•</td><td></td><td>★</td><td></td></tr><tr><td>HLieConv-Trivial</td><td></td><td>•</td><td></td><td></td></tr><tr><td>HLieConv-T(2)</td><td></td><td>•</td><td>★</td><td></td></tr><tr><td>HLieConv-SO(2)</td><td></td><td>•</td><td></td><td>★</td></tr><tr><td>HLieConv-SO(2)*</td><td></td><td>•</td><td>⋆</td><td>★</td></tr></tbody></table>
Table 6. Model characteristics. Models with layers invariant to *G* are denoted with ⋆, and those with equivariant layers with ★.
$i$, the position and momentum of body $j$ were distributed as $\mathbf{q}_j^{(i)} \sim N(0, 0.16I)$, $\mathbf{p}_j^{(i)} \sim N(0, 0.36I)$. Using the analytic form of the Hamiltonian for the spring problem, $\mathcal{H}(\mathbf{q}, \mathbf{p}) = K(\mathbf{p}, m) + V(\mathbf{q}, k)$, we use the RK4 numerical integration scheme to generate 5 second ground truth trajectories broken up into 500 evaluation timesteps. We use a fixed step size scheme for RK4 chosen automatically (as implemented in Chen et al. (2018)) with a relative tolerance of 1e-8 in double precision arithmetic. We then randomly selected a single segment for each trajectory, consisting of an initial state $\mathbf{z}_t$ and $\tau = 4$ transition states: $(\mathbf{z}_{t+1}^{(i)}, \dots, \mathbf{z}_{t+\tau}^{(i)})$.
**Training:** All models were trained in single precision arithmetic (double precision did not make any appreciable difference) with an integrator tolerance of 1e-4. We use a cosine decay for the learning rate schedule and perform early stopping over the validation MSE. We trained with a minibatch size of 200 and for 100 epochs each using the Adam optimizer (Kingma and Ba, 2014) without batch normalization. With 3k training examples, the HLieConv model takes about 20 minutes to train on one 1080Ti.

For the examination of performance over the range of dataset sizes in Figure 8, we cap the validation set to the size of the training set to make the setting more realistic, and we also scale the number of training epochs up as the size of the dataset shrinks (epochs $= 100\sqrt{10^3/D}$), which we found to be sufficient to fit the training set. For $D \le 200$ we use the full dataset in each minibatch.
**Hyperparameters:**
<table><thead><tr><th></th><th>channels</th><th>layers</th><th>lr</th></tr></thead><tbody><tr><td>(H)FC</td><td>256</td><td>4</td><td>1e-2</td></tr><tr><td>(H)OGN</td><td>256</td><td>1</td><td>1e-2</td></tr><tr><td>(H)LieConv</td><td>384</td><td>4</td><td>1e-3</td></tr></tbody></table>
**Hyperparameter tuning:** Model hyperparameters were tuned by grid search over channel width, number of layers, and learning rate. The models were tuned with training, validation, and test datasets consisting of 3000, 2000, and 2000 trajectory segments respectively.
### C.5. Details for Image and Molecular Experiments
**RotMNIST Hyperparameters:** For RotMNIST we train each model for 500 epochs using the Adam optimizer with learning rate 3e-3 and batch size 25. The first linear layer maps the 1-channel grayscale input to $k = 128$ channels, and the number of channels in the bottleneck blocks follows the scaling law from Appendix C.3 as the group elements are downsampled. We use 6 bottleneck blocks, and the total downsampling factor $S = 1/10$ is split geometrically between the blocks as $s = (1/10)^{1/6}$ per block. The initial radius $r$ of the local neighborhoods in the first layer is set so as to include 1/15 of the total number of elements in each neighborhood, and is scaled accordingly thereafter. The subsampled neighborhood used to compute the Monte Carlo convolution estimator uses $p = 25$ elements. The models take less than 12 hours to train on a 1080Ti.

---PAGE_BREAK---
**QM9 Hyperparameters:** For the QM9 molecular data, we use the featurization from Anderson et al. (2019), where the input features $f_i$ are determined by the atom type (C,H,N,O,F) and the atomic charge. The coordinates $x_i$ are simply the raw atomic coordinates measured in angstroms. A separate model is trained for each prediction task, all using the same hyperparameters and early stopping on the validation MAE. We use the same train, validation, test split as Anderson et al. (2019), with 100k molecules for train, 10% for test and the remaining for validation. Like with the other experiments, we use a cosine learning rate decay schedule. Each model is trained using the Adam optimizer for 1000 epochs with a learning rate of 3e-3 and batch size of 100. We use SO(3) data augmentation, 6 bottleneck blocks, each with $k = 1536$ channels. The radius of the local neighborhood is set to $r = \infty$ to include all elements. The model takes about 48 hours to train on a single 1080Ti.
### C.6. Local Neighborhood Visualizations
In Figure 10 we visualize the local neighborhood used with different groups under three different types of transformations: translations, rotations and scaling. The distance and neighborhood are defined for the tuples of group elements and orbit. For Trivial, T(2), SO(2), $\mathbb{R} \times SO(2)$ the correspondence between points and these tuples is one-to-one and we can identify the neighborhood in terms of the input points. For SE(2) each point is mapped to multiple tuples, each of which defines its own neighborhood in terms of other tuples. In the Figure, for SE(2) for a given point we visualize the distribution of points that enter the computation of the convolution at a specific tuple.
---PAGE_BREAK---
**Figure 10.** A visualization of the local neighborhood for different groups, in terms of the points in the input space. For the computation of the convolution at the point in red, elements are sampled from colored region. In each panel, the top row shows translations, middle row shows rotations and bottom row shows scalings of the same image. For $SE(2)$ we visualize the distribution of points entering the computation of the convolution over multiple lift samples. For each of the equivariant models that respects a given symmetry, the points that enter into the computation are not affected by the transformation.
---PAGE_BREAK---
# Slotted Aloha as a game with partial information
Eitan Altman<sup>a</sup>, Rachid El Azouzi<sup>a,b,*</sup>, Tania Jiménez<sup>c</sup>
<sup>a</sup> INRIA, 2004 Route des Lucioles, Projet Mistral, 06902 Sophia Antipolis Cote d'Azur Cedex, France
<sup>b</sup> LIA/CERI, Université d'Avignon, Agroparc, BP 1228, 84911 Avignon, France
<sup>c</sup> CESIMO, Facultad de Ingeniería, Universidad de Los Andes, Mérida, Venezuela
Received 13 March 2003; received in revised form 19 February 2004; accepted 25 February 2004
Available online 7 April 2004
Responsible Editor: E.K.P. Chong
## Abstract
This paper studies distributed choice of retransmission probabilities in slotted ALOHA. Both the cooperative team problem and the noncooperative game problem are considered. Unlike some previous work, we assume that mobiles do not know the number of backlogged packets at other nodes. A Markov chain analysis is used to obtain optimal and equilibrium retransmission probabilities and throughput. We then investigate the impact of adding retransmission costs (which may represent the disutility of power consumption) on the equilibrium, and show how this pricing can be used to make the equilibrium throughput coincide with the optimal team throughput.
© 2004 Elsevier B.V. All rights reserved.
**Keywords:** Slotted Aloha; Nash equilibrium; Markov chain; Pricing
## 1. Introduction
Aloha [4] and slotted Aloha [14] have long been used as random distributed medium access protocols for radio channels. They are in use in both satellite as well as cellular telephone networks for the sporadic transfer of data packets. In these protocols, packets are transmitted sporadically by various users. If packets are sent simultaneously by more than one user then they collide. After the end of the transmission of a packet, the transmitter receives the information on whether there has been a collision (and retransmission is needed) or whether it was well received. All packets involved in a collision are assumed to be corrupted and are retransmitted after some random time. We focus in this paper on the slotted Aloha (which is known to have a better achievable throughput than the unslotted version, [5]) in which time is
* Corresponding author. Address: INRIA, 2004 Route des Lucioles, Projet Mistral, 06902 Sophia Antipolis Cote d'Azur Cedex, France. Tel.: +33-492387628; fax: +33-492387971.
|
| 32 |
+
* E-mail address: rachid.elazouzi@sophia.inria.fr (R. El Azouzi).
|
| 33 |
+
---PAGE_BREAK---
|
| 34 |
+
|
| 35 |
+
divided into units. At each time unit a packet may be transmitted, and at the end of the time interval, the sources get the feedback on whether there was zero, one or more transmissions (collision) during the time slot. A packet that arrives at a source is immediately transmitted. Packets that are involved in a collision are backlogged and are scheduled for retransmission after a random time.
The determination of the above random time can be considered as a stochastic control problem. The information structure, however, is not a classical one: sources do not have full state information, as they do not know how many packets are backlogged, nor how many packets have been involved in a collision.

We study this control problem in two different frameworks:

1. As a team problem, i.e. where there is a common goal for all nodes in the network (such as maximizing the system throughput).

2. As a problem in a noncooperative framework: each node wishes to maximize its own throughput. This gives rise to a game theoretical formulation.

Our main finding is that as the workload increases (i.e. as the packet arrival rate increases), sources become more aggressive at equilibrium in the game setting (in comparison with the team problem), and this results in a dramatic decrease in the total system throughput. To avoid this collapse of the system throughput, we study the effect of adding a cost for transmissions and retransmissions (which can, in particular, represent the battery power cost). We show that this additional cost improves the system's performance, and that an appropriate pricing can be chosen which yields an equilibrium performance that coincides with the team one.

Previous game formulations of slotted ALOHA have been proposed in [10–12]. In the last two references, a full information game is considered, in which each user knows how many backlogged packets there are in the whole network. Moreover, it is assumed in [11,12] that a packet that is to be transmitted for the first time waits for a random time in the same way as a backlogged packet. Our goal is to study slotted Aloha while avoiding these two assumptions; relaxing them allows us to model more accurately the original versions of Aloha, and in particular, relaxing the first assumption allows for more distributed implementations of Aloha. In [10] it is assumed that nodes always have packets to send. Thus there is only one trivial state in the system (all nodes are backlogged), which is known to all users.

For more background on the use of stochastic control and of game theory in communication networks, see [1–3]. We note that the game formulation of our problem is similar to the game formulation of retrial queues, in which customers retry to make a call after some random time if they find the line busy [7,9]. The difference is, however, that in retrial queues there are no collisions.

The structure of the paper is as follows. We begin by introducing in Section 2 the general model and formulating the team and the game problems. We provide a Markov analysis for both the team and the game problem. This analysis is used in Section 3 to numerically study and compare the properties of the team and the game solutions. The model with pricing is then introduced in Section 4 and is investigated numerically in Section 5. We end with a concluding section.

## 2. Model and problem formulation
We use a Markovian model based on [5]. We assume that there are a finite number of sources without buffers. The arrival flow of packets to source *i* follows a Bernoulli process with parameter $q_a$ (i.e. at each time slot, there is a probability $q_a$ of a new arrival at a source, and all arrivals are independent). As long as there is a packet at a source (i.e. as long as it has not been successfully transmitted), new packets to that source are blocked and lost.¹ The arrival processes to different sources are independent. A backlogged packet at source *i* is retransmitted with probability $q_r^i$. In our control and game problems we restrict to simple policies in which $q_r^i$ does not change in time. Since sources are symmetric, we further restrict to finding a symmetric optimal solution, that is, retransmission probabilities $q_r^i$ that do not depend on *i*. We assume that if more than one source attempts transmission in a time slot, all packets are lost.

**Remark 1.** Other models for ALOHA have also been studied in the literature. A commonly used model is one with infinitely many sources [5] with no buffers, in which the process of the total number of (non-blocked) arrivals at a time slot is Poisson with parameter $\lambda$, and the process of combined transmission and retransmission attempts forms a Poisson process with parameter *G*. Analysis of this model shows that it has two quasi-stable operation modes (as long as $\lambda < \exp(-1)$), one corresponding to a congested system (in which there are many backlogged packets and many retransmissions) and one corresponding to an uncongested system (with a small number of backlogged packets). In this model, both operation points turn out to have the same throughput. In our model with finitely many sources there are also two quasi-stable operation modes, but the throughput during congestion periods is lower than in the non-congested periods [5]. We also note that in the case of infinitely many nodes, retransmission with a fixed positive probability renders the system unstable [8]. Finally, we should mention that there are also models in which not all packets involved in a collision are corrupted and lost; see [15] and references therein.

**Remark 2.** Quite frequently the ALOHA protocol is used for sporadic transmissions of signaling packets, such as packets for making a reservation of a dedicated channel for other transmissions (that do not use ALOHA); see e.g. the description of the SPADE on-demand transmission protocol for satellite communications in [16]. In the context of signaling, it is natural to assume that a source does not start generating a new signaling packet (e.g. a new reservation) as long as the current signaling packet has not been transmitted. In that case, the process of attempts to transmit a new packet from a source after the previous packet has been successfully transmitted coincides with our no-buffer model.

We shall use as the state of the system the number of backlogged nodes (or equivalently, of backlogged packets) at the beginning of a slot, and denote it by *n*. For any choice of values $q_r^j \in (0, 1]$, the state process is a Markov chain that contains a single ergodic chain (and possibly transient states as well). Define $q_r$ to be the vector of retransmission probabilities for all users (whose $j$th entry is $q_r^j$). Let $\pi(q_r)$ be the corresponding vector of steady state probabilities, where its $n$th entry, $\pi_n(q_r)$, denotes the probability of $n$ backlogged nodes. When all entries of $q_r$ are the same, say $q$, we shall write (with some abuse of notation) $\pi(q)$ instead of $\pi(q_r)$.

We introduce further notation. Assume that there are $n$ backlogged packets, and all use the same value $q_r$ as retransmission probability. Let $Q_r(i,n)$ be the probability that $i$ out of the $n$ backlogged packets retransmit in the slot. Then

$$ Q_r(i, n) = \binom{n}{i} (1 - q_r)^{n-i} [q_r]^i. \quad (1) $$

¹ In counting the number of packets in the system, this assumption is equivalent to saying that a source does not generate new packets as long as a previous packet has not been successfully transmitted.

Assume that *m* is the number of nodes, and let $Q_a(i, n)$ be the probability that *i* unbacklogged nodes transmit packets in a given slot (i.e. that *i* arrivals occurred at nodes without backlogged packets). Then

$$ Q_a(i,n) = \binom{m-n}{i} (1-q_a)^{m-n-i} [q_a]^i. \quad (2) $$

We set $Q_r(1,0) = 0$ and $Q_a(1,m) = 0$.
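As a quick sketch (with our own helper names, not from the paper), Eqs. (1) and (2) are plain binomial probabilities and can be evaluated directly; the early returns encode the boundary conventions $Q_r(1,0) = 0$ and $Q_a(1,m) = 0$.

```python
from math import comb

def Q_r(i, n, q_r):
    """Eq. (1): probability that exactly i of the n backlogged packets
    are retransmitted in the slot."""
    if i < 0 or i > n:
        return 0.0  # boundary convention, e.g. Q_r(1, 0) = 0
    return comb(n, i) * (1 - q_r) ** (n - i) * q_r ** i

def Q_a(i, n, m, q_a):
    """Eq. (2): probability that exactly i of the m - n unbacklogged nodes
    receive a new arrival (and hence transmit) in the slot."""
    if i < 0 or i > m - n:
        return 0.0  # boundary convention, e.g. Q_a(1, m) = 0
    return comb(m - n, i) * (1 - q_a) ** (m - n - i) * q_a ** i
```

Both families sum to one over $i$, which is a convenient sanity check.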
In case all nodes use the same value $q$ for $q_r$, the transition probabilities of the Markov chain are given by [5]:

$$
P_{n,n+i}(q) =
\begin{cases}
Q_a(i,n), & 2 \le i \le m-n, \\
Q_a(1,n)[1-Q_r(0,n)], & i=1, \\
Q_a(1,n)Q_r(0,n) + Q_a(0,n)[1-Q_r(1,n)], & i=0, \\
Q_a(0,n)Q_r(1,n), & i=-1.
\end{cases}
$$
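The four cases translate directly into code. The following sketch (our own helper names; NumPy assumed, which the paper itself does not use) assembles the matrix row by row; every row must sum to one, which makes a convenient check.

```python
import numpy as np
from math import comb

def transition_matrix(m, q_a, q):
    """(m+1) x (m+1) transition matrix P(q) of the backlog chain when all
    m nodes retransmit with the same probability q."""
    def Qr(i, n):
        return comb(n, i) * (1 - q) ** (n - i) * q ** i if 0 <= i <= n else 0.0
    def Qa(i, n):
        return comb(m - n, i) * (1 - q_a) ** (m - n - i) * q_a ** i if 0 <= i <= m - n else 0.0

    P = np.zeros((m + 1, m + 1))
    for n in range(m + 1):
        if n >= 1:                     # i = -1: a lone retransmission succeeds
            P[n, n - 1] = Qa(0, n) * Qr(1, n)
        # i = 0: a lone arrival succeeds, or nothing changes
        P[n, n] = Qa(1, n) * Qr(0, n) + Qa(0, n) * (1 - Qr(1, n))
        if n < m:                      # i = 1: one arrival collides with a retransmission
            P[n, n + 1] = Qa(1, n) * (1 - Qr(0, n))
        for i in range(2, m - n + 1):  # i >= 2: several arrivals collide among themselves
            P[n, n + i] = Qa(i, n)
    return P
```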
The system throughput (defined as the sample average of the number of packets that are successfully transmitted) is given almost surely by the constant

$$
\mathrm{thp}(q) = \sum_{n=1}^{m} \pi_n(q) \left[ P_{n,n-1}(q) + Q_a(1,n)Q_r(0,n) \right] + \pi_0(q) Q_a(1,0) = q_a \sum_{n=0}^{m} \pi_n(q)(m-n).
$$

Note: the first equality follows from the fact that if the state at the beginning of the slot is $n > 0$, then during that slot there is a departure of a backlogged packet with probability $P_{n,n-1}(q)$, and of a newly arriving packet with probability $Q_a(1,n)Q_r(0,n)$; moreover, if the state is 0 then there is a departure with probability $Q_a(1,0)$. The second equality simply expresses the expected number of arrivals in a time slot (of packets that actually enter the system), which must equal the expected number of departures (and thus the throughput) in the stationary regime.
The team problem is therefore given as the solution of the optimization problem:

$$
\max_q \ \mathrm{thp}(q) \quad \text{s.t.} \quad
\begin{cases}
\pi(q) = \pi(q)P(q), \\
\pi_n(q) \ge 0, \quad n = 0, \dots, m, \\
\displaystyle\sum_{n=0}^m \pi_n(q) = 1.
\end{cases}
$$

A solution to the team problem can be obtained by computing the steady state probabilities recursively, as in Problem 4.1 of [5], and thus obtaining an explicit expression for $\mathrm{thp}(q)$ as a function of $q$.
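As an alternative to the recursion of [5], a brute-force sketch: build $P(q)$, solve $\pi(q) = \pi(q)P(q)$ as a linear system, and scan $\mathrm{thp}(q)$ over a grid on $[\epsilon, 1]$. All function names and the grid resolution below are our illustrative choices.

```python
import numpy as np
from math import comb

def binom_pmf(i, n, p):
    """P[Binomial(n, p) = i], with the convention 0 outside 0 <= i <= n."""
    return comb(n, i) * (1 - p) ** (n - i) * p ** i if 0 <= i <= n else 0.0

def backlog_matrix(m, q_a, q):
    """Transition matrix of the number of backlogged nodes (Section 2)."""
    P = np.zeros((m + 1, m + 1))
    for n in range(m + 1):
        Qr0, Qr1 = binom_pmf(0, n, q), binom_pmf(1, n, q)
        Qa0, Qa1 = binom_pmf(0, m - n, q_a), binom_pmf(1, m - n, q_a)
        if n >= 1:
            P[n, n - 1] = Qa0 * Qr1
        P[n, n] = Qa1 * Qr0 + Qa0 * (1 - Qr1)
        if n < m:
            P[n, n + 1] = Qa1 * (1 - Qr0)
        for i in range(2, m - n + 1):
            P[n, n + i] = binom_pmf(i, m - n, q_a)
    return P

def thp(m, q_a, q):
    """thp(q) = q_a * sum_n pi_n(q) (m - n), with pi solved from pi = pi P."""
    P = backlog_matrix(m, q_a, q)
    # replace one balance equation by the normalisation constraint
    A = np.vstack([(P.T - np.eye(m + 1))[:-1], np.ones(m + 1)])
    b = np.zeros(m + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return q_a * float(pi @ (m - np.arange(m + 1)))

def team_optimum(m, q_a, eps=1e-4, grid=200):
    """Grid search for the symmetric q maximising thp over [eps, 1]."""
    qs = np.linspace(eps, 1.0, grid)
    vals = [thp(m, q_a, q) for q in qs]
    k = int(np.argmax(vals))
    return float(qs[k]), vals[k]
```

A finer grid (or a one-dimensional optimizer on top of `thp`) would sharpen the estimate; the grid search is only meant to mirror the numerical study of Section 3.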
*Singularity at q = 0.* The only point where $P$ does not have a single stationary distribution is $q = 0$, where it has two absorbing states: $n = m$ and $n = m-1$. All other states are transient (for any $q_a > 0$), and the probability of ending at each of the absorbing states depends on the initial distribution of the Markov chain. We note that if the state $m-1$ is reached, then the throughput is $q_a$ w.p.1, whereas if the state $m$ is reached, the throughput equals 0; $m$ is thus a deadlock state. For $q_a > 0$ and $q_r = 0$, the deadlock state is reached with positive probability from any initial state other than $m-1$. We shall therefore exclude $q_r = 0$ and optimize only over the range $\epsilon \le q_r \le 1$. Throughout the paper we choose $\epsilon = 10^{-4}$.

*Existence of a solution.* The steady state probabilities $\pi(q)$ are continuous over $0 < q \le 1$. Since this is not a closed interval, a solution need not exist. However, as we restrict to the closed interval $q \in [\epsilon, 1]$ with $\epsilon > 0$, an optimal solution indeed exists. Note also that the limit $\lim_{q \to 0} \pi(q)$ exists, since $\pi(q)$ is a rational function of $q$ in the neighborhood of zero. Therefore for any $\delta > 0$, there exists some $q > 0$ which is $\delta$-optimal. ($q^* > 0$ is said to be $\delta$-optimal if it satisfies $\mathrm{thp}(q^*) \ge \mathrm{thp}(q) - \delta$ for all $q \in (0, 1]$.)
Next, we formulate the game problem. For a given policy vector $\mathbf{q}_r$ of retransmission probabilities for all users (whose $j$th entry is $q_r^j$), define $([\mathbf{q}_r]^{-i}, \hat{q}_r^i)$ to be the retransmission policy in which user $j$ retransmits at a slot with probability $q_r^j$ for all $j \neq i$, and user $i$ retransmits with probability $\hat{q}_r^i$. Each user $i$ seeks to maximize his own throughput $\mathrm{thp}_i$. The problem we are interested in is then to find a symmetric equilibrium policy $\mathbf{q}_r^* = (q_r, q_r, \dots, q_r)$ such that for any user $i$ and any retransmission probability $q_r^i$ for that user,

$$ \mathrm{thp}_i(\mathbf{q}_r^*) \ge \mathrm{thp}_i([\mathbf{q}_r^*]^{-i}, q_r^i). \quad (3) $$

Since we restrict to symmetric $\mathbf{q}_r^*$, we shall also identify it (with some abuse of notation) with the actual retransmission probability (which is the same for all users). Next we show how to obtain an equilibrium policy. We first note that, due to symmetry, to see whether $\mathbf{q}_r^*$ is an equilibrium it suffices to check (3) for a single player. We shall thus assume that there are $m+1$ users altogether, that the first $m$ users retransmit with a given probability $\mathbf{q}_r^{-(m+1)} = (q_r^o, \dots, q_r^o)$, and that user $m+1$ retransmits with probability $q_r^{(m+1)}$. Define the set

$$ \mathcal{Q}^{m+1}(\mathbf{q}_r^o) = \underset{q_r^{(m+1)} \in [\epsilon,1]}{\operatorname{argmax}} \ \mathrm{thp}_{m+1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)}), $$

where $\mathbf{q}_r^o$ denotes (with some abuse of notation) the policy where all users retransmit with probability $q_r^o$, and where the maximization is taken with respect to $q_r^{(m+1)}$. Then $q_r^*$ is a symmetric equilibrium if

$$ q_r^* \in \mathcal{Q}^{m+1}(q_r^*). $$
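Numerically, such a fixed point can be searched for by damped best-response iteration. The sketch below is generic: `best_response` stands for a numerical argmax of $\mathrm{thp}_{m+1}$ against a symmetric profile, and the toy map used here is purely illustrative (the real oracle requires the two-dimensional chain introduced next); convergence of the iteration is not guaranteed in general.

```python
def symmetric_equilibrium(best_response, q0=0.5, damping=0.5, tol=1e-8, max_iter=10_000):
    """Damped best-response iteration: repeat q <- (1 - d) q + d BR(q) until
    the update stalls, i.e. q is numerically a fixed point q* in BR(q*)."""
    q = q0
    for _ in range(max_iter):
        q_new = (1 - damping) * q + damping * best_response(q)
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    raise RuntimeError("best-response iteration did not converge")

# Purely illustrative best-response map with fixed point q* = 0.8:
toy_br = lambda q: min(1.0, 0.4 + 0.5 * q)
```

The damping factor trades speed for stability; with an oscillating best response a smaller `damping` often helps.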
To compute $\mathrm{thp}_{m+1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)})$, we introduce again a Markov chain, now with a two-dimensional state. The first state component is the number of backlogged packets among users $1, \dots, m$, and the second component is the number of backlogged packets (either 1 or 0) of user $m+1$. The transition probabilities are given by

$$
P_{(n,i),(n+k,j)}(\mathbf{q}_r^o, q_r^{(m+1)}) =
\begin{cases}
Q_a(k,n), & i=j=1,\ 2 \le k \le m-n, \\
Q_a(k,n)(1-q_a), & i=j=0,\ 2 \le k \le m-n, \\
Q_a(k,n)\,q_a, & i=0,\ j=1,\ 2 \le k \le m-n, \\
Q_a(1,n)\bigl[1-Q_r(0,n)\bigl(1-q_r^{(m+1)}\bigr)\bigr], & i=j=1,\ k=1, \\
Q_a(1,n)\bigl[1-Q_r(0,n)\bigr](1-q_a), & i=j=0,\ k=1, \\
Q_a(1,n)\,q_a, & i=0,\ j=1,\ k=1, \\
\bigl(1-q_r^{(m+1)}\bigr)Z + q_r^{(m+1)}\bigl[1-Q_r(0,n)\bigr]Q_a(0,n), & i=j=1,\ k=0, \\
(1-q_a)Z + q_a Q_a(0,n)Q_r(0,n), & i=j=0,\ k=0, \\
q_a Q_a(0,n)\bigl[1-Q_r(0,n)\bigr], & i=0,\ j=1,\ k=0, \\
q_r^{(m+1)} Q_a(0,n)Q_r(0,n), & i=1,\ j=0,\ k=0, \\
Q_a(0,n)Q_r(1,n)\bigl(1-q_r^{(m+1)}\bigr), & i=j=1,\ k=-1, \\
Q_a(0,n)Q_r(1,n)(1-q_a), & i=j=0,\ k=-1, \\
0, & \text{otherwise},
\end{cases}
$$

where $Z = (Q_a(1,n)Q_r(0,n) + Q_a(0,n))[1 - Q_r(1,n)]$, and where $Q_r$ and $Q_a$ are given in (1) and (2), respectively (with $q_r^o$ replacing $q_r$).

The throughput of user $m+1$ is given by

$$ \mathrm{thp}_{m+1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)}) = q_a \sum_{n=0}^{m} \pi_{n,0}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)}). \quad (4) $$
## 3. Numerical investigation

In this section we obtain the retransmission probabilities which solve the team and the game problems. We investigate their dependence, and the dependence of the resulting throughput, on the arrival probability $q_a$ and on the number of mobiles.

Figs. 1 and 2 provide the total throughput and the optimal retransmission probability $q_r$ for $m = 2$, $m = 10$ and $m = 50$ for the team problem, as a function of the arrival probability $q_a$. We see that in heavy traffic, the throughput decreases as the number of mobiles increases. We also observe that the optimal retransmission probability becomes smaller as the arrival probability or the number of mobiles increases: as the system becomes more congested (larger arrival probability or larger number of mobiles), the retransmission probability decreases so as to counter the expected collisions. For light traffic, on the other hand, we observe that slotted Aloha is very efficient when the number of mobiles is large: in that regime, the optimal throughput achieved increases as the number of mobiles increases.

The intuitive reason that the team optimal retransmission probabilities are close to 0 when arrival probabilities are close to one is that if a mobile finds all other mobiles backlogged, then it can transmit all its packets for a very long time at a rate of almost one packet per slot, without fearing collisions. Since its arrival probability is close to one, throughput is not wasted during such periods. (Note however that a throughput close to 1 cannot be achieved, since with some nonnegligible probability all mobiles will be backlogged during long periods when retransmission probabilities are very low.) This behavior is reminiscent of CDMA systems, in which the best performance is sometimes achieved by "time-sharing" the access between users in order to decrease interference [13].

Next, we show in Figs. 3 and 4 the total optimal throughput versus the number of mobiles for some fixed arrival probabilities ($q_a = 0.7, 0.8, 0.9$). In Fig. 3 we observe that the optimal throughput converges to some value as the number of mobiles goes to infinity, and the convergence is faster when the arrival probability $q_a$ is larger. In fact, for heavy traffic with a large number of mobiles, the optimal retransmission probability is seen to be $\epsilon$. The steady state probabilities $\pi$ are then close to $\pi_m = 1/2$, $\pi_{m-1} = 1/2$ and $\pi_n = 0$ for all $n < m-1$. Hence the total throughput becomes $q_a/2$. If we look at the value of the throughput on the y-axis of Fig. 3, we observe that the throughput indeed converges to 0.35 (resp. 0.4, 0.45) for $q_a = 0.7$ (resp. $q_a = 0.8$, $q_a = 0.9$).

Now we present the results obtained for the game problem. Figs. 5 and 6 show the total throughput at equilibrium (obtained by multiplying the expression in Eq. (4) by the number of mobiles) and the retransmission probability at equilibrium, as a function of the arrival probability. We see that for the game problem, in contrast to the team problem, the equilibrium retransmission becomes more and more aggressive as the arrival probability or the number of mobiles increases, which explains the dramatic decrease in the system's throughput. Moreover, the equilibrium retransmission probability quickly increases to 1 as the number of mobiles increases. In particular, *the throughput is zero when $m > 5$ for any arrival probability*. In conclusion, the game solution is very inefficient for heavy traffic, and even for light traffic it becomes inefficient when the number of mobiles is larger than five.

We note that a similar aggressive behavior at equilibrium has been observed in [6] in the context of flow control by several competing users that share a common drop-tail buffer. In that context, however, the most aggressive behavior (transmission at maximum rate) is the "equilibrium" solution for *any arrival rate*, and not just at high rates as in our case. We may thus wonder why retransmission probabilities of 1 are not an equilibrium in our slotted Aloha problem (in the case of light traffic). An intuitive reason could be that if a mobile deviates and retransmits with probability one (while the others continue to retransmit with the equilibrium probability $q^* < 1$), the total congestion in the system (i.e. the number of backlogged mobiles) increases; this provokes more retransmissions from the other mobiles, which in turn causes sufficiently many more collisions of packets from the deviating mobile to decrease its throughput.
Fig. 1. Optimal throughput for the team case as a function of the arrival probabilities $q_a$ for $m = 2, 10, 50$.

Fig. 2. The optimal retransmission probabilities in the team case as a function of the arrival probabilities $q_a$ for $m = 2, 10, 50$.

Fig. 3. Optimal throughput for the team case as a function of the number of mobiles $m$ for $q_a = 0.7, 0.8, 0.9$.

Fig. 4. The optimal retransmission probabilities in the team case as a function of the number of mobiles $m$ for $q_a = 0.7, 0.8, 0.9$.

Fig. 5. Optimal throughput for the game case as a function of the arrival probabilities $q_a$ for $m = 2, 4, 6$.

Fig. 6. The optimal retransmission probabilities in the game case as a function of the arrival probabilities $q_a$ for $m = 2, 4, 6$.

## 4. Adding costs for retransmissions

In this section we consider the problem where there is an extra cost $\theta$ per transmission and retransmission. This can represent the disutility for the consumption of battery energy, which is a scarce resource. For a given symmetric $q$ for all users, the steady-state retransmission cost is $\theta q \sum_{n=0}^{m} \pi_n(q)n$, whereas the transmission cost of arriving packets (i.e. packets that enter the system and are not rejected) is $\theta\,\mathrm{thp}(q)$. (This is because the expected number of arriving packets equals the expected number of departing packets at steady state, and each time a packet arrives at the system it is immediately transmitted.)
Thus the new team problem is

$$ \max_q \left\{ \mathrm{thp}(q)(1-\theta) - \theta q \sum_{n=0}^{m} \pi_n(q)n \right\}. $$
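In code, this objective is a one-liner once $\mathrm{thp}(q)$ and $\pi(q)$ are available from the Section 2 chain; the helper below (our naming) simply evaluates it, taking the precomputed throughput and stationary distribution as inputs.

```python
def team_objective(thp_q, pi, q, theta):
    """Team payoff with per-(re)transmission cost theta:
    thp(q) * (1 - theta) - theta * q * sum_n n * pi_n(q).
    `thp_q` and `pi` are assumed precomputed for this q."""
    expected_backlog = sum(n * p for n, p in enumerate(pi))
    return thp_q * (1.0 - theta) - theta * q * expected_backlog
```

At $\theta = 0$ this reduces to the pure throughput objective of Section 2.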
For the noncooperative problem, the retransmission cost for a symmetric retransmission policy $q_r^o$ of users $1, \dots, m$ and a retransmission probability $q_r^{(m+1)}$ of user $m+1$ is

$$ \theta q_r^{(m+1)} \sum_{n=0}^{m} \pi_{n,1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)}). $$

User $m+1$ is thus faced with the problem:

$$ \max_{q_r^{(m+1)}} J_{m+1}(q_r^o, q_r^{(m+1)}), $$

where

$$ J_{m+1}(q_r^o, q_r^{(m+1)}) = \mathrm{thp}_{m+1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)})(1-\theta) - \theta q_r^{(m+1)} \sum_{n=0}^{m} \pi_{n,1}([\mathbf{q}_r^o]^{-(m+1)}, q_r^{(m+1)}). $$

Define, as before,

$$ \bar{D}_r^{m+1}(q_r^o) = \underset{q_r^{(m+1)} \in [\epsilon, 1]}{\operatorname{argmax}} \ J_{m+1}(q_r^o, q_r^{(m+1)}). $$

We then seek a value $q_r^*$ of the retransmission probability that satisfies

$$ q_r^* \in \bar{D}_r^{m+1}(q_r^*), $$

which is the Nash equilibrium of the game problem with retransmission costs.
## 5. Numerical investigation

In this section we obtain the retransmission probabilities which solve the team and the game problems with the extra transmission costs. We investigate the dependence of the solution on the value $\theta$.

In Figs. 7–12 we depict the throughput obtained at the optimal solution and the optimal retransmission probabilities, respectively, as a function of the arrival probability, for the team problem with $m = 2, 10, 50$ and various values of $\theta$. We see that both the throughput and the retransmission probabilities are monotone decreasing in the cost. This is to be expected, since retransmissions become more costly as $\theta$ increases. An interesting feature is that for any fixed $\theta \neq 0$, the retransmission probabilities first increase and then decrease in the arrival probability. For $\theta = 0$, in contrast, the optimal retransmission probability decreases in the arrival probability (which is natural since congestion in the system increases as $q_a$ increases).

Fig. 7. Throughput at optimal $q_r$ for the team case as a function of $q_a$ for $m = 2$ and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 8. The optimal retransmission probabilities in the team case as a function of $q_a$ for $m = 2$ and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 9. Throughput at optimal $q_r$ for the team case as a function of $q_a$ for $m = 10$ and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 10. The optimal retransmission probabilities in the team case as a function of $q_a$ for $m = 10$ and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 11. Throughput at optimal $q_r$ for the team case as a function of $q_a$ for $m = 50$ and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 12. The optimal retransmission probabilities in the team case as a function of $q_a$ for $m = 50$ and $\theta = 0, 0.4, 0.7, 0.9$.

Next we consider the game problem with $m = 2, 10, 50$ mobiles.

Fig. 13. Total throughput for the game case as a function of the arrival probabilities $q_a$ for $m = 2$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 14. The equilibrium retransmission probabilities in the game case as a function of the arrival probabilities $q_a$ for $m = 2$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 15. Total throughput for the game case as a function of the arrival probabilities $q_a$ for $m = 10$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 16. The equilibrium retransmission probabilities in the game case as a function of the arrival probabilities $q_a$ for $m = 10$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 17. Total throughput for the game case as a function of the arrival probabilities $q_a$ for $m = 50$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Fig. 18. The equilibrium retransmission probabilities in the game case as a function of the arrival probabilities $q_a$ for $m = 50$ (number of mobiles) and $\theta = 0, 0.4, 0.7, 0.9$.

Figs. 13–18 show the impact of $\theta$ on the total throughput and the equilibrium retransmission probability $q_r$, as a function of the arrival probability $q_a$. We see that increasing the cost $\theta$ results in smaller equilibrium retransmission probabilities. We also see that the throughput is indeed improved considerably by adding a cost on retransmission, especially for large arrival probabilities or for $m > 3$. We believe that the poor performance of the game solution can potentially be remedied by this extra cost even for a large number of mobiles. We observe also that different values of $q_a$ require different costs $\theta$ to obtain the best throughput. For example, in Fig. 13, $\theta = 0.4$ gives the best throughput for $q_a = 0.4$, and $\theta = 0.9$ gives the best throughput for $q_a = 0.9$. We then compute the cost $\theta$ that is necessary for the equilibrium retransmission probabilities to coincide with those obtained for the team problem. This is the value of $\theta$ that yields the optimal system throughput. The results are presented in Fig. 19 for $m = 2, 10$ and $50$.

Fig. 19. The retransmission cost $\theta$ such that the optimal retransmission in the game coincides with that of the original team problem, as a function of the arrival probabilities $q_a$ for $m = 2, 10, 50$.
From Fig. 19, we see that when the number of mobiles is large ($\ge 10$), the value of $\theta$ that gives the team solution depends less and less on the number of mobiles. This is an appealing property, since it suggests that for a large number of mobiles ($m \ge 10$) the pricing parameter $\theta$ can be chosen in a robust way, and may perform close to the team case even if the number of mobiles changes.
## 6. Concluding remarks

We have studied three approaches for choosing retransmission probabilities in a slotted Aloha system: the team problem, the noncooperative game problem, and the game problem with pricing. The objective was initially to maximize the throughput. We saw that as the arrival probabilities increased, the behavior of the mobiles at equilibrium became more and more aggressive (as compared to the team problem), which resulted in a global deterioration of the system throughput. This is in contrast to the team problem, in which the throughput increased with the arrival probabilities. We then considered additional costs on transmissions and showed numerically that pricing can be used to enforce an equilibrium whose throughput corresponds to the team optimal solution.

## Acknowledgements

This work was partially supported by the EURO NGI network of excellence as well as by the PRIXNET ARC Inria collaboration grant.
|
| 259 |
+
---PAGE_BREAK---
|
| 260 |
+
|
| 261 |
+
## References
|
| 262 |
+
|
| 263 |
+
**Eitan Altman** received the B.Sc. degree in Electrical Engineering (1984), the B.A. degree in Physics (1984) and the Ph.D. degree in Electrical Engineering (1990), all from the Technion – Israel Institute of Technology, Haifa. In 1990 he also received his B.Mus. degree in Music Composition from Tel-Aviv University. Since 1990, he has been with INRIA (the French national research institute in informatics and control) in Sophia-Antipolis, France. His current research interests include performance evaluation and control of telecommunication networks, stochastic control and dynamic games. In recent years, he has applied control-theoretic techniques in several joint projects with the French telecommunications company France Télécom. Since 2000, he has also been with CESIMO, Facultad de Ingeniería, Universidad de Los Andes, Mérida, Venezuela.
**Rachid El Azouzi** received the Ph.D. degree in Applied Mathematics from the Mohammed V University, Rabat, Morocco (2000). He joined INRIA (the French national research institute in informatics and control) in Sophia-Antipolis for post-doctoral and research engineer positions. Since 2003, he has been a researcher at the University of Avignon, France. His research interests are mobile networks, performance evaluation, the TCP protocol, error control in wireless networks, resource allocation, networking games and pricing.
---PAGE_BREAK---
**Tania Jiménez** received her DEA (equivalent to an M.Sc.) in 1997 and her Ph.D. in 2000, both from the University of Nice Sophia-Antipolis, in Networks and Distributed Systems. Her research interests include simulation as well as optimization and control of telecommunication networks. She has been a teaching and research assistant at the University of Nice, teaching computer science courses. She is now a lecturer at CESIMO, Facultad de Ingenieria, Universidad de Los Andes, Merida, Venezuela.
|
samples_new/texts_merged/1259736.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
samples_new/texts_merged/174916.md
ADDED
|
@@ -0,0 +1,469 @@
---PAGE_BREAK---
ON THE LOCATION OF ZEROS OF THE LAPLACIAN MATCHING POLYNOMIALS OF GRAPHS
JIANG-CHAO WAN, YI WANG, ALI MOHAMMADIAN
School of Mathematical Sciences, Anhui University, Hefei 230601, Anhui, China
**ABSTRACT.** The Laplacian matching polynomial of a graph $G$, denoted by $\mathcal{LM}(G,x)$, is a new graph polynomial all of whose roots are nonnegative real numbers. In this paper, we investigate the location of zeros of the Laplacian matching polynomials. Let $G$ be a connected graph. We show that $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a tree. We prove that the number of distinct positive zeros of $\mathcal{LM}(G,x)$ is at least equal to the length of the longest path in $G$. It is also established that the zeros of $\mathcal{LM}(G,x)$ and $\mathcal{LM}(G-e,x)$ interlace for each edge $e$ of $G$. Using the path-tree of $G$, we present a linear algebraic approach to investigate the largest zero of $\mathcal{LM}(G,x)$ and, in particular, to give tight upper and lower bounds on it.
# 1. INTRODUCTION
Graph polynomials, such as the characteristic polynomial, the chromatic polynomial, the independence polynomial, the matching polynomial, and many others, are widely studied and play important roles in applications of graphs across diverse fields. The location of zeros of graph polynomials is a central topic in algebraic combinatorics and can be used to describe some structures and parameters of graphs. In this paper, we focus on the location of zeros of the Laplacian matching polynomials of graphs. For more results on the location of zeros of graph polynomials, we refer to [9].
Throughout this paper, all graphs are assumed to be finite, undirected, and without loops or multiple edges. Let $G$ be a graph. We denote the vertex set of $G$ by $V(G)$ and the edge set of $G$ by $E(G)$. Let $M$ be a subset of $E(G)$. We denote by $V(M)$ the set of vertices of $G$ each of which is an endpoint of one of the edges in $M$. If no two distinct edges in $M$ share a common endpoint, then $M$ is called a *matching* of $G$. The set of matchings of $G$ is denoted by $\mathcal{M}(G)$. A matching $M \in \mathcal{M}(G)$ is said to be *perfect* if $V(M) = V(G)$. The *matching polynomial* of $G$ is

$$ \mathcal{M}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} x^{|V(G) \setminus V(M)|} $$

which was formally defined by Heilmann and Lieb [7] in studying statistical physics, although it has appeared independently in several different contexts.
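As an illustrative aside (not part of the paper), the defining sum can be evaluated directly by enumerating all matchings of a small graph; the helper names below are ours. A minimal brute-force sketch:

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching of G: subsets of pairwise vertex-disjoint edges."""
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):  # no repeated endpoint
                yield sub

def matching_poly(n_vertices, edges, x):
    """Evaluate M(G, x) = sum over matchings M of (-1)^|M| x^(|V(G)| - 2|M|)."""
    return sum((-1) ** len(M) * x ** (n_vertices - 2 * len(M))
               for M in matchings(edges))

# The path 0-1-2 has matchings {}, {01}, {12}, so M(P3, x) = x^3 - 2x.
assert all(matching_poly(3, [(0, 1), (1, 2)], x) == x**3 - 2*x
           for x in range(-3, 4))
```

Exponential enumeration is only meant for sanity checks on tiny graphs.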
The matching polynomial is a fascinating mathematical object and attracts considerable attention of researchers. For instance, by studying the multiplicity of zeros of the matching polynomials, Chen and Ku [8] gave a generalization of the Gallai–Edmonds theorem which is a
2020 Mathematics Subject Classification. Primary: 05C31, 05C70. Secondary: 05C05, 05C50, 12D10.

Key words and phrases. Graph polynomial, Matching, Subdivision of graphs, Zeros of polynomials.

Email addresses: wanjc@stu.ahu.edu.cn (J.-C. Wan), wangy@ahu.edu.cn (Y. Wang, corresponding author), ali.m@ahu.edu.cn (A. Mohammadian).
Funding. The research of the second author is supported by the National Natural Science Foundation of China with grant numbers 11771016 and 11871073. The research of the third author is supported by the Natural Science Foundation of Anhui Province with grant number 2008085MA03.

---PAGE_BREAK---

structure theorem in classical graph theory. As another example, using a well-known upper bound on the zeros of the matching polynomials, Marcus, Spielman, and Srivastava [10] established that infinitely many bipartite Ramanujan graphs exist. Some earlier facts on the matching polynomials can be found in [4].
We want to summarize here some basic features of the zeros of the matching polynomial. For this, let us first introduce some more notation and terminology. For a vertex $v$ of a graph $G$, we denote by $N_G(v)$ the set of all vertices of $G$ adjacent to $v$. The degree of $v$ is defined as $|N_G(v)|$ and is denoted by $d_G(v)$. The maximum degree and the minimum degree of the vertices of $G$ are denoted by $\Delta(G)$ and $\delta(G)$, respectively. For a subset $W$ of $V(G)$, we shall use $G[W]$ to denote the subgraph of $G$ induced by $W$ and we simply use $G-W$ instead of $G[V(G)\setminus W]$. Also, for a vertex $v$ of $G$, we simply write $G-v$ for $G - \{v\}$. For an edge $e$ of $G$, we denote by $G-e$ the subgraph of $G$ obtained by deleting the edge $e$.
Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_m$ be respectively the zeros of two real rooted polynomials $f$ and $g$ with $\deg f = n$ and $\deg g = m$. We say that the zeros of $f$ and $g$ interlace if either

$$\alpha_1 \le \beta_1 \le \alpha_2 \le \beta_2 \le \dots$$

or

$$\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots$$

in which case one clearly must have $|n-m| \le 1$. We adopt the convention that the zeros of any polynomial of degree 0 interlace the zeros of any other polynomial.

For any connected graph $G$, the assertions given in (1.1)-(1.3) are known.
(1.1) All the roots of $\mathcal{M}(G, x)$ are real. Moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{M}(G, x)$ lie in the interval $(-2\sqrt{\Delta(G)-1}, 2\sqrt{\Delta(G)-1})$ [7].

(1.2) The number of distinct roots of $\mathcal{M}(G, x)$ is at least equal to $\ell(G)+1$, where $\ell(G)$ is the length of the longest path in $G$ [5].

(1.3) For each vertex $v \in V(G)$, the zeros of $\mathcal{M}(G-v, x)$ interlace the zeros of $\mathcal{M}(G, x)$. In addition, the largest zero of $\mathcal{M}(G, x)$ has multiplicity 1 and is greater than the largest zero of $\mathcal{M}(G-v, x)$ [6].
Recently, Mohammadian [11] introduced a new graph polynomial, called the *Laplacian matching polynomial*, which is defined for a graph $G$ as

$$ (1.4) \qquad \mathcal{LM}(G,x) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} (x - d_G(v)) \right). $$

Mohammadian proved that all roots of $\mathcal{LM}(G, x)$ are real and nonnegative, and moreover, if $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ lie in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1})$. In view of this interval, it is natural to ask for a necessary and sufficient condition for $0$ to be a root of $\mathcal{LM}(G, x)$. More generally, as a new real rooted graph polynomial, it is natural to investigate properties of its zeros such as the interlacing of zeros, upper and lower bounds on the largest zero, the maximum multiplicity of zeros, and the number of distinct zeros. In this paper, we mainly prove that the assertions given in (1.5)-(1.7) hold for any connected graph $G$, where $\ell(G)$ denotes the length of the longest path in $G$.
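Definition (1.4) can likewise be evaluated by brute force; the following sketch (our own illustration, not from the paper) checks it on the triangle $C_3$, where every degree is 2 and the only matchings are the empty one and the three single edges, so $\mathcal{LM}(C_3,x) = (x-2)^3 - 3(x-2)$.

```python
from itertools import combinations

def laplacian_matching_poly(edges, x):
    """Evaluate LM(G, x) from definition (1.4) by enumerating matchings."""
    verts = sorted({v for e in edges for v in e})
    deg = {v: sum(v in e for e in edges) for v in verts}
    total = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = {v for e in sub for v in e}
            if len(used) == 2 * len(sub):  # edges pairwise disjoint: a matching
                prod = 1
                for v in verts:
                    if v not in used:
                        prod *= x - deg[v]
                total += (-1) ** len(sub) * prod
    return total

# Triangle C3: LM(C3, x) = (x-2)^3 - 3(x-2); its zeros 2 and 2 ± sqrt(3)
# are indeed nonnegative reals.
tri = [(0, 1), (1, 2), (0, 2)]
assert all(laplacian_matching_poly(tri, x) == (x - 2)**3 - 3*(x - 2)
           for x in range(-2, 6))
```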
(1.5) If $\Delta(G) \ge 2$, then the zeros of $\mathcal{LM}(G, x)$ are contained in the interval $[0, \Delta(G) + 2\sqrt{\Delta(G)-1}\cos\frac{\pi}{2\ell(G)+2}]$, and in addition, the upper bound of the interval is a zero of $\mathcal{LM}(G, x)$ if and only if $G$ is a cycle.

---PAGE_BREAK---

(1.6) The number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots.

(1.7) For each edge $e \in E(G)$, the zeros of $\mathcal{L}\mathcal{M}(G,x)$ and $\mathcal{L}\mathcal{M}(G-e,x)$ interlace in the sense that, if $\alpha_1 \le \cdots \le \alpha_n$ and $\beta_1 \le \cdots \le \beta_n$ are respectively the zeros of $\mathcal{L}\mathcal{M}(G,x)$ and $\mathcal{L}\mathcal{M}(G-e,x)$ in which $n = |V(G)|$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \cdots \le \beta_n \le \alpha_n$. Further, the largest zero of $\mathcal{L}\mathcal{M}(G,x)$ has multiplicity 1 and is strictly greater than the largest zero of $\mathcal{L}\mathcal{M}(H,x)$ for any proper subgraph $H$ of $G$.
It should be mentioned that the Laplacian matching polynomial has recently been studied under a different name and expression by Chen and Zhang [17].
For a graph $G$, the *subdivision* of $G$, denoted by $S(G)$, is the graph derived from $G$ by replacing every edge $e = \{a,b\}$ of $G$ with two edges $\{a,v_e\}$ and $\{v_e,b\}$ along with the new vertex $v_e$ corresponding to the edge $e$. We know from a result of Yan and Yeh [16] that

$$
(1.8) \qquad \mathcal{M}(S(G),x) = x^{|E(G)|-|V(G)|} \mathcal{L}\mathcal{M}(G,x^2)
$$

for any graph $G$, which is also proved by Chen and Zhang [17] by a different method. The equality (1.8) shows that the problem of the location of zeros of the Laplacian matching polynomial of a graph $G$ can be transformed into the problem of the location of zeros of the matching polynomial of $S(G)$. For instance, using (1.8) and the first statement in (1.1), it immediately follows that the zeros of $\mathcal{LM}(G,x)$ are nonnegative real numbers. The assertion (1.6) is proved in Section 2 via the subdivision of graphs.
One of the most important tools in the theory of the matching polynomial is the concept of the 'path-tree', which was introduced by Godsil [5]. Given a graph $G$ and a vertex $u \in V(G)$, the *path-tree* $T(G, u)$ is the tree whose vertices are the paths in $G$ starting at $u$, where two such paths are adjacent if one is a maximal proper subpath of the other. In Section 3, we show that the path-tree is also applicable to the Laplacian matching polynomial after some appropriate adjustments. Using this, we prove (1.5), which is a slight improvement of the second statement of Theorem 2.6 of [11]. The assertion (1.7) is proved in Section 3 by linear algebra arguments.
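To make the path-tree construction concrete, the sketch below (our own illustration; names are ours) enumerates the paths of a small graph starting at $u$ and joins each path to its maximal proper subpath, i.e. the path with its last vertex removed. By construction every non-root path has exactly one such parent, so the result is a tree.

```python
def paths_from(u, adj):
    """All paths in G starting at u, each recorded as a tuple of vertices."""
    out, stack = [], [(u,)]
    while stack:
        p = stack.pop()
        out.append(p)
        for w in adj[p[-1]]:
            if w not in p:  # extend only to unvisited vertices
                stack.append(p + (w,))
    return out

def path_tree_edges(paths):
    """Join each path of length >= 1 to its maximal proper subpath."""
    pset = set(paths)
    return [(p, p[:-1]) for p in pset if len(p) > 1]

# Triangle with u = 0: the paths are (0,), (0,1), (0,2), (0,1,2), (0,2,1),
# so T(C3, 0) has 5 vertices and, being a tree, 4 edges.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
ps = paths_from(0, adj)
assert len(ps) == 5
assert len(path_tree_edges(ps)) == len(ps) - 1
assert all(p[:-1] in set(ps) for p in ps if len(p) > 1)
```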
Let us introduce more notations and definitions before moving on to the next section. We use $\lambda(f(x))$ to denote the largest zero of a real rooted polynomial $f(x)$. For a square matrix $M$, we shall use $\varphi(M, x)$ to denote the characteristic polynomial of $M$ in the indeterminate $x$. If all the roots of $\varphi(M, x)$ are real, then its largest zero is denoted by $\lambda(M)$. For a graph $G$, the *adjacency matrix* of $G$, denoted by $A(G)$, is a matrix whose rows and columns are indexed by $V(G)$ and whose $(u, v)$-entry is 1 if $u$ and $v$ are adjacent and 0 otherwise. Let $D(G)$ be the diagonal matrix whose rows and columns are indexed as the rows and the columns of $A(G)$, with $d_G(v)$ in the $v$th diagonal position. The matrices $L(G) = D(G) - A(G)$ and $Q(G) = D(G) + A(G)$ are respectively said to be the *Laplacian matrix* and the *signless Laplacian matrix* of $G$. It is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ if and only if $G$ is a forest [14]. In addition, it is proved that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Among other results, we present a generalization of these results in Section 2.
## 2. SUBDIVISION OF GRAPHS AND THE LAPLACIAN MATCHING POLYNOMIAL

In this section, we examine the location of zeros of the Laplacian matching polynomial by establishing a relation between the Laplacian matching polynomial of a graph and the matching polynomial of the subdivision of that graph. Then, by analysing the structures of the subdivision of graphs, we will prove (1.6). To begin with, we recall the multivariate matching polynomial that covers both the matching polynomial and the Laplacian matching polynomial. This multivariate graph polynomial was introduced by Heilmann and Lieb [7].
---PAGE_BREAK---

Let $G$ be a graph and associate the vector $\mathbf{x}_G = (x_v)_{v \in V(G)}$ with $G$ in which $x_v$ is an indeterminate corresponding to the vertex $v \in V(G)$. Notice that, for a subgraph $H$ of $G$, $\mathbf{x}_H$ is the vector that has the same coordinates as $\mathbf{x}_G$ in the positions corresponding to the vertices in $V(H)$. The *multivariate matching polynomial* of $G$ is defined as
$$ (2.1) \qquad \mathfrak{M}(G, \mathbf{x}_G) = \sum_{M \in \mathcal{M}(G)} (-1)^{|M|} \left( \prod_{v \in V(G) \setminus V(M)} x_v \right). $$

Let $\mathbf{1}_G$ be the all one vector of length $|V(G)|$. Also, for a subgraph $H$ of $G$, we let $\mathbf{d}_{G,H} = (d_G(v))_{v \in V(H)}$. For simplicity, we write $\mathbf{d}_G$ instead of $\mathbf{d}_{G,G}$. We sometimes drop the subscript of the vector symbols if there is no possible confusion. It is easy to see that

$$ (2.2) \qquad \mathfrak{M}(G, x\mathbf{1}_G) = \mathcal{M}(G, x) $$

and

$$ (2.3) \qquad \mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G) = \mathcal{L}\mathcal{M}(G, x). $$

Note that

$$ \mathfrak{M}(G_1 \cup G_2, (\mathbf{x}_{G_1}, \mathbf{x}_{G_2})) = \mathfrak{M}(G_1, \mathbf{x}_{G_1})\mathfrak{M}(G_2, \mathbf{x}_{G_2}), $$

where $G_1 \cup G_2$ denotes the disjoint union of two graphs $G_1$ and $G_2$. So, in what follows, we often restrict our attention to connected graphs.
We need the following useful lemma in the sequel.

**Lemma 2.1** (Amini [1]). Let $G$ be a graph. For any vertex $v \in V(G)$,

$$ \mathfrak{M}(G, \mathbf{x}_G) = x_v \mathfrak{M}(G - v, \mathbf{x}_{G-v}) - \sum_{w \in N_G(v)} \mathfrak{M}(G - v - w, \mathbf{x}_{G-v-w}). $$
By combining Lemma 2.1 and (2.2), we get

$$ (2.4) \qquad \mathcal{M}(G, x) = x\mathcal{M}(G-v, x) - \sum_{w \in N_G(v)} \mathcal{M}(G-v-w, x), $$

which is a well known recursive formula for the matching polynomial.
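Recursion (2.4) is easy to verify numerically on a small example; the sketch below (our own brute-force check, helper names ours) confirms it on the 4-cycle at vertex 0, whose neighbours are 1 and 3.

```python
from itertools import combinations

def matching_poly(verts, edges, x):
    """Evaluate M(G, x) by summing over all matchings of G."""
    total = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = {v for e in sub for v in e}
            if len(used) == 2 * len(sub):  # a matching
                total += (-1) ** len(sub) * x ** (len(verts) - 2 * len(sub))
    return total

def delete(verts, edges, gone):
    """Vertex deletion: drop the vertices in `gone` and their incident edges."""
    return ([v for v in verts if v not in gone],
            [e for e in edges if gone.isdisjoint(e)])

# Check (2.4) on C4 at v = 0 for several integer values of x.
verts, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]
for x in range(-4, 5):
    lhs = matching_poly(verts, edges, x)
    rhs = x * matching_poly(*delete(verts, edges, {0}), x)
    for w in (1, 3):  # neighbours of vertex 0
        rhs -= matching_poly(*delete(verts, edges, {0, w}), x)
    assert lhs == rhs
```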
The following theorem, which is a generalization of (1.8), plays a crucial role in our proofs in Section 3.

**Theorem 2.2.** Let $G$ be a graph. For any subset $W$ of $V(G)$,

$$ \mathcal{M}(S(G) - W, x) = x^{|E(G)| - |V(G)| + |W|} \mathfrak{M}(G - W, x^2 \mathbf{1}_{G-W} - \mathbf{d}_{G,G-W}). $$

*Proof.* For simplicity, let $k = |V(G) \setminus W|$ and $m = |E(G)|$. We prove the assertion by induction on $k$. If $V(G) \setminus W = \{u\}$ for some vertex $u \in V(G)$, then $S(G) - W$ consists of a star on $d_G(u) + 1$ vertices and $|E(G)| - d_G(u)$ isolated vertices. Therefore,
$$ \mathcal{M}(S(G) - W, x) = x^{m+1} - d_G(u)x^{m-1} $$

and

$$ \mathfrak{M}(G - W, x^2 \mathbf{1} - \mathbf{d}) = x^2 - d_G(u). $$

---PAGE_BREAK---

So, the claimed equality holds for $k=1$. Assume that $k \ge 2$. Choose a vertex $u \in V(G) \setminus W$ and let $H = S(G) - W - u$. By Lemma 2.1, the induction hypothesis and (2.4), we have
$$
\begin{align*}
x^{m-k+2}\mathfrak{M}(G-W, x^2\mathbf{1}-\mathbf{d}) &= x(x^2-d_G(u))x^{m-k+1}\mathfrak{M}(G-W-u, x^2\mathbf{1}-\mathbf{d}) \\
&\quad - \sum_{v \in N_{G-W}(u)} x^{m-k+2}\mathfrak{M}(G-W-u-v, x^2\mathbf{1}-\mathbf{d}) \\
&= x(x^2-d_G(u))\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x) \\
&= x^2\mathcal{M}(S(G)-W,x) + x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) \\
&\quad - d_G(u)x\mathcal{M}(H,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x).
\end{align*}
$$
Hence, in order to complete the induction step, it suffices to prove that

$$ (2.5) \qquad d_G(u)x\mathcal{M}(H,x) = x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x). $$

To establish (2.5), let $N_G(u) \cap W = \{a_1, \dots, a_s\}$ and $N_G(u) \setminus W = \{b_1, \dots, b_t\}$. Also, for $i=1, \dots, s$, let $a'_i$ be the vertex of $S(G)$ corresponding to the edge $\{u, a_i\}$ of $G$ and, for $j=1, \dots, t$, let $b'_j$ be the vertex of $S(G)$ corresponding to the edge $\{u, b_j\}$ of $G$. Notice that, if one of $N_G(u) \cap W$ and $N_G(u) \setminus W$ is empty, then we may derive (2.5) by the same argument as below. We have $d_G(u) = s+t$ and $N_{S(G)-W}(u) = N_{S(G)}(u) = \{a'_1, \dots, a'_s, b'_1, \dots, b'_t\}$. The structure of $H$ is illustrated in Figure 1.
**Figure 1.** The structure of $H$.

We have $d_H(a'_i) = 0$ for $i = 1, \dots, s$ and $d_H(b'_j) = 1$ for $j = 1, \dots, t$. By applying (2.4) for $a'_i$ and $b'_j$, we find that

$$ \mathcal{M}(H,x) = x\mathcal{M}(H-a'_i,x) $$
---PAGE_BREAK---

and
$$
\begin{align*}
x\mathcal{M}(H, x) &= x^2\mathcal{M}(H - b'_j, x) - x\mathcal{M}(H - b_j - b'_j, x) \\
&= x^2\mathcal{M}(H - b'_j, x) - \mathcal{M}(H - b_j, x).
\end{align*}
$$
Therefore,
$$
\begin{align*}
d_G(u)x\mathcal{M}(H,x) &= sx\mathcal{M}(H,x) + tx\mathcal{M}(H,x) \\
&= x^2 \sum_{i=1}^{s} \mathcal{M}(H - a'_i, x) + x^2 \sum_{j=1}^{t} \mathcal{M}(H - b'_j, x) - \sum_{j=1}^{t} \mathcal{M}(H - b_j, x) \\
&= x^2 \sum_{v \in N_{S(G)-W}(u)} \mathcal{M}(H-v,x) - \sum_{v \in N_{G-W}(u)} \mathcal{M}(H-v,x),
\end{align*}
$$
which is exactly (2.5). This completes the proof. $\square$
In what follows, we prove some results about the Laplacian matching polynomial by analysing the structures of the subdivision of graphs. The following consequence immediately follows from Theorem 2.2 and the first statement in (1.1). It is worth mentioning that the following result is proved in [17] for a different expression of the Laplacian matching polynomial.
**Corollary 2.3.** Let $G$ be a graph. Then

$$
\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|} \mathcal{L}\mathcal{M}(G, x^2).
$$

In particular, the zeros of $\mathcal{L}\mathcal{M}(G, x)$ are nonnegative real numbers.
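The identity of Corollary 2.3 can be checked by brute force on a small example; in the sketch below (our own illustration, names ours) the triangle is subdivided into the 6-cycle, and since $|E(C_3)| = |V(C_3)|$ the identity reads $\mathcal{M}(S(C_3), x) = \mathcal{LM}(C_3, x^2)$.

```python
from itertools import combinations

def all_matchings(edges):
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):
                yield sub

def matching_poly(n, edges, x):
    return sum((-1)**len(M) * x**(n - 2*len(M)) for M in all_matchings(edges))

def lap_matching_poly(edges, x):
    verts = sorted({v for e in edges for v in e})
    deg = {v: sum(v in e for e in edges) for v in verts}
    total = 0
    for M in all_matchings(edges):
        used = {v for e in M for v in e}
        prod = 1
        for v in verts:
            if v not in used:
                prod *= x - deg[v]
        total += (-1)**len(M) * prod
    return total

def subdivide(edges):
    """Replace each edge {a, b} by a path a - v_e - b with a fresh vertex v_e."""
    out = []
    for i, (a, b) in enumerate(edges):
        ve = ('e', i)
        out += [(a, ve), (ve, b)]
    return out

tri = [(0, 1), (1, 2), (0, 2)]
sdiv = subdivide(tri)
n_sub = len({v for e in sdiv for v in e})  # 6 vertices in S(C3)
for x in range(-3, 4):
    assert matching_poly(n_sub, sdiv, x) == lap_matching_poly(tri, x**2)
```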
For a graph $G$, it is proved that $\mathcal{L}\mathcal{M}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Since $0$ is an eigenvalue of $L(G)$, we deduce that $\mathcal{L}\mathcal{M}(G, 0) = 0$ if $G$ is a forest. From (1.4), we get the combinatorial identity

$$
\sum_{M \in \mathcal{M}(F)} (-1)^{|M|} \left( \prod_{v \in V(F) \setminus V(M)} d_F(v) \right) = 0
$$

for any forest $F$. The following theorem, which is proved in [17], gives a necessary and sufficient condition for 0 to be a root of the Laplacian matching polynomial. We present here a different proof for it.
**Theorem 2.4** (Chen, Zhang [17]). Let $G$ be a connected graph. Then, $0$ is a root of $\mathcal{LM}(G, x)$ if and only if $G$ is a tree.

*Proof.* If $G$ is a tree, then $|E(G)| = |V(G)| - 1$ and so $\mathcal{LM}(G, x^2) = x\mathcal{M}(S(G), x)$ by Corollary 2.3, implying that $0$ is a root of $\mathcal{LM}(G, x)$. We prove that $0$ is not a root of $\mathcal{LM}(G, x)$ if $G$ is not a tree. For this, assume that $|E(G)| \ge |V(G)|$. One may easily consider $S(G)$ as a bipartite graph with the bipartition $\{V(G), E(G)\}$ after identifying each new vertex $v_e$ of $S(G)$ with its corresponding edge $e$ of $G$.
We claim that $S(G)$ has a matching that saturates the part $V(G)$. If $G$ contains a vertex $u$ with degree 1 and $e$ is the edge incident to $u$ in $G$, then it suffices to prove that $S(G-u)$ has a matching that saturates the part $V(G-u)$, since the union of such a matching and the edge $\{u, v_e\}$ forms a matching of $S(G)$ that saturates the part $V(G)$. Thus, we may assume that $d_G(v) \ge 2$ for all vertices $v \in V(G)$. We are going to establish that $S(G)$ satisfies Hall's condition [2, Theorem 16.4]. For a subset $W$ of $V(G)$, we shall use $N_G(W)$ to denote the set of vertices of $G$ each of which is adjacent to a vertex in $W$ and $\partial_G(W)$ to denote the set of edges of $G$ each of which has exactly one endpoint in $W$. For any subset $U$ of the part $V(G)$, since $d_G(v) \ge 2$ for all vertices $v \in V(G)$,
$$
(2.6) \qquad |\partial_{S(G)}(U)| \ge 2|U|.
$$

---PAGE_BREAK---
On the other hand, $d_{S(G)}(v_e) = 2$ for each $e \in E(G)$, so

$$ (2.7) \qquad |\partial_{S(G)}(N_{S(G)}(U))| = 2|N_{S(G)}(U)|. $$

Clearly, $|\partial_{S(G)}(N_{S(G)}(U))| \ge |\partial_{S(G)}(U)|$ which implies that $|N_{S(G)}(U)| \ge |U|$ using (2.6) and (2.7). This means that $S(G)$ satisfies Hall's condition, as required.

We proved that $S(G)$ has a matching that saturates the part $V(G)$. This means that the smallest power of $x$ in $\mathcal{M}(S(G), x)$ is $|E(G)| - |V(G)|$ by (2.1) and (2.2). In view of Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that the constant term in $\mathcal{LM}(G, x)$ is nonzero. So, 0 is not a root of $\mathcal{LM}(G, x)$. This completes the proof. $\square$
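Theorem 2.4 is also easy to confirm on small cases by evaluating definition (1.4) at $x = 0$; the sketch below (our own check, names ours) does so for the trees $P_3$ and $P_4$ and for the triangle, which is not a tree.

```python
from itertools import combinations

def lap_matching_poly(edges, x):
    """Evaluate LM(G, x) from definition (1.4) by enumerating matchings."""
    verts = sorted({v for e in edges for v in e})
    deg = {v: sum(v in e for e in edges) for v in verts}
    total = 0
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            used = {v for e in sub for v in e}
            if len(used) == 2 * len(sub):  # pairwise disjoint edges
                prod = 1
                for v in verts:
                    if v not in used:
                        prod *= x - deg[v]
                total += (-1) ** len(sub) * prod
    return total

# Trees have 0 as a root of LM; the triangle (not a tree) does not.
assert lap_matching_poly([(0, 1), (1, 2)], 0) == 0            # path P3
assert lap_matching_poly([(0, 1), (1, 2), (2, 3)], 0) == 0    # path P4
assert lap_matching_poly([(0, 1), (1, 2), (0, 2)], 0) != 0    # triangle C3
```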
In the next theorem, we give a lower bound on the number of distinct zeros of the Laplacian matching polynomial.

**Theorem 2.5.** Let $G$ be a connected graph and let $\ell(G)$ be the length of the longest path in $G$. Then the number of distinct positive roots of $\mathcal{LM}(G, x)$ is at least equal to $\ell(G)$. Also, if $\delta(G) \ge 2$, then $\mathcal{LM}(G, x)$ has at least $\ell(G) + 1$ distinct positive roots.
*Proof.* For convenience, let $\ell = \ell(G)$. Denote by $\ell'$ the length of the longest path in $S(G)$. From (1.2), $\mathcal{M}(S(G), x)$ has at least $\ell' + 1$ distinct roots. By Corollary 2.3, $\mathcal{M}(S(G), x) = x^{|E(G)|-|V(G)|}\mathcal{LM}(G, x^2)$ which shows that $\mathcal{LM}(G, x^2)$ has at least $\ell'$ distinct nonzero roots. Since all roots of $\mathcal{LM}(G, x)$ are real and nonnegative by Corollary 2.3, it follows that $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil$ distinct positive roots.

For each edge $e \in E(G)$, denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $w_0, w_1, \dots, w_\ell$ be a path in $G$. Then, $w_0, v_{e_1}, w_1, \dots, v_{e_\ell}, w_\ell$ is a path in $S(G)$ of length $2\ell$, where $e_i = \{w_{i-1}, w_i\} \in E(G)$ for $i=1, \dots, \ell$. Thus, $\ell' \ge 2\ell$ and so $\mathcal{LM}(G, x)$ has at least $\ell$ distinct positive roots.
Now, assume that $\delta(G) \ge 2$. This assumption allows us to consider a vertex $w' \in N_G(w_0) \setminus \{w_1\}$. Then, $S(G)$ contains the path $v_{e'}$, $w_0$, $v_{e_1}$, $w_1$, \dots, $v_{e_\ell}$, $w_\ell$ of length $2\ell + 1$, where $e' = \{w', w_0\} \in E(G)$. Therefore, $\ell' \ge 2\ell + 1$ and so $\mathcal{LM}(G, x)$ has at least $\lceil \ell'/2 \rceil \ge \ell + 1$ distinct positive roots. This completes the proof. $\square$
**Remark 2.6.** The second statement in Theorem 2.5 implies that, if $G$ is a graph with a Hamilton cycle, then the zeros of $\mathcal{LM}(G, x)$ are all distinct.

Given a graph $G$, it is known that $\mathcal{M}(G, x) = \varphi(A(G), x)$ if and only if $G$ is a forest [14]. Also, as we mentioned before, it is established that $\mathcal{LM}(G, x) = \varphi(L(G), x)$ if and only if $G$ is a forest [11]. Below, we present a general result which shows that the multivariate matching polynomial of a forest has a determinantal representation in terms of its adjacency matrix, which will be used in the next section.
**Theorem 2.7.** Let $F$ be a forest. Then $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$, where $\mathbf{X}_F$ is a diagonal matrix whose rows and columns are indexed by $V(F)$ and the $(v,v)$-entry is $x_v$ for any vertex $v \in V(F)$. In particular, $\mathcal{M}(F, x) = \varphi(A(F), x)$ and $\mathcal{LM}(F, x) = \varphi(L(F), x)$.

*Proof.* We prove that $\mathfrak{M}(F, \mathbf{x}_F) = \det(\mathbf{X}_F - A(F))$ by induction on $|E(F)|$. The equality is trivially valid if $|E(F)| = 0$. So, assume that $|E(F)| \ge 1$. As $F$ is a forest, we may consider two vertices $u, v \in V(F)$ with $N_F(u) = \{v\}$. Without loss of generality, we may assume that the first row and column of $A(F)$ correspond to $u$ and the second row and column of $A(F)$ correspond to $v$. Expanding the determinant of $\mathbf{X}_F - A(F)$ along its first row, we obtain by the induction hypothesis and Lemma 2.1 that
$$ \begin{align*}
\det (\mathbf{X}_F - A(F)) &= x_u \det (\mathbf{X}_{F-u} - A(F-u)) - \det (\mathbf{X}_{F-u-v} - A(F-u-v)) \\
&= x_u \mathfrak{M}(F-u, \mathbf{x}_{F-u}) - \mathfrak{M}(F-u-v, \mathbf{x}_{F-u-v}) \\
&= \mathfrak{M}(F, \mathbf{x}_F),
\end{align*} $$
as desired. The 'in particular' statement immediately follows from (2.2) and (2.3). $\square$
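Theorem 2.7 can be checked directly on small examples by brute-force enumeration of matchings. The sketch below is an illustration only (the function names are ours); it assumes the convention $\mathfrak{M}(G, \mathbf{x}) = \sum_M (-1)^{|M|} \prod_{v \text{ uncovered}} x_v$, which is consistent with the recursion used in the proof, and compares $\det(\mathbf{X}_F - A(F))$ with the multivariate matching polynomial for the path on four vertices, using `sympy` for the symbolic algebra.

```python
from itertools import combinations
import sympy as sp

def multivariate_matching(n, edges, xs):
    # M(F, x_F) = sum over matchings M of (-1)^|M| * prod of x_v over uncovered v
    total = sp.Integer(0)
    for k in range(n // 2 + 1):
        for sub in combinations(edges, k):
            covered = {v for e in sub for v in e}
            if len(covered) == 2 * k:  # edges in sub are pairwise disjoint
                term = sp.Integer(-1) ** k
                for v in range(n):
                    if v not in covered:
                        term *= xs[v]
                total += term
    return sp.expand(total)

# The path P_4 (a tree): 0 - 1 - 2 - 3
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
xs = sp.symbols(f'x0:{n}')
A = sp.zeros(n, n)
for u, v in edges:
    A[u, v] = A[v, u] = 1
X = sp.diag(*xs)

# Theorem 2.7: for a forest, the multivariate matching polynomial
# equals det(X_F - A(F)).
assert sp.expand((X - A).det()) == multivariate_matching(n, edges, xs)

# 'In particular': specializing every x_v to x gives M(F, x) = phi(A(F), x).
x = sp.Symbol('x')
assert sp.expand((X - A).det().subs({xv: x for xv in xs})) \
    == sp.expand(A.charpoly(x).as_expr())
```

For $P_4$ both sides come out to $x_0 x_1 x_2 x_3 - x_2 x_3 - x_0 x_3 - x_0 x_1 + 1$, which specializes to $x^4 - 3x^2 + 1$.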
---PAGE_BREAK---
**Corollary 2.8.** For a tree $T$, the multiplicity of $0$ as a root of $\mathcal{LM}(T, x)$ is $1$.
*Proof.* It is well known that the number of connected components of a graph $\Gamma$ is equal to the multiplicity of $0$ as a root of $\varphi(L(\Gamma), x)$ [3, Proposition 1.3.7]. So, the result follows from $\mathcal{LM}(T, x) = \varphi(L(T), x)$ which is given in Theorem 2.7. $\square$
### 3. THE LARGEST ZERO OF THE LAPLACIAN MATCHING POLYNOMIAL
The purpose of this section is to investigate the location of the largest zero of the Laplacian matching polynomial. We give a linear algebraic approach to study the largest zero of the Laplacian matching polynomial and present sharp upper and lower bounds on it. The assertions (1.5) and (1.7) are also proved in this section based on the linear algebraic approach.
Let $G$ be a connected graph and $u \in V(G)$. Let $T(G, u)$ be the path-tree of $G$ with respect to the vertex $u$, as introduced in Section 1. Consider two vectors $x_G = (x_v)_{v \in V(G)}$ and $x_{T(G,u)} = (x_P)_{P \in V(T(G,u))}$ of indeterminates associated with $G$ and $T(G, u)$, respectively. For every vertex $P \in V(T(G, u))$, we may identify $x_P$ with $x_{v(P)}$, in which $v(P)$ is the terminal vertex of the path $P$ in $G$. In this way, $G$ and $T(G, u)$ will be equipped with two vectors consisting of the same indeterminates, which are simply denoted by **x** when there is no ambiguity. In what follows, for every subgraph $H$ of $G$ and vertex $u \in V(H)$, we denote by $D_G(T(H, u))$ the diagonal matrix whose rows and columns are indexed by $V(T(H, u))$ and whose $(P, P)$-entry is $d_G(v(P))$.
The univariate version of the following theorem, which was proved by Godsil [5], plays a key role in the theory of the matching polynomial. Notice that, for a graph $G$ and a vertex $u \in V(G)$, $u$ is a path in $G$, and the corresponding vertex in $T(G, u)$ will also be referred to as $u$.
**Theorem 3.1 (Amini [1]).** Let $G$ be a connected graph and let $u \in V(G)$. Then
$$ \frac{\mathfrak{M}(G - u, \boldsymbol{x})}{\mathfrak{M}(G, \boldsymbol{x})} = \frac{\mathfrak{M}(T(G, u) - u, \boldsymbol{x})}{\mathfrak{M}(T(G, u), \boldsymbol{x})}, $$
and moreover, $\mathfrak{M}(G, \boldsymbol{x})$ divides $\mathfrak{M}(T(G, u), \boldsymbol{x})$.
For a connected graph $G$ and a vertex $u \in V(G)$, Theorem 3.1 and Theorem 2.7 yield that $\mathcal{M}(G, x)$ divides $\varphi(A(T(G, u)), x)$. Since all roots of the characteristic polynomial of a symmetric matrix are real, the first statement in (1.1) is obtained as an application of Theorem 3.1. For the Laplacian matching polynomial, we get the following result.
**Corollary 3.2.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then $\mathfrak{M}(H, x\mathbf{1}_H - d_{G,H})$ divides $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. In particular, $\varphi(D_G(T(G, u)) + A(T(G, u)), x)$ is divisible by $\mathcal{LM}(G, x)$ for every vertex $u \in V(G)$.
*Proof.* By Theorem 3.1, we find that $\mathfrak{M}(H, x\mathbf{1}_H - d_{G,H})$ divides $\mathfrak{M}(T(H, u), x\mathbf{1}_H - d_{G,H})$. It follows from Theorem 2.7 that
$$ \begin{aligned} \mathfrak{M}(T(H, u), x\mathbf{1}_H - d_{G,H}) &= \det (xI - D_G(T(H, u)) - A(T(H, u))) \\ &= \varphi(D_G(T(H, u)) + A(T(H, u)), x), \end{aligned} $$
which establishes what we require. Since $\mathfrak{M}(G, x\mathbf{1}_G - d_G) = \mathcal{LM}(G, x)$ using (2.3), the ‘in particular’ statement immediately follows. $\square$
**Remark 3.3.** The matrix $D_G(T(G, u)) + A(T(G, u))$, which appeared in Corollary 3.2, is a symmetric diagonally dominant matrix with nonnegative diagonal entries, so all of its eigenvalues are nonnegative real numbers. Hence, Corollary 3.2 gives us another proof for the fact that all roots of the Laplacian matching polynomial are real and nonnegative which was also proved in Corollary 2.3.
---PAGE_BREAK---
It is well known that the largest zero of the matching polynomial of a graph is equal to the largest eigenvalue of the adjacency matrix of a path-tree of that graph. This fact is obtained by combining the Perron-Frobenius theorem [3, Theorem 2.2.1] and Theorems 2.7 and 3.1. The following theorem can be considered as an analogue of the fact. Indeed, the following theorem presents a linear algebra technique to treat with the largest zero of the Laplacian matching polynomial.
**Theorem 3.4.** Let $G$ be a connected graph, $H$ be a subgraph of $G$, and $u \in V(H)$. If $H$ is connected, then
$$ (3.1) \qquad \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})) = \lambda(D_G(T(H,u)) + A(T(H,u))). $$
In particular, $\lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u)))$. Also, the largest root of $\mathcal{LM}(G,x)$ has multiplicity $1$.
*Proof.* We prove (3.1) by induction on $|V(H)|$. Clearly, (3.1) is valid for $|V(H)| = 1$. Assume that $|V(H)| \ge 2$. We first show that
$$ (3.2) \qquad \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) < \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). $$
To see (3.2), we apply Theorem 2.2 and (1.3) to get that
$$ \begin{align*} \lambda(\mathfrak{M}(H-u, x^2\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) &= \lambda(\mathcal{M}(S(G)-W-u, x)) \\ &< \lambda(\mathcal{M}(S(G)-W, x)) \\ &= \lambda(\mathfrak{M}(H, x^2\mathbf{1}_H - \mathbf{d}_{G,H})), \end{align*} $$
where $W = V(G) \setminus V(H)$. This clearly proves (3.2). Now, let $N_H(u) = \{u_1, \dots, u_k\}$ and let $H_i$ be the connected component of $H-u$ containing $u_i$ for $i=1, \dots, k$. By the induction hypothesis,
$$ (3.3) \qquad \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) = \lambda(D_G(T(H_i, u_i)) + A(T(H_i, u_i))) $$
for $i=1, \dots, k$. It is not hard to see that the $k \times k$ block diagonal matrix whose $i$th diagonal block is $D_G(T(H_i, u_i)) + A(T(H_i, u_i))$, say $R$, is a principal submatrix of $D_G(T(H, u)) + A(T(H, u))$ of size $|T(H, u)|-1$. Hence, by the interlacing theorem [3, Corollary 2.5.2], it follows that $\lambda(R)$ is greater than or equal to the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. Further, it follows from (3.3) and (3.2) that
$$ \begin{align*} \lambda(R) &= \max \left\{ \lambda(\mathfrak{M}(H_i, x\mathbf{1}_{H_i} - \mathbf{d}_{G,H_i})) \middle| 1 \le i \le k \right\} \\ &= \lambda(\mathfrak{M}(H-u, x\mathbf{1}_{H-u} - \mathbf{d}_{G,H-u})) \\ &< \lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H})). \end{align*} $$
Thus, $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is strictly greater than the second largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. On the other hand, Corollary 3.2 implies that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is a zero of $\varphi(D_G(T(H, u)) + A(T(H, u)), x)$. So, we conclude that $\lambda(\mathfrak{M}(H, x\mathbf{1}_H - \mathbf{d}_{G,H}))$ is the largest eigenvalue of $D_G(T(H, u)) + A(T(H, u))$. This completes the induction step and demonstrates that (3.1) holds.
For the 'in particular' statement, note that (3.1) and (2.3) yield that
$$ \lambda(D_G(T(G,u)) + A(T(G,u))) = \lambda(\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G)) = \lambda(\mathcal{LM}(G,x)), $$
and further, the connectedness of $G$ implies that $D_G(T(G,u)) + A(T(G,u))$ is an irreducible matrix with nonnegative entries; consequently, its largest eigenvalue has multiplicity $1$ by the Perron-Frobenius theorem [3, Theorem 2.2.1]. $\square$
---PAGE_BREAK---
**Corollary 3.5.** Let $G$ be a connected graph and $u \in V(G)$. Then
$$ (3.4) \qquad \lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) $$
with equality if and only if $G$ is a tree.
*Proof.* We first recall the fact that a graph $\Gamma$ is bipartite if and only if $\varphi(L(\Gamma), x) = \varphi(Q(\Gamma), x)$ [3, Proposition 1.3.10]. For each $P \in V(T(G, u))$, we have $d_{T(G,u)}(P) \le d_G(v(P))$, where $v(P)$ is the terminal vertex of the path $P$ in $G$. Therefore, $R = D_G(T(G, u)) + A(T(G, u)) - Q(T(G, u))$ has nonnegative entries, and thus, Theorem 3.4, the Perron-Frobenius theorem [3, Theorem 2.2.1], and the above mentioned fact yield that
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &= \lambda(R + Q(T(G,u))) \\ &\ge \lambda(Q(T(G,u))) \\ &= \lambda(L(T(G,u))), \end{align} \tag{3.5} $$
proving (3.4). If $G$ is a tree, then $G$ is isomorphic to $T(G, u)$ and, since $\mathcal{LM}(G,x) = \varphi(L(G), x)$ by Theorem 2.7, the equality in (3.4) is attained. Conversely, assume that the equality in (3.4) holds. Then the equality in (3.5) occurs, and hence the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $R=0$. This means that $d_{T(G,u)}(P) = d_G(v(P))$ for each $P \in V(T(G, u))$. We assert that $G$ is a tree. Towards a contradiction, suppose that there is a cycle $C$ in $G$. As $G$ is connected, there is a path $P_1$ in $G$ which starts at $u$, none of whose internal vertices is on $C$, and with $v(P_1) \in V(C)$. Fix $w \in N_G(v(P_1)) \cap V(C)$ and let $P_2$ be the path on $C$ between $v(P_1)$ and $w$ whose length is more than $1$. If $P$ is the path between $u$ and $w$ formed by $P_1$ and $P_2$, then it is clear that $d_{T(G,u)}(P) < d_G(v(P))$. This contradiction completes the proof. $\square$
In the following consequence, we give some lower bounds on the largest zero of the Laplacian matching polynomial.
**Corollary 3.6.** Let $G$ be a connected graph. Then
$$ \lambda(\mathcal{LM}(G,x)) \ge \max \left\{ \Delta(G) + 1, \delta(G) + \sqrt{\Delta(G)} \right\} $$
with equality if and only if $G$ is a star.
*Proof.* Let $u \in V(G)$ be of degree $\Delta(G)$. Then $d_{T(G,u)}(u) = d_G(u)$, and therefore $\Delta(T(G,u)) = \Delta(G)$. For each connected graph $\Gamma$, Proposition 3.9.3 of [3] states that $\lambda(L(\Gamma)) \ge \Delta(\Gamma) + 1$, with equality if and only if $\Delta(\Gamma) = |V(\Gamma)| - 1$. By this fact and Corollary 3.5, we obtain $\lambda(\mathcal{LM}(G,x)) \ge \lambda(L(T(G,u))) \ge \Delta(T(G,u)) + 1 = \Delta(G) + 1$; moreover, the equality $\lambda(\mathcal{LM}(G,x)) = \Delta(G) + 1$ holds if and only if $G$ is a star.
For each connected graph $\Gamma$, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $\lambda(A(\Gamma)) \ge \sqrt{\Delta(\Gamma)}$, with equality if and only if $\Gamma$ is a star. Using this fact, Theorem 3.4, and the Weyl inequality [3, Theorem 2.8.1], we derive
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\ge \delta(G) + \lambda(A(T(G,u))) \\ &\ge \delta(G) + \sqrt{\Delta(T(G,u))} \\ &= \delta(G) + \sqrt{\Delta(G)}. \end{align} \tag{3.6} $$
---PAGE_BREAK---
Suppose that the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ holds. Then the equality in (3.6) is attained, and thus $T(G, u)$ is a star. This implies that $G$ is a star, and then the equality $\lambda(\mathcal{LM}(G, x)) = \delta(G) + \sqrt{\Delta(G)}$ forces $|V(G)| \le 2$. Since this equality is valid for the stars $G$ on at most $2$ vertices, the proof is complete. $\square$
In the following theorem, we establish (1.5) which slightly improves the second statement of Theorem 2.6 of [11].
**Theorem 3.7.** Let $G$ be a connected graph with $\Delta(G) \ge 2$ and let $\ell(G)$ be the length of the longest path in $G$. Then,
$$ (3.7) \qquad \lambda(\mathcal{LM}(G, x)) \le \Delta(G) + 2\sqrt{\Delta(G)-1} \cos \frac{\pi}{2\ell(G)+2} $$
with equality if and only if $G$ is a cycle.
*Proof.* For simplicity, let $\Delta = \Delta(G)$ and $\ell = \ell(G)$. For all integers $d \ge 1$ and $k \ge 2$, the Bethe tree $B_{d,k}$ is a rooted tree with $k$ levels in which the root vertex is of degree $d$, the vertices on levels $2, \dots, k-1$ are of degree $d+1$, and the vertices on level $k$ are of degree $1$. By Theorem 7 of [13],
$$ (3.8) \qquad \lambda(A(B_{d,k})) = 2\sqrt{d} \cos \frac{\pi}{k+1}. $$
Let $u \in V(G)$. It is not hard to check that $T(G, u)$ is isomorphic to a subgraph of $B_{\Delta-1,2\ell+1}$. For this, it is enough to correspond $u \in V(T(G, u))$ to an arbitrary vertex on level $\ell+1$ in $B_{\Delta-1,2\ell+1}$. By applying Theorem 3.4, the Weyl inequality [3, Theorem 2.8.1], the interlacing theorem [3, Corollary 2.5.2], and (3.8), we derive
$$ \begin{align} \lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\ &\le \lambda(D_G(T(G,u))) + \lambda(A(T(G,u))) \tag{3.9} \\ &\le \Delta + \lambda(A(B_{\Delta-1,2\ell+1})) \\ &= \Delta + 2\sqrt{\Delta-1} \cos \frac{\pi}{2\ell+2}, \end{align} $$
proving (3.7). Now, assume that the equality in (3.7) is achieved. Therefore, the equality in (3.9) occurs, and thus, the Perron-Frobenius theorem [3, Theorem 2.2.1] implies that $T(G, u)$ is isomorphic to $B_{\Delta-1,2\ell+1}$. Since $\Delta \ge 2$, one can easily obtain that $G$ is a cycle. Conversely, if $G$ is a cycle, then $T(G, u)$ is a path on $2\ell+1$ vertices. By Theorem 3.4 and (3.8), we get
$$ \lambda(\mathcal{LM}(G,x)) = \lambda(D_G(T(G,u)) + A(T(G,u))) = 2 + \lambda(A(B_{1,2\ell+1})) = 2 + 2\cos \frac{\pi}{2\ell+2}. $$
This completes the proof. $\square$
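The equality case of Theorem 3.7 for cycles can be verified numerically. The sketch below (an illustration with names of our own choosing, not part of the paper) builds $\mathcal{LM}(C_n, x)$ from the identity $\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G) = \mathcal{LM}(G, x)$: since every vertex of $C_n$ has degree $2$, $\mathcal{LM}(C_n, x) = \sum_k (-1)^k m_k (x-2)^{n-2k}$, and its largest root should equal $2 + 2\cos\frac{\pi}{2n}$, as $\ell(C_n) = n-1$.

```python
from itertools import combinations
from math import cos, pi
import numpy as np

def lm_cycle_max_root(n):
    """Largest zero of LM(C_n, x) = sum_k (-1)^k m_k (x - 2)^(n - 2k)."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    coeffs = np.zeros(n + 1)             # coefficients in y = x - 2, leading first
    for k in range(n // 2 + 1):
        m_k = sum(1 for sub in combinations(edges, k)
                  if len({v for e in sub for v in e}) == 2 * k)
        coeffs[2 * k] = (-1) ** k * m_k  # coefficient of y^(n - 2k)
    return 2 + max(r.real for r in np.roots(coeffs))

# Theorem 3.7 with Delta = 2 and l = n - 1 predicts 2 + 2*cos(pi / (2n)):
for n in (4, 5, 6):
    assert abs(lm_cycle_max_root(n) - (2 + 2 * cos(pi / (2 * n)))) < 1e-8
```

For $n = 4$, for instance, the polynomial in $y = x - 2$ is $y^4 - 4y^2 + 2$, whose largest root is $\sqrt{2+\sqrt{2}} = 2\cos\frac{\pi}{8}$.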
Stevanović [15] proved that the eigenvalues of the adjacency matrix of a tree $T$ are less than $2\sqrt{\Delta(T)-1}$. The corollary below gives an improvement of this upper bound for the subdivision of trees.
**Corollary 3.8.** Let $G$ be a graph with $\Delta(G) \ge 2$. Then
$$ (3.10) \qquad \lambda(\mathcal{M}(S(G), x)) < 1 + \sqrt{\Delta(G)-1}. $$
In particular, if $F$ is a forest with $\Delta(F) \ge 2$, then $\lambda(A(S(F))) < 1 + \sqrt{\Delta(F)-1}$.
---PAGE_BREAK---
*Proof.* It follows from Theorem 3.7 that $\lambda(\mathcal{LM}(G, x)) < \Delta(G) + 2\sqrt{\Delta(G)-1}$. Moreover, it follows from Corollary 2.3 that $\lambda(\mathcal{M}(S(G), x)) = \sqrt{\lambda(\mathcal{LM}(G, x))}$. From these, we find that
$$ \lambda(\mathcal{M}(S(G), x)) < \sqrt{\Delta(G) + 2\sqrt{\Delta(G) - 1}} = 1 + \sqrt{\Delta(G) - 1}, $$
proving (3.10). As the subdivision of a forest is a forest, the 'in particular' statement follows from Theorem 2.7 and (3.10). $\square$
**Remark 3.9.** Note that $\Delta(S(G)) = \Delta(G)$ for every graph $G$ with $\Delta(G) \ge 2$. So, for the subdivision of a graph with the maximum degree at least 2, the upper bound which appears in (3.10) is sharper than the upper bound that comes from (1.1).
We demonstrated in Theorem 3.4 that the largest zero of the Laplacian matching polynomial has multiplicity $1$. In the following theorem, we prove the remaining statements of (1.7) as analogues of the results given in (1.3).
**Theorem 3.10.** Let $G$ be a graph and let $n = |V(G)|$. For each edge $e \in E(G)$, the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$ interlace in the sense that, if $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ are respectively the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, then $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$. Also, if $G$ is connected, then $\lambda(\mathcal{LM}(G, x)) > \lambda(\mathcal{LM}(H, x))$ for any proper subgraph $H$ of $G$.
*Proof.* Fix an edge $e \in E(G)$ and denote by $v_e$ the vertex of $S(G)$ corresponding to $e$. Let $\alpha_1 \le \dots \le \alpha_n$ and $\beta_1 \le \dots \le \beta_n$ be the zeros of $\mathcal{LM}(G, x)$ and $\mathcal{LM}(G-e, x)$, respectively. Corollary 2.3 yields that $\sqrt{\alpha_1} \le \dots \le \sqrt{\alpha_n}$ is the end part of the nondecreasing sequence consisting of all the zeros of $\mathcal{M}(S(G), x)$ and $\sqrt{\beta_1} \le \dots \le \sqrt{\beta_n}$ is the end part of the nondecreasing sequence consisting of all the zeros of $\mathcal{M}(S(G-e), x)$. As $S(G-e) = S(G) - v_e$, it follows from (1.3) that the zeros of $\mathcal{M}(S(G), x)$ and $\mathcal{M}(S(G-e), x)$ interlace. So, we find that
$$ \sqrt{\beta_1} \le \sqrt{\alpha_1} \le \sqrt{\beta_2} \le \sqrt{\alpha_2} \le \dots \le \sqrt{\beta_n} \le \sqrt{\alpha_n} $$
which means that $\beta_1 \le \alpha_1 \le \beta_2 \le \alpha_2 \le \dots \le \beta_n \le \alpha_n$, as desired.
Now, assume that $G$ is connected. Let $H$ be a proper subgraph of $G$ and let $u \in V(H)$. As $T(H, u)$ is a proper subgraph of $T(G, u)$, if $R$ denotes the submatrix of $D_G(T(G, u)) + A(T(G, u))$ corresponding to the vertices in $V(T(H, u))$, then $R - (D_H(T(H, u)) + A(T(H, u)))$ is a nonzero matrix with nonnegative entries. So, by applying Theorem 3.4 and the Perron-Frobenius theorem [3, Theorem 2.2.1], we get
$$
\begin{align*}
\lambda(\mathcal{LM}(G,x)) &= \lambda(D_G(T(G,u)) + A(T(G,u))) \\
&> \lambda(R) \\
&> \lambda(D_H(T(H,u)) + A(T(H,u))) \\
&= \lambda(\mathcal{LM}(H,x)). \quad \square
\end{align*}
$$
**Remark 3.11.** For every graph $G$ and real number $\alpha$, let $m_G(\alpha)$ denote the multiplicity of $\alpha$ as a root of $\mathcal{LM}(G,x)$. As a consequence of Theorem 3.10, we have $|m_G(\alpha) - m_{G-e}(\alpha)| \le 1$ for each edge $e \in E(G)$.
It is known that among all trees with a fixed number of vertices the path has the smallest value of the largest Laplacian eigenvalue [12]. The following result can be considered as an analogue of this fact and is obtained from Theorems 2.7 and 3.10.
**Corollary 3.12.** Let $P_n$ and $K_n$ be the path and the complete graph on $n$ vertices, respectively. For any connected graph $G$ on $n$ vertices which is neither $P_n$ nor $K_n$,
$$ \lambda(\mathcal{LM}(P_n, x)) < \lambda(\mathcal{LM}(G, x)) < \lambda(\mathcal{LM}(K_n, x)). $$
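Corollary 3.12 can be illustrated numerically for small $n$. This brute-force sketch (an illustration; the helper name is ours) computes $\mathcal{LM}(G, x)$ directly from $\mathfrak{M}(G, x\mathbf{1}_G - \mathbf{d}_G)$, i.e. as $\sum_M (-1)^{|M|} \prod_{v \text{ uncovered}} (x - d_G(v))$, and compares the largest roots for $P_4$, $C_4$, and $K_4$.

```python
from itertools import combinations
import numpy as np

def lm_max_root(n, edges):
    """Largest zero of LM(G,x) = sum_M (-1)^|M| prod_{v uncovered} (x - d(v))."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    poly = np.zeros(n + 1)                 # coefficients, leading first
    for k in range(n // 2 + 1):
        for sub in combinations(edges, k):
            covered = {v for e in sub for v in e}
            if len(covered) != 2 * k:
                continue                   # edges must be pairwise disjoint
            p = np.poly([deg[v] for v in range(n) if v not in covered])
            poly[2 * k:] += (-1) ** k * p  # monic factor of degree n - 2k
    return max(r.real for r in np.roots(poly) if abs(r.imag) < 1e-9)

n = 4
path     = [(0, 1), (1, 2), (2, 3)]
cycle    = [(0, 1), (1, 2), (2, 3), (3, 0)]
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
# P_4 and K_4 are the extremal connected graphs on 4 vertices:
assert lm_max_root(n, path) < lm_max_root(n, cycle) < lm_max_root(n, complete)
```

The strict inequalities here are exactly what Theorem 3.10 guarantees for proper subgraphs of a connected graph.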
---PAGE_BREAK---
### 4. CONCLUDING REMARKS
In this paper, we have established some properties of the location of the zeros of the Laplacian matching polynomial. Most of our results can be considered as analogues of known results on the matching polynomial. Compared to the matching polynomial, the Laplacian matching polynomial encodes not only the sizes of matchings in the graph but also the vertex degrees. Hence, it seems that more structural properties of graphs are reflected by the Laplacian matching polynomial than by the matching polynomial. For instance, $0$ is a root of $\mathcal{LM}(G,x)$ if and only if $G$ is a forest, while $0$ is a root of $\mathcal{M}(G,x)$ if and only if $G$ has no perfect matching.
More interesting facts about the Laplacian matching polynomial deserve further investigation. For example, one may focus on the multiplicities of the zeros of the Laplacian matching polynomial, as there are many results on the multiplicities of the zeros of the matching polynomial. In view of Remark 3.11, for every graph $G$ and real number $\alpha$, one may partition $E(G)$ into three subsets based on how the multiplicity of $\alpha$ changes when an edge of $G$ is removed. The corresponding problem for the matching polynomial was investigated by Ku and Chen [8]. Also, it is known that the multiplicity of a zero of the matching polynomial is at most the path partition number of the graph, that is, the minimum number of vertex-disjoint paths required to cover all the vertices of the graph [4, Theorem 6.4.5]. It seems an interesting problem to find a sharp upper bound on the multiplicity of a zero of the Laplacian matching polynomial.
### REFERENCES
[1] N. Amini, Spectrahedrality of hyperbolicity cones of multivariate matching polynomials, Journal of Algebraic Combinatorics 50 (2019) 165–190.
[2] J.A. Bondy, U.S.R. Murty, Graph Theory, Graduate Texts in Mathematics, Volume 244, Springer, New York, 2008.
[3] A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Springer, New York, 2012.
[4] C.D. Godsil, Algebraic Combinatorics, Chapman and Hall Mathematics Series, Chapman & Hall, New York, 1993.
[5] C.D. Godsil, Matchings and walks in graphs, Journal of Graph Theory 5 (1981) 285–297.
[6] C.D. Godsil, I. Gutman, On the theory of the matching polynomial, Journal of Graph Theory 5 (1981) 137–144.
[7] O.J. Heilmann, E.H. Lieb, Theory of monomer-dimer systems, Communications in Mathematical Physics 25 (1972) 190–232.
[8] C.Y. Ku, W. Chen, An analogue of the Gallai–Edmonds structure theorem for non-zero roots of the matching polynomial, Journal of Combinatorial Theory—Series B 100 (2010) 119–127.
[9] J.A. Makowsky, E.V. Ravve, N.K. Blanchard, On the location of roots of graph polynomials, European Journal of Combinatorics 41 (2014) 1–19.
[10] A.W. Marcus, D.A. Spielman, N. Srivastava, Interlacing families I: Bipartite Ramanujan graphs of all degrees, Annals of Mathematics—Second Series 182 (2015) 307–325.
[11] A. Mohammadian, Laplacian matching polynomial of graphs, Journal of Algebraic Combinatorics 52 (2020) 33–39.
[12] M. Petrović, I. Gutman, The path is the tree with smallest greatest Laplacian eigenvalue, Kragujevac Journal of Mathematics 24 (2002) 67–70.
[13] O. Rojo, M. Robbiano, An explicit formula for eigenvalues of Bethe trees and upper bounds on the largest eigenvalue of any tree, Linear Algebra and its Applications 427 (2007) 138–150.
[14] H. Sachs, Beziehungen zwischen den in einem Graphen enthaltenen Kreisen und seinem charakteristischen Polynom, Publicationes Mathematicae Debrecen 11 (1964) 119–134.
[15] D. Stevanović, Bounding the largest eigenvalue of trees in terms of the largest vertex degree, Linear Algebra and its Applications 360 (2003) 35–42.
[16] W. Yan, Y.-N. Yeh, On the matching polynomial of subdivision graphs, Discrete Applied Mathematics 157 (2009) 195–200.
[17] Y. Zhang, H. Chen, The average Laplacian polynomial of a graph, Discrete Applied Mathematics 283 (2020) 737–743.
samples_new/texts_merged/250922.md
samples_new/texts_merged/2515306.md
---PAGE_BREAK---
New Encoding for Translating Pseudo-Boolean Constraints into SAT
Amir Aavani and David Mitchell and Eugenia Ternovska
Simon Fraser University, Computing Science Department
{aaa78,mitchell,ter}@sfu.ca
Abstract
A Pseudo-Boolean (PB) constraint is a linear arithmetic constraint over Boolean variables. PB constraints are widely used in declarative languages for expressing NP-hard search problems. While there are solvers for sets of PB constraints, there are also reasons to be interested in transforming these to propositional CNF formulas, and a number of methods for doing this have been reported. We introduce a new, two-step method for transforming PB constraints to propositional CNF formulas. The first step re-writes each PB constraint as a conjunction of PB-Mod constraints, and the second transforms each PB-Mod constraint to CNF. The resulting CNF formulas are compact and make effective use of unit propagation, in that unit propagation can derive facts from these CNF formulas which it cannot derive from the CNF formulas produced by other commonly used transformations. We present a preliminary experimental evaluation of the method, using instances of the number partitioning problem as a benchmark set, which indicates that our method out-performs other transformations to CNF when the coefficients of the PB constraints are not small.
Introduction
A Pseudo-Boolean constraint (PB-constraint) is an equality or inequality on a linear combination of Boolean literals, of the form
$$ \sum_{i=1}^{n} a_i l_i \text{ op } b $$
where op is one of {<, ≤, =, ≥, >}, $a_1, \dots, a_n$ and b are integers, and $l_1, \dots, l_n$ are Boolean literals. Under a truth assignment $\mathcal{A}$ for the literals, the left-hand side evaluates to the sum of the coefficients whose corresponding literals are mapped to true by $\mathcal{A}$. PB-constraints are also known as 0-1 integer linear constraints. By taking the variables to be propositional literals, rather than 0-1 valued arithmetic variables, we can consider the combination of PB-constraints with other logical expressions. Moreover, a propositional clause ($l_1 \lor \dots \lor l_k$) is equivalent to the PB-constraint $\sum_{i=1}^k l_i \ge 1$. Thus, PB-constraints are a natural generalization of propositional clauses with which it is easier to describe arithmetic properties of a problem. For example, the Knapsack problem has a trivial representation as a conjunction of two PB-constraints:
$$ \sum_{i=1}^{n} w_i l_i < C \quad \land \quad \sum_{i=1}^{n} v_i l_i > V, $$
but directly representing it with a propositional CNF formula is non-trivial.
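To make the semantics concrete, here is a small sketch (the function and variable names are ours, not from the paper) that evaluates a PB-constraint under a truth assignment; a clause $l_1 \lor \dots \lor l_k$ is exactly the PB-constraint $\sum_{i=1}^k l_i \ge 1$.

```python
import operator

OPS = {'<': operator.lt, '<=': operator.le, '=': operator.eq,
       '>=': operator.ge, '>': operator.gt}

def pb_holds(terms, op, b, assignment):
    """terms is a list of (a_i, var, positive); a literal contributes a_i
    exactly when its polarity agrees with the assignment to var."""
    lhs = sum(a for a, var, positive in terms
              if assignment[var] == positive)
    return OPS[op](lhs, b)

# The clause (x1 or x2 or x3) as the PB-constraint x1 + x2 + x3 >= 1:
clause = [(1, 'x1', True), (1, 'x2', True), (1, 'x3', True)]
assert pb_holds(clause, '>=', 1, {'x1': False, 'x2': True, 'x3': False})
assert not pb_holds(clause, '>=', 1, {'x1': False, 'x2': False, 'x3': False})
```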
Software which finds solutions to sets of PB-constraints (PB solvers) exists, for example PBS (Aloul et al. 2002) and PUEBLO (Sheini and Sakallah 2006), but there is not a sustained effort to produce continually updated high-performance solvers. Integer linear programming (ILP) systems can be used to find solutions to sets of PB-constraints, but they are generally optimized for performance on certain types of optimization problems, and do not perform well on some important families of search problems. Moreover, the standard ILP input is a set of linear inequalities, and many problems are not effectively modelled this way, such as problems involving disjunctions of constraints like $(p \land q) \lor (r \land s)$. There are standard techniques for transforming these, involving additional variables, but extensive use of these techniques causes performance problems. (Transforming problems to propositional CNF also requires adding new variables, but there seems to be little performance penalty in this case.)
Another approach to solving problems modelled with PB-constraints is to transform them to a logically equivalent set of propositional clauses and then apply a SAT solver. There are at least two clear benefits of this approach. One is that high-performance SAT solvers are being improved constantly, and since they take a standard input format, there is always a selection of good, and frequently updated, solvers to make use of. A second is that solving problems involving Boolean combinations of constraints is straightforward. This approach is particularly attractive for problems which are naturally represented by a relatively small number of PB constraints together with a large number of purely Boolean constraints.
The question of how best to transform a set of PB-constraints to a set of clauses is complex. Several methods have been reported, but there is still much to be learned. Here, we describe a new method of transformation and present some preliminary evidence of its utility.
Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
We define a PBMod-constraint to be of the form:

$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}
$$

where $a_1, \cdots, a_n$ and $b$ are non-negative integers less than $M$, and $l_1, \cdots, l_n$ are literals.
Our method of transforming a PB-constraint to CNF involves first transforming it to a set of PBMod-constraints, and then transforming these to CNF. Thus, we replace the question of how best to transform an arbitrary PB-constraint to CNF with two questions: how to choose a set of PBMod-constraints, and how to transform each of these to CNF. This has benefits, due to properties of the PBMod-constraints. For example, we show that there are many PB-constraints whose unsatisfiability can be proven by showing the unsatisfiability of a PBMod-constraint, which is much simpler.

We present two methods for translating PBMod-constraints to CNF. Both encodings allow unit propagation to infer inconsistency if the current assignment cannot be extended to a satisfying assignment for that PBMod-constraint, and hence unit propagation can infer inconsistency for the original PB-constraint. We also show that the number of PB-constraints for which unit propagation can infer inconsistency, given the output of the proposed translation, is much larger than for the other existing encodings. We also point out that it is impossible to translate all PB-constraints of the form $\sum a_i l_i = b$ into polynomial-size arc-consistent CNF unless P=NP.

We also present the results of an experimental study, using instances of the number partitioning problem as a benchmark, which indicates that our new method outperforms others in the literature.

For the sake of space, proofs are omitted from this paper. All proofs can be found in (Aavani 2011).

**Notation and Terminology**
Let $X$ be a set of Boolean variables. An assignment $\mathcal{A}$ to $X$ is a possibly partial function from $X$ to $\{\text{true, false}\}$. Assignment $\mathcal{A}$ to $X$ is a total assignment if it is defined at every variable in $X$. For any $S \subseteq X$, we write $\mathcal{A}[S]$ for the assignment obtained by restricting the domain of $\mathcal{A}$ to the variables in $S$. We say assignment $\mathcal{B}$ extends assignment $\mathcal{A}$ if $\mathcal{B}$ is defined on every variable that $\mathcal{A}$ is, and for every variable $x$ where $\mathcal{A}$ is defined, $\mathcal{A}(x) = \mathcal{B}(x)$.
A literal, $l$, is either a Boolean variable or the negation of a Boolean variable, and we denote by $\text{var}(l)$ the variable underlying literal $l$. Assignment $\mathcal{A}$ satisfies literal $l$, written $\mathcal{A} \models l$, if $l$ is an atom $x$ and $\mathcal{A}(x) = \text{true}$, or $l$ is a negated atom $\neg x$ and $\mathcal{A}(x) = \text{false}$.

A clause $C = \{l_1, \dots, l_m\}$ over $X$ is a set of literals such that $\text{var}(l_i) \in X$. Assignment $\mathcal{A}$ satisfies clause $C = \{l_1, \dots, l_m\}$ if there exists at least one literal $l_i$ such that $\mathcal{A} \models l_i$. A total assignment falsifies clause $C$ if it does not satisfy any of its literals. An assignment satisfies a set of clauses if it satisfies all the clauses in that set.
A PB-constraint $Q$ on $X$ is an expression of the form:

$$
a_1 l_1 + \cdots + a_n l_n \quad \mathbf{op} \quad b \qquad (1)
$$

where $\mathbf{op}$ is one of $\{<, \le, =, \ge, >\}$, for each $i$, $a_i$ is an integer and $l_i$ a literal over $X$, and $b$ is an integer. We call $a_i$ the coefficient of $l_i$, and $b$ the bound.

Total assignment $\mathcal{A}$ to $X$ satisfies PB-constraint $Q$ on $X$, written $\mathcal{A} \models Q$, if $\sum_{i:\mathcal{A} \models l_i} a_i \;\mathbf{op}\; b$, that is, the sum of the coefficients of literals mapped to true (the left-hand side) satisfies the given relation to the bound (the right-hand side).
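
To make the satisfaction condition concrete, here is a small Python sketch; the representation of literals and assignments is our own illustrative choice, not the paper's:

```python
import operator

# Comparison operators allowed in a PB-constraint.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}

def satisfies_pb(terms, op, bound, assignment):
    """terms: list of (coefficient, (variable, polarity)) pairs, where
    polarity True means the literal x and False means the literal ¬x.
    Sum the coefficients of literals mapped to true, then compare."""
    lhs = sum(a for a, (var, pos) in terms if assignment[var] == pos)
    return OPS[op](lhs, bound)

# Example 1's constraint 2*x1 + 4*(¬x2) = 3 is unsatisfiable:
# the left-hand side is always even.
terms = [(2, ("x1", True)), (4, ("x2", False))]
print(satisfies_pb(terms, "=", 3, {"x1": True, "x2": False}))  # False
```
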
**Canonical Form**
In this paper, we focus on translating PB equality constraints with positive coefficients:

$$
a_1 x_1 + \cdots + a_n x_n = b \quad (2)
$$

where the integers $a_1, \cdots, a_n$ and $b$ are all positive.
**Definition 1** Constraints $Q_1$ on $X$ and $Q_2$ on $Y \supseteq X$ are equivalent iff for every total assignment $\mathcal{A}$ for $X$ which satisfies $Q_1$, there exists an extension of $\mathcal{A}$ to $Y$ which satisfies $Q_2$, and every total assignment $\mathcal{B}$ to $Y$ which satisfies $Q_2$ also satisfies $Q_1$.
It is not hard to show that every PB-constraint has an equivalent PB-constraint of the form (2). For the sake of space, we do not include the details but refer interested readers to (Aavani 2011).

**Valid Translation**
**Definition 2** Let $Q$ be a PB-constraint or PBMod-constraint over variables $X = \{x_1, \dots, x_n\}$, $Y$ a set of Boolean variables (called auxiliary variables) disjoint from $X$, $v$ a Boolean variable not occurring in $X \cup Y$, and $C = \{C_1, \dots, C_m\}$ a set of clauses on $X \cup Y \cup \{v\}$. Then we say the pair $\langle v, C \rangle$ is a valid translation of $Q$ if

1. *C* is satisfiable, and
2. if *A* is a total assignment for *X* ∪ *Y* ∪ {*v*} that satisfies *C*, then

$$
\mathcal{A} \models Q \iff \mathcal{A} \models v.
$$

Intuitively, $C$ ensures that $v$ always takes the same truth value as $Q$.

In (Bailleux, Boufkhad, and Roussel 2009), a translation is defined to be a set of clauses $C$ such that $\mathcal{A} \models Q$ iff some extension of $\mathcal{A}$ (to the auxiliary variables of $C$) satisfies $C$. If $\langle v, C \rangle$ is a valid translation by Definition 2, then $\{v\} \cup C$ is a translation in this other sense, and if $C$ is a translation in the other sense, then $\langle v, D \rangle$, where $D$ is equivalent to $v \leftrightarrow C$, is a valid translation. So these two definitions are essentially equivalent, except that our definition makes available a variable which always has the same truth value as $Q$, which can be convenient. For example, it makes it easy to use $Q$ conditionally.

**Example 1** Let $Q$ be the unsatisfiable PB-constraint $2x_1 + 4\neg x_2 = 3$. Then the pair $(v, \{C_1\})$, where $C_1 = \{\neg v\}$, is a valid translation of $Q$.
**Example 2** Let $Q$ be the satisfiable PB-constraint $1x_1 + 2x_2 = 2$. Then $\langle v, C \rangle$, where $C$ is any set of clauses logically equivalent to $(v \leftrightarrow \neg x_1) \land (v \leftrightarrow x_2)$ is a valid translation of $Q$. Here, $X = \{x_1, x_2\}$ and $Y = \emptyset$.
In describing the construction of translations, we will sometimes overload our notation, using a symbol for both a variable and a translation. For example, if $D$ is a valid translation, we may use $D$ as a variable in a clause for constructing another translation. Thus, $D$ is the pair $\langle D, C \rangle$.

**Tseitin Transformation**
The usual method for transforming a propositional formula to CNF is that of Tseitin (Tseitin 1968). To transform formula φ to CNF, a fresh propositional variable is used to represent the truth value of each subformula of φ. For each subformula ψ, denote by ψ' the associated propositional variable. If ψ is a variable, then ψ' is just ψ. The CNF formula is the set of clauses containing the clause {φ'}, and for each subformula ψ of φ:
1. If $\psi = \psi_1 \lor \psi_2$, the clauses $\{\neg\psi', \psi'_1, \psi'_2\}$, $\{\psi', \neg\psi'_1\}$ and $\{\psi', \neg\psi'_2\}$;
2. If $\psi = \psi_1 \land \psi_2$, the clauses $\{\neg\psi', \psi'_1\}$, $\{\neg\psi', \psi'_2\}$, and $\{\psi', \neg\psi'_1, \neg\psi'_2\}$;
3. If $\psi = \neg\psi_1$, the clauses $\{\neg\psi', \neg\psi'_1\}$ and $\{\psi', \psi'_1\}$.
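
The three clause schemas above can be sketched in Python. The nested-tuple formula representation and the DIMACS-style integer numbering of variables are our own illustrative conventions, not the paper's:

```python
import itertools

# Formulas are nested tuples: ("var", name), ("not", f),
# ("and", f, g), ("or", f, g). Clauses are lists of signed integers.
def tseitin(formula):
    clauses, var_of = [], {}
    counter = itertools.count(1)

    def walk(f):
        if f[0] == "var":
            if f[1] not in var_of:
                var_of[f[1]] = next(counter)
            return var_of[f[1]]
        p = next(counter)  # fresh variable psi' for this subformula
        if f[0] == "not":
            a = walk(f[1])
            clauses.extend([[-p, -a], [p, a]])
        elif f[0] == "or":
            a, b = walk(f[1]), walk(f[2])
            clauses.extend([[-p, a, b], [p, -a], [p, -b]])
        elif f[0] == "and":
            a, b = walk(f[1]), walk(f[2])
            clauses.extend([[-p, a], [-p, b], [p, -a, -b]])
        return p

    clauses.append([walk(formula)])  # the unit clause {phi'}
    return clauses
```

The resulting clause set is satisfiable exactly when the input formula is, and any model of the clauses restricts to a model of the formula.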
**New Method for PBMod-constraints**
We define a normal PBMod-constraint to be of the form:

$$
\sum_{i=1}^{n} a_i l_i \equiv b \pmod{M}, \quad (3)
$$

where $0 \le a_i < M$ for all $1 \le i \le n$ and $0 \le b < M$. Total assignment $\mathcal{A}$ is a solution to a PBMod-constraint iff the value of the left-hand side summation under $\mathcal{A}$, minus the value of the right-hand side of the equation, $b$, is a multiple of $M$.

**Definition 3** If $Q$ is the PB-constraint $\sum a_i l_i = b$ and $M$ an integer greater than 1, then by $Q[M]$ we denote the PBMod-constraint $\sum a'_i l_i \equiv b' \pmod{M}$ where:
1. $a'_i = a_i \bmod M$,
2. $b' = b \bmod M$.
**Example 3** Let $Q$ be the constraint $6x_1 + 5x_2 + 7x_3 = 12$. Then we have that

$Q[3]$ is $0x_1 + 2x_2 + 1x_3 \equiv 0 \pmod{3}$, and

$Q[5]$ is $1x_1 + 0x_2 + 2x_3 \equiv 2 \pmod{5}$.
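
Definition 3 amounts to reducing each coefficient and the bound modulo $M$; a minimal sketch (the representation of a constraint as a coefficient list plus bound is ours), using Example 3 as a check:

```python
# Compute Q[M] per Definition 3: reduce each coefficient and the
# bound modulo M. The literals themselves are unchanged.
def q_mod(coeffs, b, M):
    return [a % M for a in coeffs], b % M

# Example 3: Q is 6x1 + 5x2 + 7x3 = 12.
print(q_mod([6, 5, 7], 12, 3))  # ([0, 2, 1], 0)
print(q_mod([6, 5, 7], 12, 5))  # ([1, 0, 2], 2)
```
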
Every solution to a PB-constraint $Q$ is also a solution to $Q[M]$ for any $M \ge 2$. Also, for sufficiently large values of $M$, each solution to $Q[M]$ is a solution to $Q$.

**Proposition 1** If *Q* is a PB-constraint ∑ *a*ᵢ*l*ᵢ = *b* and *M* > ∑ *a*ᵢ then *Q*[*M*] and *Q* have the same satisfying assignments.
More interesting is that, for a given PB-constraint $Q$, we can construct sets of constraints $Q[M_i]$, none of which is equivalent to $Q$, but such that their conjunction has the same set of solutions as $Q$. Our goal will be to choose values of $M_i$ such that the resulting set of PBMod-constraints is easy to transform to CNF.

**Proposition 2** Let $Q$ be the PB-constraint $\sum a_i l_i = b$, and let $M_1$ and $M_2$ be integers with $M_3 = \text{lcm}(M_1, M_2)$. Further, let $S_1$ be the set of satisfying assignments for $Q[M_1]$, and $S_2$ the set of assignments satisfying $Q[M_2]$. Then the set of satisfying assignments for $Q[M_3]$ is $S_1 \cap S_2$.
Proposition 2 tells us that in order to find the set of solutions to a PBMod-constraint modulo $M_3 = \text{lcm}(M_1, M_2)$, one can find the set of solutions to two PBMod-constraints (modulo $M_1$ and $M_2$) and return their intersection. This generalizes in the obvious way.

**Lemma 1** Let $\{M_1, \dots, M_m\}$ be a set of $m$ positive integers and $M = \text{lcm}(M_1, \dots, M_m)$. Let $Q$ be the PB-constraint $\sum a_i l_i = b$. If $M > \sum a_i$, and $S_i$ is the set of satisfying assignments for $Q[M_i]$, then the set of satisfying assignments of $Q[M]$ is

$$
\bigcap_{i=1}^{m} S_i.
$$

We can now easily construct a valid translation of a PB-constraint from valid translations of a suitable set of PBMod-constraints.

**Theorem 1** Let $Q$ be a PB-constraint $\sum a_i l_i = b$, $\{M_1, \dots, M_m\}$ a set of positive integers, and $M = \text{lcm}(M_1, \dots, M_m)$ with $M > \sum a_i$. Suppose that, for each $i \in \{1, \dots, m\}$, $\langle v_i, C_i \rangle$ is a valid translation of $Q[M_i]$, each over distinct sets of auxiliary variables. Then for any set $C$ of clauses logically equivalent to $\bigcup_i C_i \cup C'$, where $C'$ is a set of clauses equivalent to $v \leftrightarrow (v_1 \wedge v_2 \wedge \cdots \wedge v_m)$, the pair $\langle v, C \rangle$ is a valid translation of $Q$.
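
The idea behind Lemma 1 and Theorem 1 can be checked by brute force for small constraints: when the lcm of the moduli exceeds $\sum a_i$, an assignment satisfies $\sum a_i x_i = b$ exactly when it satisfies every residue constraint. The constraint and moduli below are arbitrary illustrative choices (`math.lcm` requires Python 3.9+):

```python
import itertools
from math import lcm

def check(coeffs, b, moduli):
    """Return True if, over all 0/1 assignments, the exact constraint
    and the conjunction of residue constraints agree."""
    assert lcm(*moduli) > sum(coeffs)  # the hypothesis of Lemma 1
    for bits in itertools.product([0, 1], repeat=len(coeffs)):
        total = sum(a * x for a, x in zip(coeffs, bits))
        exact = (total == b)
        residues = all((total - b) % M == 0 for M in moduli)
        if exact != residues:
            return False
    return True

print(check([6, 5, 7], 12, [2, 3, 5]))  # True: lcm = 30 > 18
```
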

Since $\text{lcm}(2, \dots, k) \ge 2^{k-1}$ (Farhi and Kane 2009), the set $\mathbb{M}^\mathbb{N} = \{2, \dots, \lceil \log \sum a_i \rceil + 1\}$ can be used as the set of moduli for encoding $\sum a_i l_i = b$.

Another candidate for the set of moduli is the first $m$ prime numbers, where $m$ is the smallest number such that the product of the first $m$ primes exceeds $\sum a_i$. We will denote this set by $\mathbb{M}^p$. The following proposition gives an estimate for the size of the set $\mathbb{M}^p$, and for the value of $P_m$. As usual, we denote by $P_i$ the $i^{th}$ prime number.
**Proposition 3** Let $m$ be the smallest integer such that the product of the first $m$ primes is greater than $S$. Then:

1. $m = |\mathbb{M}^p| = \Theta\left(\frac{\ln S}{\ln \ln S}\right)$.

2. $P_m < \ln S.$
A third candidate is the set

$$
\mathbb{M}^{\mathbb{P}} = \{ P_i^{n_i} \mid P_i^{n_i - 1} \leq \lg S \leq P_i^{n_i} \}.
$$

It is straightforward to observe that $|\mathbb{M}^{\mathbb{P}}| \leq (\ln S)/(\ln \ln S)$ and that its maximum element is at most $\lg S$.
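
The three candidate moduli sets can be computed as follows; this is our reading of the definitions (with $\log$ taken base 2 for $\mathbb{M}^\mathbb{N}$ and $\mathbb{M}^{\mathbb{P}}$), and the function names are ours:

```python
import math

def primes():
    """Generate the primes 2, 3, 5, ... by trial division."""
    p, found = 2, []
    while True:
        if all(p % q for q in found):
            found.append(p)
            yield p
        p += 1

def moduli_consecutive(S):
    # M^N = {2, ..., ceil(log2 S) + 1}; lcm(2..k) >= 2^(k-1) > S.
    return list(range(2, math.ceil(math.log2(S)) + 2))

def moduli_primes(S):
    # M^p: the first m primes, m smallest with their product > S.
    out, prod = [], 1
    for p in primes():
        out.append(p)
        prod *= p
        if prod > S:
            return out

def moduli_prime_powers(S):
    # M^PP: for each prime p <= lg S, the least power of p >= lg S.
    t, out = math.log2(S), []
    for p in primes():
        if p > t:
            return out
        q = p
        while q < t:
            q *= p
        out.append(q)
```

For example, with $S = 18$ these yield $\{2,3,4,5,6\}$, $\{2,3,5\}$ and $\{8,9\}$ respectively; in each case the lcm of the set exceeds $S$.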
In general, the size of a description of PB-constraint $\sum a_i l_i = b$ is $\Theta(n \log a_{\text{Max}})$, where $n$ is the number of literals (coefficients) in the constraint and $a_{\text{Max}}$ is the value of the largest coefficient. The description of the PBMod-constraint $Q[M]$ has size $\Theta(n \log M)$. So, a translation for $Q[M]$ which produces a CNF with $O(n^{k_1} M^{k_2})$ clauses and variables, for some constants $k_1$ and $k_2$ (which may be exponential in the input size), provides a way to translate PB-constraints to CNF of size polynomial in the representation of the PB-constraint. Two such translations are described in the next section. We describe several others in (Aavani 2011).

# Encoding For PB-Mod Constraints
In this section, we describe translations of PBMod-constraints of the form (3) to CNF. Recall that these are an intermediate step: our ultimate goal is the translation of PB-constraints. For simplicity, we assume all coefficients in each PBMod-constraint are non-zero.
## Dynamic Programming Based Transformation (DP)
The translation presented here encodes PBMod-constraints using a dynamic programming approach. Let $D_m^j$ be a valid translation for $\sum_{i=1}^j a_i l_i \equiv m \pmod{M}$. We can use the following set of clauses to describe the relationship among $D_m^j$, $D_{m-a_j}^{j-1}$, $D_m^{j-1}$ and $l_j$ (subscript arithmetic is taken modulo $M$):
1. If both $D_{m-a_j}^{j-1}$ and $l_j$ are true, $D_m^j$ must be true, which can be represented by the clause $\{\neg D_{m-a_j}^{j-1}, \neg l_j, D_m^j\}$.
2. If $D_m^{j-1}$ is true and $l_j$ is false, $D_m^j$ must be true, i.e., $\{\neg D_m^{j-1}, l_j, D_m^j\}$.
3. If $D_m^j$ is true, either $D_m^{j-1}$ or $D_{m-a_j}^{j-1}$ must be true, i.e., $\{\neg D_m^j, D_m^{j-1}, D_{m-a_j}^{j-1}\}$.
For the base cases, when $j=0$, we have:
1. $D_0^0$ is true, i.e., $\{D_0^0\}$.
2. If $m \neq 0$, $D_m^0$ is false, i.e., $\{\neg D_m^0\}$.
**Proposition 4** Let $D = \{D_m^j\}$ and $C$ be the set of clauses used to describe the variables in $D$. Then the pair $\langle D_b^n, C \rangle$ is a valid translation for (3).
By applying standard dynamic programming techniques, we can avoid describing the unnecessary $D_m^j$, and obtain a smaller CNF.
By adding the following clauses, we can boost the performance of unit propagation.
1. If $D_{m_1}^j$ is true, $D_{m_2}^j$ should be false (for $m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^j, \neg D_{m_2}^j\}$.
2. There is at least one $m$ such that $D_m^j$ is true, i.e., $\{D_m^j \mid m = 0, \dots, M - 1\}$.
Binary Decision Diagrams (BDDs) are standard tools for translating constraints to SAT. One can construct a BDD-based encoding for PBMod-constraints similar to the BDD-based encoding for PB-constraints described in (Eén and Sorensson 2006). Unit propagation can infer more facts on the CNF generated by the boosted version of the DP-based encoding than on the CNF generated by the BDD-based encoding. Comparing the BDD-based and unboosted DP-based encodings, the former produces a larger CNF, while unit propagation infers the same facts on the output of both.
**Remark 1** In (Aavani 2011), we proved that the DP-based encoding, plus the extra clauses, has the following property. Given a partial assignment $\mathcal{A}$, if there is no total assignment $\mathcal{B}$ extending $\mathcal{A}$ such that $\mathcal{B}$ satisfies both $C$ and $\sum_{i=1}^j a_i l_i \equiv m \pmod{M}$, then unit propagation infers false as the value of variable $D_m^j$.
## Divide and Conquer Based Transformation (DC)
The translation presented next reflects a Divide and Conquer approach. We define auxiliary variables in $D = \{D_a^{s,l}\}$ such that variable $D_a^{s,l}$ describes the necessary and sufficient condition for satisfiability of the subproblem $\sum_{i=s}^{s+l-1} a_i x_i \equiv a \pmod{M}$.
Let $D^{s,l} = \{D_a^{s,l} : 0 \le a < M\}$. We can use the following set of clauses to describe the relation among the $3M$ variables in the sets $D^{s,l}$, $D^{s,\frac{l}{2}}$ and $D^{s+\frac{l}{2},\frac{l}{2}}$:
1. If both $D_{m_1}^{s,\frac{l}{2}}$ and $D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}$ are true, $D_{(m_1+m_2) \bmod M}^{s,l}$ should be true, i.e., $\{\neg D_{m_1}^{s,\frac{l}{2}}, \neg D_{m_2}^{s+\frac{l}{2},\frac{l}{2}}, D_{(m_1+m_2) \bmod M}^{s,l}\}$.
2. If $D_{m_1}^{s,l}$ is true, $D_{m_2}^{s,l}$ should be false (for $m_1 \neq m_2$), i.e., $\{\neg D_{m_1}^{s,l}, \neg D_{m_2}^{s,l}\}$.
3. There is at least one $m$ such that $D_m^{s,l}$ is true, i.e., $\{D_m^{s,l} \mid m = 0, \dots, M - 1\}$.
For the base cases, when $l=1$, we have:
1. $D_0^{s,1}$ is true iff $x_s$ is false, i.e., $\{x_s, D_0^{s,1}\}$ and $\{\neg x_s, \neg D_0^{s,1}\}$.
2. $D_1^{s,1}$ is true iff $x_s$ is true, i.e., $\{\neg x_s, D_1^{s,1}\}$ and $\{x_s, \neg D_1^{s,1}\}$.
**Proposition 5** Let $D = \{D_a^{s,l}\}$ and $C$ be the clauses which are used to describe the variables in $D$. Then the pair $\langle D_b^{1,n}, C \rangle$ is a valid translation for (3).
**Remark 2** In (Aavani 2011), we showed another version of DC-based encoding which also has the property we described in Remark 1.
**Theorem 2** The numbers of clauses and auxiliary variables used in the DP and DC translations of the PBMod-constraint $\sum a_i x_i \equiv b \pmod{M}$, and the depths of the formulas implicit in these CNF formulas, are as given in Table 1. The same properties, for the PB-constraint translations obtained from the DP and DC translations together with $\mathbb{M}^p$ or $\mathbb{M}^{\mathbb{P}}$ as moduli, are as given in Table 2.
<table><thead><tr><td>Encoder</td><td># of Aux. Vars.</td><td># of Clauses</td><td>Depth</td></tr></thead><tbody><tr><td>DP</td><td>O(nM)</td><td>O(nM)</td><td>O(n)</td></tr><tr><td>DC</td><td>O(nM)</td><td>O(nM<sup>2</sup>)</td><td>O(log n)</td></tr></tbody></table>
Table 1: Summary of size and depth of translations for $\sum a_i x_i \equiv b (\text{mod } M)$.
In the previous section, we described two candidates for sets of moduli, namely Prime and PrimePower, and in this section we explained two encodings for transforming PBMod-constraints to SAT, namely DP and DC. This gives us four different translations from PB-constraints to SAT. Table 2 summarizes the number of clauses and variables and the depth of the corresponding formula for these translations, and also for the Sorting Network based encoding (Eén 2005) and the Binary Adder encoding (Eén 2005).
<table><thead><tr><th>PBMod Encoder</th><th># of Vars.</th><th># of Clauses</th><th>Depth</th></tr></thead><tbody><tr><td>Prime.DP</td><td>O(n ln(S))</td><td>O(n ln(S))</td><td>O(n)</td></tr><tr><td>Prime.DC</td><td>O(n log(S)/ln ln(S))</td><td>O(n (log(S)/ln ln(S))<sup>2</sup>)</td><td>O(log n)</td></tr><tr><td>PPower.DP</td><td>O(n log(S)/log log(S))</td><td>O(n log(S)/log log(S))</td><td>O(n)</td></tr><tr><td>PPower.DC</td><td>O(n log(S)/log log(S))</td><td>O(n (log(S)/log log(S))<sup>2</sup>)</td><td>O(log n)</td></tr><tr><td>BAdder</td><td>O(n log(S))</td><td>O(n log(S))</td><td>O(log(S) * log n)</td></tr><tr><td>SN</td><td>O(n log(S/n) log<sup>2</sup>(n log(S/n)))</td><td>O(n log(S/n) log<sup>2</sup>(n log(S/n)))</td><td>O(log<sup>2</sup>(n log(S/n)))</td></tr></tbody></table>
Table 2: Summary of size and depth of different encodings for translating $\sum a_i x_i = b$, where $S = \sum a_i$.
## Performance of Unit Propagation
Here we examine some properties of the proposed encodings.
### Background
Generalized arc-consistency (GAC) is one of the desirable properties for an encoding, and it is related to the performance of the unit propagation (UP) procedure inside a SAT solver. Bailleux et al., in (Bailleux, Boufkhad, and Roussel 2009), defined UP-detect inconsistency and UP-maintain GAC for PB-constraint encodings. Although the way they define a translation is slightly different from ours, these two concepts can still be discussed in our context.
Let $E$ be an encoding method for PB-constraints, $Q$ be a PB-constraint on $X$ and $\langle v, C \rangle = E(Q)$ the translation for $Q$ obtained from encoding $E$. Then,
1. Encoding $E$ for constraint $Q$ supports UP-detect inconsistency if for every (partial) assignment $\mathcal{A}$, we have that every total extension of $\mathcal{A}[X]$ makes $Q$ false if and only if unit propagation derives $\{\neg v\}$ from $C \cup \{\{x\} \mid \mathcal{A} \models x\}$;
2. Encoding $E$ for constraint $Q$ is said to UP-maintain GAC if for every (partial) assignment $\mathcal{A}$ and any literal $l$ where $\text{var}(l) \in X$, we have that $l$ is true in every total extension of $\mathcal{A}$ that satisfies $Q$ if and only if unit propagation derives $\{l\}$ from $C \cup \{v\} \cup \{\{x\} \mid \mathcal{A} \models x\}$.
An encoding for PB-constraints is generalized arc-consistent, or simply arc-consistent, if it supports both UP-detect inconsistency and UP-maintain GAC for all possible constraints.
In this section, we show that there cannot be an encoding for PB-constraints of the form $\sum a_i l_i = b$ which always produces polynomial-size arc-consistent CNF unless P=co-NP. We also study the arc-consistency of our encodings and discuss why one can expect the proposed encodings to perform well.
### Hardness Result
Here, we show that a generalized arc-consistent encoding which always produces polynomial-size CNF is unlikely to exist.
**Theorem 3** There does not exist an encoding supporting UP-detect inconsistency which always produces polynomial-size CNF unless P=co-NP. There does not exist an encoding supporting UP-maintain GAC which always produces polynomial-size CNF unless P=co-NP.
**Proof (sketch)** The theorem can be proven by observing that a subset sum instance can be written as a PB-constraint, so an encoding supporting UP-detect inconsistency would enable us to prove unsatisfiability whenever the original subset sum instance is unsatisfiable. The proof of hardness for UP-maintain GAC is similar. For the complete proof, see (Aavani 2011).
## UP for Proposed Encodings
Although (unless P=co-NP) there is no polynomial-size arc-consistent encoding for PB-constraints, both the DP-based and DC-based encodings for PBMod-constraints are generalized arc-consistent.
Also, as mentioned before, unit propagation is able to infer inconsistency, on the CNF generated by these encodings, as soon as the current partial assignment cannot be extended to a total satisfying assignment. Notice that what we state here is stronger than arc-consistency, as it considers the auxiliary variables, too. More formally, let $\langle v, C \rangle$ be the output of the DP-based (DC-based) encoding for PBMod-constraint $Q$. Given a partial assignment $\mathcal{A}$ such that $v \in \mathcal{A}^+$,
$$ \mathcal{A} \not\models C \cup \{v\} \Leftrightarrow \mathcal{A} \not\models_{UP} C \cup \{v\}. \quad (4) $$
This feature enables SAT solvers to detect their mistakes on each of the PBMod-constraints as soon as such a mistake occurs.
In the rest of this section, we study the cases for which we expect SAT solvers to perform well on the output of our encoding. Let $Q$ be a PB-constraint on $X$, $\mathcal{A}$ be a partial assignment, and $\text{Ans}(\mathcal{A})$ be the set of total assignments to $X$ satisfying $Q$ and extending $\mathcal{A}[X]$. There are two situations in which UP is able to infer the values of input variables:
1. Unit Propagation Detects Inconsistency: One can infer that the current partial assignment, $\mathcal{A}$, cannot be extended to satisfy $Q$ by knowing that $\text{Ans}(\mathcal{A}) = \emptyset$. Recall that there are partial assignments and PB-constraints such that, although $\text{Ans}(\mathcal{A}) = \emptyset$, each of the $m$ PBMod-constraints has a non-empty solution set (but the intersection of their solution sets is empty).
If at least one of the $m$ PBMod-constraints is inconsistent with the current partial assignment, UP can infer inconsistency, in both DP and DC encodings.
2. Unit Propagation Infers the Value of an Input Variable: One can infer that the value of input variable $x_k$ is true/false if $x_k$ takes the same value in all the solutions to $Q$. For this kind of constraint, UP might be able to infer the value of $x_k$, too.
If there exists a PBMod-constraint for which all solutions extending $\mathcal{A}$ map $x_k$ to the same value, UP can infer the value of $x_k$.
These two cases are illustrated in the following example.
**Example 4** Let $Q(X)$ be $x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 = 12$.
1. Suppose the current partial assignment is $\mathcal{A} = \{\neg x_2, \neg x_4\}$ and $M = 5$. Then there is no total assignment satisfying $1x_1 + 3x_3 + 0x_5 \equiv 2 \pmod 5$.
2. Suppose the current partial assignment is $\mathcal{A} = \{\neg x_3, \neg x_5\}$ and $M = 2$. Then there are four total assignments extending $\mathcal{A}$ and satisfying the PBMod-constraint $1x_1 + 0x_2 + 0x_4 \equiv 0 \pmod 2$; in all of them, $x_1$ is mapped to false.
A special case of the second situation is when UP can detect the values of all $x \in X$ given the current partial assignment. In the rest of this section, we estimate the number of PB-constraints for which UP can solve the problem. More precisely, we give a lower bound on the number of PB-constraints for which, given the translation of those constraints, UP detects inconsistency or expands the empty assignment to a solution.
Let us assume the constraints are selected, uniformly at random, from $\{a_1 l_1 + \dots + a_n l_n = b : 1 \le a_i \le A = 2^{R(n)} \text{ and } 1 \le b \le nA\}$, where $R(n)$ is a polynomial in $n$ and $R(n) > n$. To simplify the analysis, we use the same prime moduli $\mathbb{P}^n = \{P_1 = 2, \dots, P_m = \theta(R(n)) > 2n\}$ for all constraints.
Consider the following PBMod-constraints:
$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n + 1 \pmod{P_m} \quad (5)$$
$$1x_1 + \cdots + 1x_{n-1} + 1x_n \equiv n \pmod{P_m} \quad (6)$$
It is not hard to verify that (5) does not have any solution and that (6) has exactly one solution. It is straightforward to verify that UP can infer inconsistency given a translation obtained by the DP-based (DC-based) encoding for (5), even if the current assignment is empty. Also, UP expands the empty assignment to an assignment mapping all $x_i$ to true on a translation for (6) obtained by either the DP-based or the DC-based encoding. The Chinese Remainder Theorem (Ding, Pei, and Salomaa 1996) implies that there are $(A/P_m)^{n+1} = 2^{(n+1)R(n)}/R(n)^{n+1}$ different PB-constraints of the form $\sum a_i l_i = b$ whose corresponding PBMod-constraints, with modulus $P_m$, are the same as (5). The same claim is true for (6).
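
For a small $n$, the two claims about (5) and (6) can be verified by brute force; here $P$ stands in for $P_m$, with $P > 2n$:

```python
import itertools

# Count 0/1 solutions of 1*x_1 + ... + 1*x_n ≡ target (mod P).
def count_solutions(n, target, P):
    return sum(1 for bits in itertools.product([0, 1], repeat=n)
               if sum(bits) % P == target % P)

n, P = 4, 11
print(count_solutions(n, n + 1, P))  # constraint (5): prints 0
print(count_solutions(n, n, P))      # constraint (6): prints 1
```
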
The above argument shows that, for the proposed encoding, the number of easy-to-solve PB-constraints is huge. In (Aavani 2011), we showed that this number is much smaller for the Sorting Network encoding:
**Observation 1** (Aavani 2011) There are at most $(\log A)^n$ instances for which the CNF produced by the Sorting Network encoding maintains arc-consistency, while this number for our encoding is at least $(A/\log A)^n$. So, if $A = 2^{R(n)}$, we almost always have $2^{R(n)}/R(n) \gg R(n)$.
**Observation 2** (Aavani 2011) There is a family of PB-constraints whose translation through totalizer-based encoding is not arc-consistent but the translation obtained by our encoding is arc-consistent.
## Experimental Evaluation
By combining any modulus selection approach with any PBMod-constraint encoder, one can construct a PB-constraint solver. In this section, we selected the following configurations: Prime with DP (Prime.DP) and Prime with DC (Prime.DC). We used CryptoMiniSAT as the SAT solver for our encodings, as it performed better than MiniSAT in our initial benchmarking experiments.
To evaluate the performance of these configurations, we used the Number Partitioning Problem, NPP. Given a set of integers $S = \{a_1, \dots, a_n\}$, NPP asks whether there is a subset of $S$ such that the sum of its members is exactly $\sum a_i/2$. Following (Gent and Walsh 1998), we generated 100 random instances of NPP for a given $n$ and $L$ as follows:
|
| 418 |
+
|
| 419 |
+
Create set $S = \{a_1, \dots, a_n\}$ such that each of $a_i$ is selected independently at random from $[0 \dots 2^L]$.
|
| 420 |
+
|
| 421 |
+
We ran each instance on our two configurations and also on two other encodings, Sorting Network based encoding (SN), Binary Adder Encoding (BADD)(Eén and Sorensson 2006), provided by MiniSAT+¹. All running times, reported in this paper, are the total running times (the result of summation of times spent to generate CNF formulas and time spent to solve the CNF formulas). We also tried to run the experiments with BDD encoder, but as the CNF produced by BDD encoder is exponentially big, it failed to solve medium and large size instances.
|
| 422 |
+
|
| 423 |
+
Before we describe the result of experiments, we discuss some properties of the number partitioning problem.
|
| 424 |
+
|
| 425 |
+
### Number Partitioning Problem
|
| 426 |
+
|
| 427 |
+
The Number partitioning problem is an NP-Complete problems, and it can also be seen as a special case of subset sum problem. In the SAT context, an instance of NPP can be rewritten as a PB-constraint whose comparison operator is “=”. Neither this problem nor subset sum problem has received much attention by the SAT community.
|
| 428 |
+
|
| 429 |
+
The size of an instance of NPP, where the set $S$ has $n$ elements and $a_{Max}$ is the maximum absolute value in $S$, is $\theta(n \log(a_{Max})) + n$. It is known that if the value of $a_{Max}$ is polynomial with respect to $n$, the standard dynamic programming approach can solve this problem in time $O(n a_{Max})$, which is polynomial with respect to the instance size. If $a_{Max}$ is too large, $2^{2^{\Omega(n)}}$, the naive algorithm, which generates all the $2^n$ subsets of $S$, works in polynomial time with respect to the instance size. The hard instances for this problem are those in which $a_{Max}$ is neither too small nor too large with respect to $n$.
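The pseudo-polynomial dynamic program mentioned above can be sketched as follows; the bitmask representation of reachable sums is an implementation convenience, not part of the original algorithm:

```python
def has_perfect_split(s):
    """Standard subset-sum dynamic program: decide whether some subset
    of s sums to sum(s)/2.  Bit t of `reachable` is set iff some subset
    sums to t, so the work is proportional to n * sum(s), i.e.
    pseudo-polynomial in a_Max as noted in the text."""
    total = sum(s)
    if total % 2:
        return False
    reachable = 1
    for a in s:
        reachable |= reachable << a
    return bool((reachable >> (total // 2)) & 1)

assert has_perfect_split([4, 5, 6, 7, 8])      # {4,5,6} vs {7,8}
assert not has_perfect_split([2, 3, 4])
```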
In (Borgs, Chayes, and Pittel 2001), the authors defined $k = L/n$ and showed that NPP has a phase transition at $k = 1$: for $k < 1$, there are many perfect partitions with probability tending to 1 as $n \to \infty$, whereas for $k > 1$, there are no perfect partitions with probability tending to 1 as $n \to \infty$.
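This phase transition is easy to observe empirically. The sketch below estimates, for hypothetical parameter pairs, the fraction of random instances admitting a perfect partition (discrepancy at most 1, following Borgs et al.):

```python
import random

def subset_sums(s):
    # Set of all achievable subset sums of s.
    sums = {0}
    for a in s:
        sums |= {t + a for t in sums}
    return sums

def perfect_fraction(n, L, trials=30, seed=0):
    """Fraction of random instances (a_i drawn from [0 .. 2^L]) that
    admit a perfect partition, i.e. a split with discrepancy <= 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = [rng.randint(0, 2 ** L) for _ in range(n)]
        total, sums = sum(s), subset_sums(s)
        hits += total // 2 in sums or (total + 1) // 2 in sums
    return hits / trials

# Below the transition (k = L/n < 1) perfect partitions abound;
# above it (k > 1) they essentially disappear.
assert perfect_fraction(20, 10) > perfect_fraction(10, 30)
```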
### Experiments
All the experiments were performed on a Linux cluster (Intel(R) Xeon(R) 2.66GHz). We set the time limit to
¹http://minisat.se/

---PAGE_BREAK---
Figure 1: The left-hand figure plots the best solver for pairs $n$ and $L$ ($n = 3 \cdots 30$, $L = 3 \cdots 2n$). The right-hand figure shows the average solving time, in seconds, of the engines that solved all 100 instances within the 10-minute timeout, for $n = L \in \{3, \dots, 30\}$.
be 10 minutes. During our experiments, we noticed that the sorting network encoding in MiniSAT+ incorrectly announces some unsatisfiable instances to be satisfiable (an example of which is the following constraint). We did not investigate the reason for this issue in the source code of MiniSAT+, and all the reported timings are using the broken code.
$$5x_1 + 7x_2 + 1x_3 + 5x_4 = 9.$$
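That this constraint is indeed unsatisfiable can be confirmed by exhaustive enumeration of its 16 assignments:

```python
from itertools import product

# Exhaustive check: no 0/1 assignment satisfies 5x1 + 7x2 + x3 + 5x4 = 9,
# so a solver reporting SAT for it is misbehaving.
coeffs, b = [5, 7, 1, 5], 9
sat = any(sum(a * x for a, x in zip(coeffs, xs)) == b
          for xs in product((0, 1), repeat=4))
assert not sat
```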
In our experiments, we generated 100 instances for each $n \in \{3..30\}$ and $L \in \{3..2n\}$. We say a solver wins on a set of instances if it solves more instances than the others; in the case of a tie, we decide the winner by looking at the average running time. The instances on which each solver performed best are plotted in Figure 1. As the Sorting Network solver was never a winner on any of the sets, it does not show up in the graph.

One can observe the following patterns from the data presented in Figure 1:

1. For $n < 15$, all solvers successfully solve all the instances.

2. The Sorting Network solver fails to solve all the instances when $n = 20$.

3. BADD solves all the instances when $n = L = 24$ in a reasonable time, but it suddenly fails when $n$ (and $L$) get larger.

4. For large enough $n$ ($n > 15$), BADD is the winner only when $L$ is small.

5. For large enough $n$ ($n > 15$), either Prime.DC or Prime.DP is the best performing solver.
## Conclusion and Future Work

We presented a method for translating Pseudo-Boolean constraints into CNF. The size of the produced CNF is polynomial with respect to the input size. We also showed that for exponentially many instances, the produced CNF is arc-consistent. The number of arc-consistent instances for our encodings is much bigger than that of the existing encodings.

In our experimental evaluation section, we described a set of randomly generated number partitioning instances with two parameters, *n* and *L*, where *n* describes the size of our set and $2^L$ is the maximum value in the set. The experimental results suggest that the Prime.DP and Prime.DC encodings outperform the Binary Adder and Sorting Network encodings.
### Future Work

The upper bounds for our encodings, presented in Table 2, are not tight. We hope to improve these and give the exact asymptotic sizes. Further experimental evaluation is needed to determine the relative performance of the various methods on more practical instances, and on instances with larger numbers of variables. Finally, we hope to develop heuristics for automatically choosing the best encoding to use for any given PB constraint.

## References
Aavani, A. 2011. Translating pseudo-boolean constraints into cnf. CoRR abs/1104.1479.
Aloul, F.; Ramani, A.; Markov, I.; and Sakallah, K. 2002. PBS: a backtrack-search pseudo-boolean solver and optimizer. In *Proceedings of the 5th International Symposium on Theory and Applications of Satisfiability*, 346–353. Citeseer.
Bailleux, O.; Boufkhad, Y.; and Roussel, O. 2009. New encodings of pseudo-Boolean constraints into CNF. *Theory and Applications of Satisfiability Testing – SAT 2009*, 181–194.
Borgs, C.; Chayes, J.; and Pittel, B. 2001. Phase transition and finite-size scaling for the integer partitioning problem. Random Structures & Algorithms 19(3-4):247–288.
---PAGE_BREAK---
Ding, C.; Pei, D.; and Salomaa, A. 1996. *Chinese remainder theorem: applications in computing, coding, cryptography*. World Scientific Publishing Co., Inc. River Edge, NJ, USA.
Eén, N., and Sorensson, N. 2006. Translating pseudo-boolean constraints into SAT. *Journal on Satisfiability, Boolean Modeling and Computation* 2(3-4):1-25.
Eén, N. 2005. *SAT Based Model Checking*. Ph.D. Dissertation, Department of Computing Science, Chalmers University of Technology and Goteborg University.
Farhi, B., and Kane, D. 2009. New results on the least common multiple of consecutive integers. In *Proc. Amer. Math. Soc*, volume 137, 1933-1939.
Gent, I. P., and Walsh, T. 1998. Analysis of heuristics for number partitioning. *Computational Intelligence* 14(3):430-451.
Sheini, H., and Sakallah, K. 2006. Pueblo: A hybrid pseudo-boolean SAT solver. *Journal on Satisfiability, Boolean Modeling and Computation* 2:61-96.
Tseitin, G. 1968. On the complexity of derivation in propositional calculus. *Studies in constructive mathematics and mathematical logic* 2(115-125):10-13.
samples_new/texts_merged/2590883.md
---PAGE_BREAK---
# A LOADING-DEPENDENT MODEL OF PROBABILISTIC CASCADING FAILURE
**IAN DOBSON**

Electrical & Computer Engineering Department
University of Wisconsin-Madison
Madison, WI 53706
E-mail: dobson@engr.wisc.edu

**BENJAMIN A. CARRERAS**

Oak Ridge National Laboratory
Oak Ridge, TN 37831
E-mail: carrerasba@ornl.gov

**DAVID E. NEWMAN**

Physics Department
University of Alaska
Fairbanks, AK 99775
E-mail: ffden@uaf.edu
We propose an analytically tractable model of loading-dependent cascading failure that captures some of the salient features of large blackouts of electric power transmission systems. This leads to a new application and derivation of the quasibinomial distribution and its generalization to a saturating form with an extended parameter range. The saturating quasibinomial distribution of the number of failed components has a power-law region at a critical loading and a significant probability of total failure at higher loadings.
# 1. INTRODUCTION
Cascading failure is the usual mechanism for large blackouts of electric power transmission systems. For example, long, intricate cascades of events caused the August 1996 blackout in northwestern America [25] that disconnected 30,390 MW of power
---PAGE_BREAK---
to 7.5 million customers [23]. An even more spectacular example is the August 2003 blackout in northeastern America that disconnected 61,800 MW of power to an area spanning 8 states and 2 provinces and containing 50 million people [33]. The vital importance of the electrical infrastructure to society motivates the construction and study of models of cascading failure.
In this article, we describe some of the salient features of cascading failure in blackouts with an analytically tractable probabilistic model. The features that we abstract from the formidable complexities of large blackouts are the large but finite number of components, components that fail when their load exceeds a threshold, an initial disturbance loading the system, and the additional loading of components by the failure of other components. The initial overall system stress is represented by upper and lower bounds on a range of initial component loadings. The model neglects the lengths of time between events and the diversity of power system components and interactions. Of course, an analytically tractable model is necessarily much too simple to represent with realism all of the aspects of cascading failure in blackouts; the objective is, rather, to help understand some global systems effects that arise in blackouts and in more detailed models of blackouts. Although our main motivation is large blackouts, the model is sufficiently simple and general that it could be applied to cascading failure of other large, interconnected infrastructures.
We summarize our cascading failure model and indicate some of the connections to the literature that are elaborated later. The model has many identical components randomly loaded. An initial disturbance adds load to each component and causes some components to fail by exceeding their loading limit. Failure of a component causes a fixed load increase for other components. As components fail, the system becomes more loaded and cascading failure of further components becomes likely. The probability distribution of the number of failed components is a saturating quasibinomial distribution. The quasibinomial distribution was introduced by Consul [11] and further studied by Burtin [3], Islam, O'Shaughnessy, and Smith [19], and Jaworski [20]. The saturation in our model extends the parameter range of the quasibinomial distribution, and the saturated distribution can represent highly stressed systems with a high probability of all components failing. Explicit formulas for the saturating quasibinomial distribution are derived using a recursion and via the quasimultinomial distribution of the number of failures in each stage of the cascade. These derivations of the quasibinomial distribution and its generalization to a saturating form appear to be novel. The cascading failure model can also be expressed as a queuing model, and in the nonsaturating case, the number of customers in the first busy period is known to be quasibinomial [10,32].
The article is organized as follows. Section 2 describes cascading failure blackouts and Section 3 describes the model and its normalization. Section 4 derives the saturating quasibinomial distribution of the number of failures and shows how the saturation generalizes the quasibinomial distribution and extends its parameter range. Section 5 illustrates the use of the model in studying the effect of system loading.
---PAGE_BREAK---
## 2. THE NATURE OF CASCADING FAILURE BLACKOUTS
Bulk electrical power transmission systems are complex networks of large numbers of components that interact in diverse ways. For example, most of America and Canada east of the Rocky Mountains is supplied by a single network running at a shared supply frequency. This network includes thousands of generators, tens of thousands of transmission lines and network nodes, and about 100 control centers that monitor and control the network flows. The flow of power and some dynamical effects propagate on a continental scale. All of the electrical components have limits on their currents and voltages. If these limits are exceeded, automatic protection devices or the system operators disconnect the component from the system. We regard the disconnected component as failed because it is not available to transmit power (in practice, it will be reconnected later). Components can also fail in the sense of misoperation or damage due to aging, fire, weather, poor maintenance, or incorrect design or operating settings. In any case, the failure causes a transient and causes the power flow in the component to be redistributed to other components according to circuit laws and subsequently redistributed according to automatic and manual control actions. The transients and readjustments of the system can be local in effect or can involve components far away, so that a component disconnection or failure can effectively increase the loading of many other components throughout the network. In particular, the propagation of failures is not limited to adjacent network components. The interactions involved are diverse and include deviations in power flows, frequency, and voltage, as well as operation or misoperation of protection devices, controls, operator procedures, and monitoring and alarm systems. However, all of the interactions between component failures tend to be stronger when components are highly loaded. 
For example, if a more highly loaded transmission line fails, it produces a larger transient, there is a larger amount of power to redistribute to other components, and failures in nearby protection devices are more likely. Moreover, if the overall system is more highly loaded, components have smaller margins so they can tolerate smaller increases in load before failure, the system nonlinearities and dynamical couplings increase, and the system operators have fewer options and more stress.
A typical large blackout has an initial disturbance or trigger event, followed by a sequence of cascading events. Each event further weakens and stresses the system and makes subsequent events more likely. Examples of an initial disturbance are short circuits of transmission lines through untrimmed trees, protection device misoperation, and bad weather. The blackout events and interactions are often rare, unusual, or unanticipated because the likely and anticipated failures are already routinely accounted for in power system design and operation. The complexity is such that it can take months after a large blackout to sift through the records, establish the events occurring, and reproduce with computer simulations and hindsight a causal sequence of events.
The historically high reliability of North American power transmission systems is largely due to estimating the transmission system capability and designing
---PAGE_BREAK---
and operating the system with margins with respect to a chosen subset of likely and serious contingencies. The analysis is usually either a deterministic analysis of estimated worst cases or a Monte Carlo simulation of moderately detailed probabilistic models that capture steady-state interactions [2]. Combinations of likely contingencies and some dependencies between events such as common mode or common cause are sometimes considered. The analyses address the first few likely failures rather than the propagation of many rare or unanticipated failures in a cascade.
We briefly review some other approaches to cascading failure in power system blackouts. Carreras, Lynch, Dobson, and Newman [4] represented cascading transmission line overloads and outages in a power system model using the DC load flow approximation and standard linear programming optimization of the generation dispatch. The model shows critical point behavior as load is increased and can show power tails similar to those observed in blackout data. Chen and Thorp [9] modeled power system blackouts using the DC load flow approximation and standard linear programming optimization of the generation dispatch and represented in detail hidden failures of the protection system. The expected blackout size is obtained using importance sampling and it shows some indications of a critical point as loading is increased. Rios, Kirschen, Jawayeera, Nedic, and Allan [30] evaluated expected blackout cost using Monte Carlo simulation of a power system model that represents the effects of cascading line overloads, hidden failures of the protection system, power system dynamic instabilities, and the operator responses to these phenomena. Ni, McCalley, Vittal, and Tayyib [26] evaluate expected contingency severities based on real-time predictions of the power system state to quantify the risk of operational conditions. The computations account for current and voltage limits, cascading line overloads, and voltage instability. Roy, Asavathiratham, Lesieutre, and Verghese [31] constructed randomly generated tree networks that abstractly represent influences between idealized components. Components can be failed or operational according to a Markov model that represents both internal component failure and repair processes and influences between components that cause failure propagation. The effects of the network degree and the intercomponent influences on the failure size and duration were studied. 
Pepyne, Panayiotou, Cassandras, and Ho [29] also used a Markov model for discrete state power system nodal components, but they propagated failures along the transmission lines of a power systems network with a fixed probability. They studied the effect of the propagation probability and maintenance policies that reduce the probability of hidden failures. The challenging problem of determining cascading failure due to dynamic transients in hybrid nonlinear differential equation models was addressed by DeMarco [15] using Lyapunov methods applied to a smoothed model and by Parrilo, Lall, Paganini, Verghese, Lesieutre, and Marsden [28] using Karhunen-Loeve and Galerkin model reduction. Watts [34] described a general model of cascading failure in which failures propagate through the edges of a random network. Network nodes have a random threshold and fail when this threshold is exceeded by a sufficient fraction of failed nodes one edge away. Phase transitions causing large cascades can occur when the net-
---PAGE_BREAK---
work becomes critically connected by having a sufficient average degree or when a highly connected network has a sufficiently low average degree so that the effect of a single failure is not swamped by a high connectivity to unfailed nodes. Lindley and Singpurwalla [24] described some foundations for causal and cascading failure in infrastructures and model cascading failure as an increase in a component failure rate within a time interval after another component fails. Initial versions of the cascading failure model of this article appear in Dobson, Chen, Thorp, Carreras, and Newman [18] and Dobson, Carreras, and Newman [16].
## 3. DESCRIPTION OF MODEL
The model has *n* identical components with random initial loads. For each component, the minimum initial load is $L^{\min}$ and the maximum initial load is $L^{\max}$. For $j = 1, 2, \dots, n$, component *j* has initial load $L_j$ that is a random variable uniformly distributed in [$L^{\min}, L^{\max}$]. $L_1, L_2, \dots, L_n$ are independent.
Components fail when their load exceeds $L^{\text{fail}}$. When a component fails, a fixed and positive amount of load *P* is transferred to each of the components.
To start the cascade, an initial disturbance loads each component by an additional amount *D*. Some components may then fail depending on their initial loads $L_j$, and the failure of each of these components will distribute an additional load *P* that can cause further failures in a cascade. The components become progressively more loaded as the cascade proceeds.
In particular, the model produces failures in stages *i* = 0,1,2,... according to the following algorithm, where $M_i$ is the number of failures in stage *i*.
**CASCADE Algorithm**
0. All *n* components are initially unfailed and have initial loads $L_1, L_2, \dots, L_n$ that are independent random variables uniformly distributed in [$L^{\min}, L^{\max}$].
1. Add the initial disturbance *D* to the load of each component. Initialize the stage counter *i* to zero.
2. Test each unfailed component for failure: For *j* = 1, ..., *n*, if component *j* is unfailed and its load is greater than $L^{\text{fail}}$, then component *j* fails. Suppose that $M_i$ components fail in this step.
3. Increment the component loads according to the number of failures $M_i$: Add $M_i P$ to the load of each component.
4. Increment *i* and go to step 2.
The CASCADE algorithm has the property that if there are no failures in stage *j* so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$ so that there are no subsequent failures (in step 2, $M_j$ can be zero either because all the components have already failed or because the loads of the unfailed components are less than $L^{\text{fail}}$). Since there are *n* components, it follows that $M_n = 0$ and that the outcome with the maximum number of stages with nonzero failures is $1 = M_0 = M_1 = \dots = M_{n-1}$. We are most interested in the total number of failures $S = M_0 + M_1 + \dots + M_{n-1}$.
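The steps above translate directly into a short simulation. This sketch works in the normalized form described later in this section (initial loads uniform in [0,1], failure threshold 1); the parameter values are illustrative only:

```python
import random

def cascade(n, d, p, seed=0):
    """One run of the CASCADE algorithm in normalized form: initial
    loads uniform in [0, 1], failure threshold 1, initial disturbance d,
    load increment p per failure.  Returns [M_0, M_1, ...], the
    (nonzero) numbers of failures per stage."""
    rng = random.Random(seed)
    loads = [rng.random() for _ in range(n)]      # step 0: initial loads
    failed = [False] * n
    extra = d                                     # step 1: disturbance
    stages = []
    while True:
        m = 0                                     # step 2: test for failure
        for j in range(n):
            if not failed[j] and loads[j] + extra > 1:
                failed[j] = True
                m += 1
        if m == 0:                                # M_i = 0 ends the cascade
            break
        stages.append(m)
        extra += m * p                            # step 3: add M_i * p to all
    return stages

stages = cascade(n=100, d=0.05, p=0.01)
assert 0 <= sum(stages) <= 100
assert all(m >= 1 for m in stages)
```

As the text notes, the loop exits at the first stage with no failures, so at most $n$ stages have nonzero $M_i$.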
---PAGE_BREAK---
When the model in an application is being interpreted, the load increment *P* need not correspond only to transfer of a physical load such as the power flow through a component. Many ways by which a component failure makes the failure of other components more likely can be thought of as increasing an abstract "load" on the other components until failure occurs when a threshold is reached.
It is useful to normalize the loads and model parameters so that the initial loads lie in [0,1] and $L^{\text{fail}} = 1$ while preserving the sequence of component failures and $M_0, M_1, \dots$. First, note that the sequence of component failures and $M_0, M_1, \dots$ are unchanged by adding the same constant to the initial disturbance *D* and the failure load $L^{\text{fail}}$. In particular, choosing the constant to be $L^{\max} - L^{\text{fail}}$, the initial disturbance *D* is modified to $D + (L^{\max} - L^{\text{fail}})$ and the failure load $L^{\text{fail}}$ is modified to $L^{\text{fail}} + (L^{\max} - L^{\text{fail}}) = L^{\max}$. Then all of the loads are shifted and scaled to yield normalized parameters. The normalized initial load on component *j* is $\ell_j = (L_j - L^{\min})/(L^{\max} - L^{\min})$ so that $\ell_j$ is a random variable uniformly distributed on [0,1]. The normalized minimum initial load is zero, and the normalized maximum initial load and the normalized failure load are both one. The normalized modified initial disturbance and the normalized load increase when a component fails are
$$d = \frac{D + L^{\max} - L^{\text{fail}}}{L^{\max} - L^{\min}}, \quad p = \frac{P}{L^{\max} - L^{\min}}. \qquad (1)$$
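A worked numerical instance of Eq. (1); the raw parameter values are chosen for illustration only:

```python
# Illustrative raw parameters (not from the paper): initial load range,
# failure threshold, disturbance, and load transfer per failure.
L_min, L_max, L_fail = 0.2, 0.9, 1.0
D, P = 0.17, 0.035

# Normalized disturbance and load increment from Eq. (1).
d = (D + L_max - L_fail) / (L_max - L_min)
p = P / (L_max - L_min)

assert abs(d - 0.1) < 1e-12
assert abs(p - 0.05) < 1e-12
```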
An alternative way to describe the model follows. It is convenient to use the normalized parameters in Eq. (1). Let $N(t)$ be the number of components with loads in $(1-t, 1]$. If the $n$ initial component loadings are regarded as $n$ points in $[0, 1] \subset \mathbb{R}$, then $N(t)$ is the number of points greater than $1-t$. Then $0 \le N(t) \le n$, the sample paths of $N$ are nondecreasing, and $N(t) = 0$ for $t \le 0$ and $N(t) = n$ for $t \ge 1$.
Let the number of components failed at or before stage *j* be $S_j = M_0 + M_1 + \dots + M_j$. Then, assuming $S_{-1} = 0$, the CASCADE algorithm generates $S_0, S_1, \dots$ according to
$$S_j = N(d + S_{j-1}p), \quad j = 0, 1, \dots \qquad (2)$$
Then $0 \le S_j \le n$, $S_j$ is nondecreasing, and $S_k = S_{k+1}$ implies that $S_j = S_{j+1}$ for $j \ge k$. The minimum such $k$ is the maximum stage number in which failures occur and $S_{-1} < S_0 < S_1 < \dots < S_k = S_{k+1} = \dots$ and the total number of failures $S = S_k$; that is,
$$N(d + Sp) = S, \qquad (3)$$
$$N(d + S_j p) > S_j, \quad -1 \le j < k. \qquad (4)$$
Moreover, for $j < k$ and $r = 0, 1, \dots, M_{j+1} - 1$,
$$N(d + (S_j + r)p) \ge N(d + S_j p) = S_{j+1} = S_j + M_{j+1} > S_j + r. \qquad (5)$$
---PAGE_BREAK---
Therefore, $N(d + sp) > s$ for $s = 0, 1, \dots, S - 1$, and this inequality and Eq. (3) allow the total number of failures to be characterized as

$$
S = \min\{s \mid N(d + sp) = s,\ s \in \{0,1,2,\dots\}\}. \qquad (6)
$$
If, at stage *j*, $d + S_j p > 1$, we say that the model saturates. Saturation implies $S_{j+1} = n$. Saturation never occurs if *d* and *p* are small enough that $d + np < 1$.
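The fixed-point characterization in Eq. (6) can be cross-checked numerically against the stage recursion of Eq. (2); the sketch below does this for random instances:

```python
import random

def N_factory(loads):
    # N(t): number of components with normalized load in (1 - t, 1].
    return lambda t: sum(1 for x in loads if x > 1 - t)

def total_failures(loads, d, p):
    """Total failures S via the stage recursion S_j = N(d + S_{j-1} p),
    starting from S_{-1} = 0 and stopping at the first fixed point."""
    N = N_factory(loads)
    s_prev, s = 0, N(d)
    while s > s_prev:
        s_prev, s = s, N(d + s * p)
    return s

rng = random.Random(1)
for _ in range(200):
    loads = [rng.random() for _ in range(20)]
    d, p = rng.uniform(0.0, 0.3), rng.uniform(0.0, 0.05)
    N = N_factory(loads)
    S = total_failures(loads, d, p)
    # Eq. (6): S is the least s in {0, 1, ...} with N(d + s p) = s.
    assert S == min(s for s in range(len(loads) + 1) if N(d + s * p) == s)
```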
The model can be formulated as a queue with a single server. Exactly $n$ customers arrive during a given hour independently and uniformly. The server is available to serve these customers at time $d$ after the start of the hour because of completing some other task. The customer service time is $p$. Then, $S$ is the number of customers that arrive during the first busy period. The queue saturates when the first busy period runs past the end of the hour. Charalambides [10] and Takács [32] analyzed this queue in the nonsaturating case described in Section 4.3.
The model can also be recast in the form of an approximate and idealized fiber bundle model. There are $n$ identical, parallel fibers in the bundle. The $L_j$ of the unnormalized model now indicates breaking strength: Fiber $j$ has random breaking strength $L^{\text{fail}} - L_j$ that is uniformly distributed in [$L^{\text{fail}} - L^{\max}$, $L^{\text{fail}} - L^{\min}$]. Each fiber has zero load initially. Then, an initial force is applied to the bundle that increases the load of each fiber to $D$ and this starts a burst avalanche of fiber breaks of size $S$. When a fiber breaks, it distributes a constant amount of load $P$ to all the other fibers. In contrast, and with better physical justification, idealized fiber bundle models with global redistribution as described by Kloster, Hansen, and Hemmer [22] redistribute the current fiber load equally to the remaining fibers.
**4. DISTRIBUTION OF NUMBER OF FAILURES**

The main result is that the distribution of the total number of component failures $S$ is

$$
P[S=r] = \begin{cases}
\binom{n}{r} \phi(d) (d+rp)^{r-1} (\phi(1-d-rp))^{n-r}, & r=0,1,\ldots,n-1 \\
1 - \sum_{s=0}^{n-1} P[S=s], & r=n,
\end{cases} \tag{7}
$$

where $p \ge 0$ and the saturation function is

$$
\phi(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 1 \\ 1, & x > 1. \end{cases} \tag{8}
$$

It is convenient to assume that $0^0 \equiv 1$ and $0/0 \equiv 1$ when these expressions arise in any formula in this article.
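Equation (7) is straightforward to evaluate numerically. The sketch below (the function names are mine) implements Eqs. (7) and (8) for $0 < d < 1$; Python's `0**0 == 1` supplies the $0^0 \equiv 1$ convention. By construction the probabilities sum to 1, and in a saturating case the intermediate cascade sizes with $d + rp > 1$ get zero probability.

```python
from math import comb

def phi(x):
    """Saturation function, Eq. (8)."""
    return min(max(x, 0.0), 1.0)

def cascade_pmf(n, d, p):
    """Distribution of the total number of failures S, Eq. (7),
    assuming 0 < d < 1."""
    probs = [comb(n, r) * phi(d) * (d + r * p) ** (r - 1)
             * phi(1 - d - r * p) ** (n - r) for r in range(n)]
    probs.append(1.0 - sum(probs))   # P[S = n] by complement
    return probs

pmf = cascade_pmf(5, 0.3, 0.2)       # d + n p = 1.3 > 1: a saturating case
print(pmf, sum(pmf))
```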
If $d \ge 0$ and $d + np \le 1$, then there is no saturation ($\phi(x) = x$) and Eq. (7) reduces to the quasibinomial distribution

$$
P[S=r] = \binom{n}{r} d(d+rp)^{r-1}(1-d-rp)^{n-r}. \tag{9}
$$

The quasibinomial distribution was introduced by Consul [11] to model an urn problem in which a player makes strategic decisions. Burtin [3] derived the distribution of the number of initially uninfected nodes that become infected in an inverse epidemic process in a random mapping. This distribution is quasibinomial, with $d$ the fraction of initially infected nodes and $p$ the uniform random mapping probability. Islam et al. [19] interpreted $d$ and $p$ as primary and secondary infection probabilities and applied the quasibinomial distribution to data on the final size of influenza epidemics. Jaworski [20] generalized the derivation to a random mapping with a general fixed-point probability.

The cascading failure model gives a new application and interpretation of the quasibinomial distribution. Moreover, the saturation in Eq. (7) extends the range of parameters of the quasibinomial distribution to allow $d + np > 1$. Section 5 shows that this extended parameter range can describe regimes with a high probability of all components failing.
The next two subsections derive Eq. (7) from the CASCADE algorithm in two ways: by means of a recursion and by means of the quasimultinomial joint distribution of $M_0, M_1, \dots, M_{n-1}$.

**4.1. Recursion**

It is convenient to show the dependence of the distribution of number of failures on the normalized parameters by writing $P[S=r] = f(r,d,p,n)$.

In the case of $n=0$ components,

$$
f(0, d, p, 0) = 1. \tag{10}
$$

According to the CASCADE algorithm, when the initial disturbance $d \le 0$, no components fail, and when $d \ge 1$, all $n$ components fail. Then

$$
f(r, d, p, n) = \begin{cases} 1 - \phi(d), & r=0 \\ 0, & 0 < r < n \\ \phi(d), & r=n \end{cases} \qquad (d \le 0 \text{ or } d \ge 1) \text{ and } n > 0. \tag{11}
$$

We assume $n > 0$ and $0 < d < 1$ for the rest of the subsection.

The initial disturbance $d$ causes stage 0 failure of the components that have initial load $\ell$ in $(1-d, 1]$. Therefore, the probability of any component failing in stage 0 is $d$ and

$$
P[M_0 = k] = \binom{n}{k} d^k (1-d)^{n-k}. \tag{12}
$$

Suppose that $M_0 = k$ and consider the $n-k$ components that did not fail in stage 0. Since none of the $n-k$ components failed in stage 0, their initial loads $\ell$ must lie in $[0, 1-d]$ and the distribution of their initial loads conditioned on not failing in stage 0 is uniform in $[0, 1-d]$. In stage 1, each of the $n-k$ components has had a load increase $d$ from the initial disturbance and an additional load increase $kp$ from the stage 0 failure of $k$ components. Therefore, the equivalent total initial disturbance for each of the $n-k$ components is $D = kp + d$.

To summarize, assuming $M_0 = k$, the failure of the $n-k$ components in stage 1 is governed by the model with initial disturbance $D = kp + d$, load transfer $P = p$, $L^{\min} = 0$, $L^{\max} = 1-d$, $L^{\text{fail}} = 1$, and $n-k$ components. Normalizing the parameters using Eq. (1) yields that the failure of the $n-k$ components is governed by the model with normalized initial disturbance $kp/(1-d)$ and normalized load transfer $p/(1-d)$; that is,

$$
P[S=r \mid M_0=k] = f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right). \tag{13}
$$

Combining Eqs. (12) and (13) yields the recursion

$$
\begin{aligned}
f(r,d,p,n) &= \sum_{k=0}^{r} P[S=r \mid M_0=k]\, P[M_0=k] \\
&= \sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} f\left(r-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right), \\
&\qquad 0 \le r \le n, \quad 0 < d < 1, \quad n > 0.
\end{aligned} \tag{14}
$$

Equations (10), (11), and (14) define $f(r,d,p,n) = P[S=r]$ for all $n \ge 0$ and $p \ge 0$. Equations (10) and (11) agree with Eq. (7). Moreover, the Appendix gives a routine proof that Eq. (7) satisfies recursion (14). Therefore, Eq. (7) is the distribution of $S$ in the CASCADE algorithm. Thus, the recursion offers a simple way to derive the saturating quasibinomial distribution that avoids complicated algebra or combinatorics. It is also straightforward to use Eqs. (10) and (14) to confirm by induction on $n$ that Eq. (7) is a probability distribution.
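The agreement between the recursion and the closed form can also be checked numerically. A sketch (function names are mine; the $0^0 \equiv 1$ and $0/0 \equiv 1$ conventions are supplied by Python's exponentiation and by the boundary cases):

```python
from math import comb

def phi(x):
    return min(max(x, 0.0), 1.0)

def f_closed(r, d, p, n):
    """Saturating quasibinomial formula, Eq. (7), for 0 < d < 1."""
    if r < n:
        return (comb(n, r) * phi(d) * (d + r * p) ** (r - 1)
                * phi(1 - d - r * p) ** (n - r))
    return 1.0 - sum(f_closed(s, d, p, n) for s in range(n))

def f_rec(r, d, p, n):
    """P[S = r] from the recursion: Eqs. (10), (11), and (14)."""
    if n == 0:                       # Eq. (10)
        return 1.0 if r == 0 else 0.0
    if d <= 0 or d >= 1:             # boundary cases, Eq. (11)
        if r == 0:
            return 1.0 - phi(d)
        if r == n:
            return phi(d)
        return 0.0
    return sum(comb(n, k) * d ** k * (1 - d) ** (n - k)
               * f_rec(r - k, k * p / (1 - d), p / (1 - d), n - k)
               for k in range(r + 1))   # Eq. (14)

print([round(f_rec(r, 0.2, 0.2, 6), 6) for r in range(7)])  # saturating case
```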
## 4.2. A Quasimultinomial Distribution

This subsection shows that the joint distribution of $M_0, M_1, \dots, M_{n-1}$ is quasimultinomial and hence derives Eq. (7). It is convenient throughout to assume $d \ge 0$, restrict $m_0, m_1, \dots$ to nonnegative integers, and write $s_i = m_0 + m_1 + \dots + m_i$ for $i = 0, 1, \dots$ and $s_{-1} = 0$.

Let $\alpha_0 = \phi(d)$, $\beta_0 = 1$, and, for $i=1,2,\dots$,

$$
\alpha_i = \phi \left( \frac{m_{i-1} p}{1 - d - s_{i-2} p} \right), \quad \beta_i = \phi(1 - d - s_{i-2} p). \tag{15}
$$

The identity

$$
\beta_i(1 - \alpha_i) = \beta_{i+1}, \quad i = 0, 1, 2, \dots, \tag{16}
$$

can be verified using $1 - \phi(x) = \phi(1-x)$ and $d \ge 0$ and considering all of the cases.
In step 2 of stage 0 in the CASCADE algorithm, the probability that the load increment of $d$ causes one of the components to fail is $\alpha_0 = \phi(d)$ and the probability of $m_0$ failures in the $n$ components is

$$
P[M_0 = m_0] = \binom{n}{m_0} \alpha_0^{m_0} (1-\alpha_0)^{n-m_0}. \tag{17}
$$

Consider the end of step 2 of stage $i \ge 1$ in the CASCADE algorithm. The failures that have occurred are $M_0 = m_0, M_1 = m_1, \dots, M_i = m_i$ and there are $n - s_i$ unfailed components, but the component loads have not yet been incremented by $m_i p$ in step 3.

Suppose that $d + s_{i-1}p < 1$. Then, conditioned on the $n - s_i$ components not yet having failed, the loads of the $n - s_i$ unfailed components are uniformly distributed in $[d + s_{i-1}p, 1]$. In step 3, the probability that the load increment of $m_i p$ causes one of the unfailed components to fail is $\alpha_{i+1}$ and the probability of $m_{i+1}$ failures in the $n - s_i$ unfailed components is

$$
P[M_{i+1} = m_{i+1} \mid M_i = m_i, \dots, M_0 = m_0]
= \binom{n-s_i}{m_{i+1}} \alpha_{i+1}^{m_{i+1}} (1-\alpha_{i+1})^{n-s_{i+1}}, \quad m_{i+1} = 0, 1, \dots, n-s_i. \tag{18}
$$

Suppose that $d + s_{i-1}p \ge 1$. Then, all of the components must have failed on a previous step and $P[M_{i+1} = m_{i+1} \mid M_i = m_i, \dots, M_0 = m_0] = 1$ for $m_{i+1} = 0$ and is zero otherwise. In this case, $\alpha_{i+1} = 0$ and Eq. (18) is verified.
We claim that for $s_i \le n$,

$$
P[M_i = m_i, \dots, M_0 = m_0]
= \frac{n!}{m_0!\, m_1! \cdots m_i!\, (n-s_i)!} (\alpha_0 \beta_0)^{m_0} (\alpha_1 \beta_1)^{m_1} \cdots (\alpha_i \beta_i)^{m_i} \beta_{i+1}^{n-s_i}. \tag{19}
$$

Equation (19) is proved by induction on $i$. For $i=0$, Eq. (19) reduces to Eq. (17). The inductive step is verified by multiplying Eqs. (18) and (19) and using Eq. (16) to obtain $P[M_{i+1} = m_{i+1}, \dots, M_0 = m_0]$ in the form of Eq. (19).
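Equation (19) can also be corroborated by simulation. In the sketch below (the function and variable names are mine), `stage_counts` simulates the stage failure counts $M_0, M_1, \dots$ of the normalized CASCADE model and `joint_prob` evaluates the right-hand side of Eq. (19); a Monte Carlo estimate of $P[M_0 = 2, M_1 = 1]$ for $n = 5$, $d = 0.2$, $p = 0.1$ should agree with the formula.

```python
import random
from math import factorial

def phi(x):
    return min(max(x, 0.0), 1.0)

def stage_counts(n, d, p, rng):
    """Simulate one run of the normalized CASCADE algorithm and return the
    stage failure counts M_0, M_1, ... (stopping after a stage with none)."""
    remaining = [rng.random() for _ in range(n)]
    counts, disturbance = [], d          # cumulative disturbance d + s_{i-1} p
    while True:
        failed = [l for l in remaining if l + disturbance > 1]
        remaining = [l for l in remaining if l + disturbance <= 1]
        counts.append(len(failed))
        if not failed or not remaining:
            return counts
        disturbance += len(failed) * p

def joint_prob(ms, n, d, p):
    """P[M_0 = m_0, ..., M_i = m_i] from Eq. (19)."""
    i = len(ms) - 1
    s = [0]
    for m in ms:
        s.append(s[-1] + m)              # s[k] = s_{k-1} = m_0 + ... + m_{k-1}
    prob = factorial(n) / factorial(n - s[-1])
    for m in ms:
        prob /= factorial(m)
    for j in range(i + 1):               # factors (alpha_j beta_j)^{m_j}
        if j == 0:
            ab = phi(d)                  # alpha_0 beta_0 = phi(d) * 1
        else:
            rem = 1 - d - s[j - 1] * p   # 1 - d - s_{j-2} p
            ab = (phi(ms[j - 1] * p / rem) if rem > 0 else 0.0) * phi(rem)
        prob *= ab ** ms[j]
    return prob * phi(1 - d - s[i] * p) ** (n - s[-1])   # beta_{i+1}^{n-s_i}

rng = random.Random(1)
trials = 10000
hits = 0
for _ in range(trials):
    c = stage_counts(5, 0.2, 0.1, rng)
    if len(c) > 1 and c[0] == 2 and c[1] == 1:
        hits += 1
print(hits / trials, joint_prob((2, 1), 5, 0.2, 0.1))  # should roughly agree
```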
An expression equivalent to Eq. (19) obtained using Eq. (16) is

$$
P[M_i = m_i, \dots, M_0 = m_0]
= \frac{n!}{m_0!\, m_1! \cdots m_i!\, (n-s_i)!} (\beta_0 - \beta_1)^{m_0} (\beta_1 - \beta_2)^{m_1} \cdots (\beta_i - \beta_{i+1})^{m_i} \beta_{i+1}^{n-s_i}. \tag{20}
$$

The CASCADE algorithm has the property that if there are no failures in stage $j$ so that $M_j = 0$, then $0 = M_j = M_{j+1} = \dots$ and there are no subsequent failures. This property is verified by Eq. (20) because $m_j = 0$ implies $\beta_{j+1} = \beta_{j+2}$ so that the factor $(\beta_{j+1} - \beta_{j+2})^{m_{j+1}} = 0^{m_{j+1}}$, which vanishes unless $m_{j+1} = 0$. Iterating this argument gives $0 = M_j = M_{j+1} = \dots$. Since the maximum number of failures is $n$, the longest sequence of failures has $n$ stages with $M_0 = M_1 = \dots = M_{n-1} = 1$. It follows that $0 = M_n = M_{n+1} = \dots$ and that the nontrivial part of the joint distribution is determined by $M_0, M_1, \dots, M_{n-1}$. It also follows that $M_{n-1} = 0$ if there are fewer than $n$ stages with failures.

Equation (20) can now be rewritten for $i=n-1$. Let $I$ be the largest integer not exceeding $n$ such that $1-d-s_{I-2}p > 0$. Then, Eq. (20) becomes, for $s_{n-1} \le n$,

$$
\begin{aligned}
P[M_{n-1} = m_{n-1}, \dots, M_0 = m_0]
&= \frac{n!}{m_0!\, m_1! \cdots m_{n-1}!\, (n-s_{n-1})!} (\phi(d))^{m_0} (m_0 p)^{m_1} (m_1 p)^{m_2} \cdots (m_{I-2} p)^{m_{I-1}} \\
&\qquad \times (\phi(1-d-s_{I-2}p))^{n-s_{I-1}} A(\mathbf{m}, I),
\end{aligned} \tag{21}
$$

where $A(\mathbf{m}, n) = 1$ and $A(\mathbf{m}, I) = 0^{m_{I+1}} \cdots 0^{m_{n-1}} 0^{n-s_{n-1}}$ for $I < n$. It follows from the definition of $A(\mathbf{m}, I)$ that Eq. (21) vanishes for $I < n$ unless $0 = M_{I+1} = \cdots = M_{n-1}$ and $S = M_0 + \cdots + M_I = n$. (Although Eq. (21) was derived assuming $d \ge 0$, it also holds for $d < 0$. In particular, for $d < 0$, Eq. (21) implies $P[M_{n-1} = 0, \dots, M_0 = 0] = 1$.)

Equation (21) generalizes the quasibinomial distribution and is a form of quasimultinomial distribution. It is a different generalization of the quasibinomial distribution than the quasitrinomial distribution considered by Berg and Mutafchiev [1] to describe numbers of nodes in central components of random mappings.

Suppose that $S = M_0 + \dots + M_{n-1} = r < n$. Then, $M_{n-1} = 0$ and $M_0 + \dots + M_{n-2} = r - M_{n-1} = r$, and Eq. (21) vanishes unless $I=n$. Summing Eq. (21) over nonnegative integers $m_0, \dots, m_{n-1}$ that sum to $r$ yields
$$
\begin{aligned}
P[S=r] &= \sum_{s_{n-1}=r} \frac{n!}{m_0!\, m_1! \cdots m_{n-1}!\, (n-r)!} (\phi(d))^{m_0} (m_0 p)^{m_1} \cdots (m_{n-2} p)^{m_{n-1}} \\
&\qquad \times (\phi(1-d-rp))^{n-r} \\
&= \binom{n}{r} (\phi(1-d-rp))^{n-r} p^r \sum_{s_{n-1}=r} \frac{r!}{m_0!\, m_1! \cdots m_{n-1}!} \left(\frac{\phi(d)}{p}\right)^{m_0} m_0^{m_1} \cdots m_{n-2}^{m_{n-1}},
\end{aligned}
$$

which reduces to Eq. (7) using a lemma by Katz [21]. (The context of Katz’s lemma assumes $\phi(d)/p$ is a positive integer, but the generalization is immediate.)
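The lemma as used here is the Abel-type identity $\sum r!/(m_0! \cdots m_{n-1}!)\, x^{m_0} m_0^{m_1} \cdots m_{n-2}^{m_{n-1}} = x(x+r)^{r-1}$, the sum running over nonnegative $m_0, \dots, m_{n-1}$ with $m_0 + \cdots + m_{n-1} = r$. It can be checked by brute force for small $n$ and $r$ (the function name is mine; Python's `0**0 == 1` supplies the needed convention):

```python
from itertools import product
from math import factorial

def katz_sum(x, r, n):
    """Brute-force sum over nonnegative m_0, ..., m_{n-1} with sum r of
    r!/(m_0! ... m_{n-1}!) x^{m_0} m_0^{m_1} m_1^{m_2} ... m_{n-2}^{m_{n-1}}."""
    total = 0.0
    for ms in product(range(r + 1), repeat=n):
        if sum(ms) != r:
            continue
        term = float(factorial(r)) * x ** ms[0]
        for m in ms:
            term /= factorial(m)
        for j in range(1, n):
            term *= ms[j - 1] ** ms[j]      # chains with a 0 before a
        total += term                       # positive m_j contribute 0
    return total

print(katz_sum(0.7, 4, 4), 0.7 * (0.7 + 4) ** 3)  # the two should agree
```

Note that $x = \phi(d)/p$ need not be an integer for the identity to hold.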
**4.3. Applying a Generalized Ballot Theorem**

Charalambides [10] explained how the quasibinomial distribution appears as a consequence of generalized ballot theorems in the theory of fluctuations of stochastic processes [32]. We summarize this approach and comment that it derives only the nonsaturating cases of Eq. (7).

We assume $0 < d < 1$. Consider $p$ multiplied by the number of components $N(t)$ with loads in $(1-t, 1]$. For $0 \le t \le 1$, $pN(t)$ is a stochastic process with interchangeable increments whose sample functions are nondecreasing step functions with $pN(0) = 0$. According to Eq. (6), the first passage time of $t - pN(t)$ through $d$ is $\min\{t \mid pN(t) = t - d\} = \min\{d + sp \mid N(d + sp) = s\} = d + Sp$. Then, according to Takács [32, Sect. 17, Thm. 4],

$$
P[d + Sp \le t] = \sum_{d \le y \le t} \frac{d}{y} P[pN(y) = y - d] \tag{22}
$$

for $0 < d \le t \le 1$; that is,

$$
\sum_{k=0}^{\lfloor (t-d)/p \rfloor} P[S=k] = \sum_{k=0}^{\lfloor (t-d)/p \rfloor} \frac{d}{d+kp} P[N(d+kp)=k]. \tag{23}
$$

Setting $t = d + rp$ in Eq. (23) for $r = 0, 1, \dots, \min\{n, (1-d)/p\}$, differencing the resulting equations, and using the binomial distribution of $N(t)$ for $0 \le t \le 1$ yields the nonsaturating cases of Eq. (7). However, the approach does not extend to the saturating cases because $pN(t)$ does not have interchangeable increments when $t > 1$.
**4.4. Approximate Power Tail Exponent at a Critical Case**

We describe standard approximations of the quasibinomial distribution that yield a power tail exponent at the critical case. For parameters satisfying $np + d \le 1$ (no saturation), the distribution of $S$ is quasibinomial and can be approximated by letting $n \to \infty$, $p \to 0$, and $d \to 0$ in such a way that $\lambda = np$ and $\theta = nd$ are fixed to give the generalized (or Lagrangian) Poisson distribution [12–14]

$$
P[S=r] \approx \theta(r\lambda + \theta)^{r-1} \frac{\exp(-r\lambda - \theta)}{r!}, \tag{24}
$$

which is the distribution of the number of offspring in a Galton–Watson–Bienaymé branching process, with the first generation produced by a Poisson distribution with parameter $\theta$ and subsequent generations produced by a Poisson distribution with parameter $\lambda$. The critical case for the branching process is $np = \lambda = 1$, and Otter [27] proved that at criticality, the distribution of the number of offspring has a power tail with exponent $-1.5$. Further implications for cascading failure of the branching process approximation are considered in Dobson, Carreras, and Newman [17].
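The quality of the limit can be checked numerically. The sketch below (function names are mine) compares the quasibinomial pmf (9) with the generalized Poisson approximation (24) for a moderately large $n$, with $\theta = nd$ and $\lambda = np$ held fixed:

```python
from math import comb, exp, factorial

def quasibinomial(r, n, d, p):
    """Quasibinomial pmf, Eq. (9) (nonsaturating: d + n p <= 1)."""
    return comb(n, r) * d * (d + r * p) ** (r - 1) * (1 - d - r * p) ** (n - r)

def generalized_poisson(r, lam, theta):
    """Generalized (Lagrangian) Poisson pmf, Eq. (24)."""
    return theta * (r * lam + theta) ** (r - 1) * exp(-r * lam - theta) / factorial(r)

# theta = n d and lambda = n p held fixed as n grows
n, theta, lam = 2000, 1.0, 0.9
for r in (0, 1, 5, 20):
    print(r, quasibinomial(r, n, theta / n, lam / n), generalized_poisson(r, lam, theta))
```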
## 5. EFFECT OF LOADING

How much can an electric power transmission system be loaded before there is undue risk of cascading failure? This section discusses qualitative effects of loading on the distribution of blackout size and then applies the model to describe the effect of loading and illustrate its use.

### 5.1. Distribution of Blackout Size at Extremes of Loading

Consider cascading failure in a power transmission system in the impractically extreme cases of very low and very high loading. At very low loading near zero, any failures that occur have minimal impact on other components and these other components have large operating margins. Multiple failures are possible, but they are approximately independent so that the probability of multiple failures is approximately the product of the probabilities of each of the failures. Since the blackout size is roughly proportional to the number of failures, the probability distribution of the blackout size will have an exponential tail. The probability distribution of the blackout size would be different if the power system were operated recklessly at a very high loading in which every component was close to its loading limit. Then, any initial disturbance would necessarily cause a cascade of failures leading to total or near total blackout. It is clear that the probability distribution of the blackout size must somehow change continuously from the exponential tail form to the certain total blackout form as loading increases from a very low to a very high loading. We are interested in the nature of the transition between these two extremes.

### 5.2. Effect of Loading in the Model

This subsection describes one way to represent a load increase in the model and how this leads to a parameterization of the normalized model. Then the effect of the load increase on the distribution of the number of components failed is described.

For purposes of illustration, the system has $n = 1000$ components. Suppose that the system is operated so that the initial component loadings vary from $L^{\min}$ to $L^{\max} = L^{\text{fail}} = 1$. Then the average initial component loading $L = (L^{\min} + 1)/2$ may be increased by increasing $L^{\min}$. The initial disturbance $D = 0.0004$ is assumed to be the same as the load transfer amount $P = 0.0004$. These modeling choices for component load lead, via the normalization of Eq. (1), to the parameterization $p = d = 0.0004/(2 - 2L)$, $0.5 \le L < 1$. The increase in the normalized load transfer $p$ with increased $L$ can be thought of as strengthening the component interactions that cause cascading failure.

The probability distribution of the number $S$ of components failed as $L$ increases from 0.6 is shown in Figure 1. The distribution for the nonsaturating case $L = 0.6$ has a tail that is approximately exponential. The tail becomes heavier as $L$ increases, and the distribution for the critical case $L = 0.8$, $np = 1$ has an approximate power-law region over a range of $S$. The power-law region has an exponent of approximately $-1.4$ and this compares to the exponent of $-1.5$ obtained by the analytic approximation in Section 4.4. The distribution for the saturated case $L = 0.9$ has an approximately exponential tail for small $r$, zero probability of intermediate $r$, and a probability of 0.80 of all 1000 components failing. If an intermediate number of components fail in a saturated case, then the cascade always proceeds to all 1000 components failing.
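The Figure 1 computation can be reproduced directly from Eq. (7) and the parameterization $p = d = 0.0004/(2 - 2L)$. A sketch (`pmf_loading` and `DP` are my names; probabilities are computed in logs to avoid underflow at $n = 1000$), whose values of $P[S=0]$ and $P[S=n]$ match those quoted in the Figure 1 caption:

```python
from math import exp, lgamma, log

def log_comb(n, r):
    # log of the binomial coefficient C(n, r)
    return lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)

def pmf_loading(n, L, DP=0.0004):
    """Distribution of S, Eq. (7), under the Section 5 parameterization
    p = d = DP / (2 - 2L), with 0 < d < 1 so that phi(d) = d."""
    d = p = DP / (2 - 2 * L)
    probs = []
    for r in range(n):
        x = 1 - d - r * p                      # argument of phi(1 - d - r p)
        if x <= 0:
            probs.append(0.0)                  # saturated: phi(...) = 0
            continue
        lp = (log_comb(n, r) + log(d) + (r - 1) * log(d + r * p)
              + (n - r) * log(min(x, 1.0)))
        probs.append(exp(lp))
    probs.append(max(1.0 - sum(probs), 0.0))   # P[S = n] by complement
    return probs

for L in (0.6, 0.8, 0.9):
    pmf = pmf_loading(1000, L)
    print(L, pmf[0], pmf[-1])                  # P[no failures], P[all fail]
```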
The increase in the mean number of failures $ES$ as the average initial component loading $L$ is increased is shown in Figure 2. The sharp change in gradient at the critical loading $L = 0.8$ corresponds to the saturation of Eq. (7) and the consequent increasing probability of all components failing. Indeed, at $L = 0.8$, the change in gradient in Figure 2, together with the power-law region in the distribution of $S$ in Figure 1, suggests a type 2 phase transition in the system. If we interpret the number of components failed as corresponding to blackout size, the power-law region is consistent with North American blackout data and blackout simulation results [4,8,18]. In particular, North American blackout data suggest an empirical distribution of blackout size with a power tail with exponent between $-1$ and $-2$ [6,7,8]. This power tail indicates a significant risk of large blackouts that is not present when the distribution of blackout sizes has an exponential tail [5].

**FIGURE 1.** Log-log plot of distribution of number of components failed $S$ for three values of average initial load $L$. Note the power-law region for the critical loading $L = 0.8$. $L = 0.9$ has an isolated point at (1000, 0.80), indicating probability 0.80 of all 1000 components failed. The probability of no failures is 0.61 for $L = 0.6$, 0.37 for $L = 0.8$, and 0.14 for $L = 0.9$.

**FIGURE 2.** Mean number of components failed $ES$ as a function of average initial component loading $L$. Note the change in gradient at the critical loading $L = 0.8$. There are $n = 1000$ components and $ES$ becomes 1000 at the highest loadings.

The model results show how system loading can influence the risk of cascading failure. At low loading, there is an approximately exponential tail in the distribution of number of components failed and a low risk of large cascading failure. There is a critical loading at which there is a power-law region in the distribution of number of components failed and a sharp increase in the gradient of the mean number of components failed. As loading is increased past the critical loading, the distribution of number of components failed saturates, there is an increasingly significant probability of all components failing, and there is a significant risk of large cascading failure.
**Acknowledgments**

The work was coordinated by the Consortium for Electric Reliability Technology Solutions and funded in part by the Assistant Secretary for Energy Efficiency and Renewable Energy, Office of Power Technologies, Transmission Reliability Program of the U.S. Department of Energy under contract 9908935 and Interagency Agreement DE-A1099EE35075 with the National Science Foundation. The work was funded in part by NSF grants ECS-0214369 and ECS-0216053. Part of this research has been carried out at Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

References
1. Berg, S. & Mutafchiev, L. (1990). Random mappings with an attracting center: Lagrangian distributions and a regression function. *Journal of Applied Probability* 27: 622–636.

2. Billington, R. & Allan, R.N. (1996). *Reliability evaluation of power systems*, 2nd ed. New York: Plenum Press.

3. Burtin, Y.D. (1980). On a simple formula for random mappings and its applications. *Journal of Applied Probability* 17: 403–414.

4. Carreras, B.A., Lynch, V.E., Dobson, I., & Newman, D.E. (2002). Critical points and transitions in an electric power transmission model for cascading failure blackouts. *Chaos* 12(4): 985–994.

5. Carreras, B.A., Lynch, V.E., Newman, D.E., & Dobson, I. (2003). Blackout mitigation assessment in power transmission systems. In *36th Hawaii International Conference on System Sciences*.

6. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2001). Evidence for self-organized criticality in electric power system blackouts. In *34th Hawaii International Conference on System Sciences*.

7. Carreras, B.A., Newman, D.E., Dobson, I., & Poole, A.B. (2004). Evidence for self-organized criticality in a time series of electric power system blackouts. *IEEE Transactions on Circuits and Systems I: Regular Papers* 51(9): 1733–1740.

8. Chen, J., Thorp, J.S., & Parashar, M. (2001). Analysis of electric power disturbance data. In *34th Hawaii International Conference on System Sciences*.

9. Chen, J. & Thorp, J.S. (2002). A reliability study of transmission system protection via a hidden failure DC load flow model. In *IEE Fifth International Conference on Power System Management and Control*, pp. 384–389.

10. Charalambides, Ch.A. (1990). Abel series distributions with applications to fluctuations of sample functions of stochastic processes. *Communications in Statistics: Theory and Methods* 19(1): 317–335.

11. Consul, P.C. (1974). A simple urn model dependent upon predetermined strategy. *Sankhyā: The Indian Journal of Statistics, Series B* 36(4): 391–399.

12. Consul, P.C. (1988). On some models leading to a generalized Poisson distribution. *Communications in Statistics: Theory and Methods* 17(2): 423–442.

13. Consul, P.C. (1989). *Generalized Poisson distributions*. New York: Marcel Dekker.

14. Consul, P.C. & Shoukri, M.M. (1988). Some chance mechanisms leading to a generalized Poisson probability model. *American Journal of Mathematical and Management Sciences* 8(1&2): 181–202.

15. DeMarco, C.L. (2001). A phase transition model for cascading network failure. *IEEE Control Systems Magazine* 21(6): 40–51.

16. Dobson, I., Carreras, B.A., & Newman, D.E. (2003). A probabilistic loading-dependent model of cascading failure and possible implications for blackouts. In *36th Hawaii International Conference on System Sciences*.

17. Dobson, I., Carreras, B.A., & Newman, D.E. (2004). A branching process approximation to cascading load-dependent system failure. In *37th Hawaii International Conference on System Sciences*.

18. Dobson, I., Chen, J., Thorp, J.S., Carreras, B.A., & Newman, D.E. (2002). Examining criticality of blackouts in power system models with cascading events. In *35th Hawaii International Conference on System Sciences*.

19. Islam, M.N., O'Shaughnessy, C.D., & Smith, B. (1996). A random graph model for the final-size distribution of household infections. *Statistics in Medicine* 15: 837–843.

20. Jaworski, J. (1998). Predecessors in a random mapping. *Random Structures and Algorithms* 14: 501–519.

21. Katz, L. (1955). Probability of indecomposability of a random mapping function. *Annals of Mathematical Statistics* 26: 512–517.

22. Kloster, M., Hansen, A., & Hemmer, P.C. (1997). Burst avalanches in solvable models of fibrous materials. *Physical Review E* 56(3).

23. Kosterev, D.N., Taylor, C.W., & Mittelstadt, W.A. (1999). Model validation for the August 10, 1996 WSCC system outage. *IEEE Transactions on Power Systems* 14(3): 967–979.

24. Lindley, D.V. & Singpurwalla, N.D. (2002). On exchangeable, causal and cascading failures. *Statistical Science* 17(2): 209–219.

25. NERC (North American Electric Reliability Council) (2002). *1996 system disturbances*. Princeton, NJ: NERC.

26. Ni, M., McCalley, J.D., Vittal, V., & Tayyib, T. (2003). Online risk-based security assessment. *IEEE Transactions on Power Systems* 18(1): 258–265.

27. Otter, R. (1949). The multiplicative process. *Annals of Mathematical Statistics* 20: 206–224.

28. Parrilo, P.A., Lall, S., Paganini, F., Verghese, G.C., Lesieutre, B.C., & Marsden, J.E. (1999). Model reduction for analysis of cascading failures in power systems. *Proceedings of the American Control Conference* 6: 4208–4212.

29. Pepyne, D.L., Panayiotou, C.G., Cassandras, C.G., & Ho, Y.-C. (2001). Vulnerability assessment and allocation of protection resources in power systems. *Proceedings of the American Control Conference* 6: 4705–4710.

30. Rios, M.A., Kirschen, D.S., Jayaweera, D., Nedic, D.P., & Allan, R.N. (2002). Value of security: modeling time-dependent phenomena and weather conditions. *IEEE Transactions on Power Systems* 17(3): 543–548.

31. Roy, S., Asavathiratham, C., Lesieutre, B.C., & Verghese, G.C. (2001). Network models: growth, dynamics, and failure. In *34th Hawaii International Conference on System Sciences*, pp. 728–737.

32. Takács, L. (1967). *Combinatorial methods in the theory of stochastic processes*. New York: Wiley.

33. U.S.–Canada Power System Outage Task Force (2004). *Final Report on the August 14th blackout in the United States and Canada*. United States Department of Energy and Natural Resources Canada.

34. Watts, D.J. (2002). A simple model of global cascades on random networks. *Proceedings of the National Academy of Sciences USA* 99(9): 5766–5771.
# APPENDIX

## Saturating Quasibinomial Formula Satisfies Recursion

We prove that the saturating quasibinomial formula (7) satisfies recursion (14) for $0 < d < 1$ and $n > 0$.

In the case $d + rp < 1$ and $r < n$, since

$$
d + rp < 1 \Leftrightarrow \frac{kp}{1-d} + (r-k) \frac{p}{1-d} < 1, \tag{25}
$$

none of the instances of $f$ in the right-hand side of Eq. (14) saturate so that the right-hand side of Eq. (14) becomes

$$
\begin{aligned}
&\sum_{k=0}^{r} \binom{n}{k} d^k (1-d)^{n-k} \binom{n-k}{r-k} \frac{kp}{1-d} \left(\frac{rp}{1-d}\right)^{r-k-1} \left(1 - \frac{rp}{1-d}\right)^{n-r} \\
&\qquad = \binom{n}{r} \sum_{k=0}^{r} \binom{r}{k} \frac{k}{r}\, d^k (rp)^{r-k} (1-d-rp)^{n-r} = \binom{n}{r} d(d+rp)^{r-1} (1-d-rp)^{n-r}.
\end{aligned}
$$

In the case $d + rp \ge 1$ and $r < n$, Eq. (25) and $r - k < n - k$ imply that all of the instances of $f$ in the right-hand side of Eq. (14) vanish.

In the case $r=n$, substituting the expression from Eq. (7) for $f(n-k, kp/(1-d), p/(1-d), n-k)$ into the right-hand side of Eq. (14) leads to

$$
1 - \sum_{t=0}^{n-1} \sum_{k=0}^{t} \binom{n}{k} d^k (1-d)^{n-k} f\left(t-k, \frac{kp}{1-d}, \frac{p}{1-d}, n-k\right) = 1 - \sum_{s=0}^{n-1} f(s,d,p,n),
$$

where the last step uses the result established above that Eq. (7) satisfies Eq. (14) for $r < n$.
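The final step of the first case uses the identity $\sum_{k=0}^{r} \binom{r}{k} \frac{k}{r}\, d^k (rp)^{r-k} = d(d+rp)^{r-1}$, a derivative form of the binomial theorem (differentiate $(d+y)^r$ in $d$, multiply by $d/r$, and set $y = rp$). A quick numerical check (the function name is mine):

```python
from math import comb

def abel_sum(d, p, r):
    """Left-hand side sum_{k=0}^{r} C(r,k) (k/r) d^k (r p)^{r-k}
    (the k = 0 term vanishes, so k/r is harmless for r >= 1)."""
    return sum(comb(r, k) * (k / r) * d ** k * (r * p) ** (r - k)
               for k in range(r + 1))

for d, p, r in [(0.1, 0.05, 4), (0.3, 0.2, 7)]:
    print(abel_sum(d, p, r), d * (d + r * p) ** (r - 1))  # should match
```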
# Face Recognition with One Sample Image per Class
|
| 5 |
+
|
| 6 |
+
Shaokang Chen
|
| 7 |
+
Intelligent Real-Time Imaging and
|
| 8 |
+
Sensing (IRIS) Group
|
| 9 |
+
The University of Queensland
|
| 10 |
+
Brisbane, Queensland, Australia
|
| 11 |
+
shaokang@itee.uq.edu.au
|
| 12 |
+
|
| 13 |
+
Brian C. Lovell
|
| 14 |
+
Intelligent Real-Time Imaging and
|
| 15 |
+
Sensing (IRIS) Group
|
| 16 |
+
The University of Queensland
|
| 17 |
+
Brisbane, Queensland, Australia
|
| 18 |
+
lovell@itee.uq.edu.au
|
| 19 |
+
|
| 20 |
+
## Abstract
|
| 21 |
+
|
| 22 |
+
There are two main approaches to face recognition under varying lighting conditions. One is to represent images with features that are insensitive to illumination in the first place. The other is to construct a linear subspace for every class under the different lighting conditions. Both techniques have been applied to face recognition with some success, but they are hard to extend to recognition under varying facial expressions. It is observed that features insensitive to illumination are highly sensitive to expression variations, which makes face recognition under changes in both lighting conditions and expressions a difficult task. We propose a new method, called Affine Principal Component Analysis, in an attempt to solve both of these problems. The method extracts features to construct a subspace for face representation and warps this space to achieve better class separation. The proposed technique is evaluated on face databases with both variable lighting and facial expressions. We achieve more than 90% accuracy for face recognition using only one sample image per class.
|
| 23 |
+
|
| 24 |
+
## 1. Introduction
|
| 25 |
+
|
| 26 |
+
One of the difficulties in face recognition (FR) is the large variation between images of the same face due to changes in lighting conditions, viewpoint or facial expression. A good face recognition system should recognize faces while being as immune to these variations as possible. Yet it has been reported in [19] that differences between images of the same face due to these variations are normally greater than those between different faces. Therefore, most of the systems designed to date can only deal with face images taken under constrained conditions. So these major problems must be
|
| 27 |
+
|
| 28 |
+
overcome in the quest to produce robust face recognition systems.
|
| 29 |
+
|
| 30 |
+
In the past few years, different approaches have been proposed to reduce the impact of these nuisance factors on face recognition. Two main approaches are used for illumination-invariant recognition. One is to represent images with features that are less sensitive to illumination changes, such as edge maps of the image. But edges generated by shadows depend on the illumination and may still affect recognition; experiments in [19] show that even with the best illumination-insensitive image representations and distance measures, the misclassification rate is more than 20%. The second approach, presented in [21] and [22], builds on the result that images of convex Lambertian objects under different lighting conditions can be approximated by a low-dimensional linear subspace. Kriegman, Belhumeur and Georghiades proposed an appearance-based method [7] for recognizing faces under variations in lighting and viewpoint based on this idea. Nevertheless, these methods all assume that the reflectance of the human face is Lambertian, and they have difficulty with cast shadows. Furthermore, they need several images of the same face taken under different lighting directions to construct a model of a given face, and it is sometimes hard to obtain images of a given face under such specific conditions.
|
| 31 |
+
|
| 32 |
+
As for expression-invariant recognition, the problem remains unsolved for machines and is difficult even for humans. In [23] and [24], images are morphed to the same shape as the one used for training. But it is not guaranteed that all images can be morphed correctly; for example, an image with closed eyes cannot be morphed to a neutral image because of the lack of texture inside the eyes. It is also hard to learn the local motions within the feature space that determine the expression changes of each face, since the way one person expresses a certain emotion normally differs somewhat from
|
| 33 |
+
---PAGE_BREAK---
|
| 34 |
+
|
| 35 |
+
others. Martinez proposed a method to deal with variations in facial expression in [20]. An image is divided into several local areas, and those that are less sensitive to expression changes are chosen and weighted independently. But features that are insensitive to expression changes may be sensitive to illumination variation. This is discussed in [19], which notes that "when a given representation is sufficient to overcome a single image variation, it may still be affected by other processing stages that control other imaging parameters".
|
| 36 |
+
|
| 37 |
+
It is known that the performance of face recognition systems depends acutely on the choice of features [3], which is thus the key step in the recognition methodology. Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD) [1] are two well-known statistical feature extraction techniques for face recognition. PCA, a standard decorrelation technique, derives an orthogonal projection basis that allows faces to be represented in a vastly reduced feature space; this dimensionality reduction increases generalisation ability. PCA finds a set of orthogonal features that provide a maximally compact representation of the majority of the variation in the facial data. But PCA may extract noise features that degrade the performance of the system. For this reason, Swets and Weng [8] argue in favor of methods such as FLD, which seek the most discriminatory features by taking into account both within-class and between-class variation to derive the Most Discriminating Features (MDF). However, compared to PCA, FLD has been shown to overfit the training data, resulting in a lack of generalization ability [2].
|
| 38 |
+
|
| 39 |
+
We propose a new method, Affine Principal Component Analysis (APCA), that can deal with variations in both illumination and facial expression. This paper describes APCA and presents results showing that its recognition performance greatly exceeds that of both PCA and FLD when recognizing known faces with unknown changes in illumination and expression.
|
| 40 |
+
|
| 41 |
+
## 2. Review of PCA & FLD
|
| 42 |
+
|
| 43 |
+
PCA and FLD are two popular techniques for face recognition. They extract features from training face images to generate orthogonal sets of feature vectors, which span a subspace of the face images. Recognition is then performed within this space based on some distance metric (typically Euclidean).
|
| 44 |
+
|
| 45 |
+
### 2.1. PCA (Principal Component Analysis)
|
| 46 |
+
|
| 47 |
+
PCA is a second-order method that finds a linear representation of the faces using only the covariance of the data, determining the set of orthogonal components (feature vectors) which minimise the reconstruction error for a given number of feature vectors. Consider the face image set $I = [I_1, I_2, ..., I_n]$, where $I_i$ is a $p \times q$ image, $i \in [1..n]$ and $p, q, n \in \mathbb{Z}^+$. The average face $\Psi$ of the image set is defined by:
|
| 48 |
+
|
| 49 |
+
$$ \Psi = \frac{1}{n} \sum_{i=1}^{n} I_i . \quad (1) $$
|
| 50 |
+
|
| 51 |
+
Normalizing each image by subtracting the average face, we have the normalized difference image:
|
| 52 |
+
|
| 53 |
+
$$ \tilde{D}_i = I_i - \Psi . \quad (2) $$
|
| 54 |
+
|
| 55 |
+
Unpacking $\tilde{D}_i$ row-wise, we form the $N$ ($N = p \times q$) dimensional column vector $d_i$. We define the covariance matrix $C$ of the normalized image set $D = [d_1, d_2, ..., d_n]$ by:
|
| 56 |
+
|
| 57 |
+
$$ C = \sum_{i=1}^{n} d_i d_i^T = DD^T \quad (3) $$
|
| 58 |
+
|
| 59 |
+
An eigendecomposition of $C$ yields eigenvalues $\lambda_i$ and eigenvectors $u_i$ which satisfy:
|
| 60 |
+
|
| 61 |
+
$$ Cu_i = \lambda_i u_i, \quad (4) $$
|
| 62 |
+
|
| 63 |
+
$$ DD^T = C = \sum_{i=1}^{N} \lambda_i u_i u_i^T, \quad (5) $$
|
| 64 |
+
|
| 65 |
+
where $i \in [1..N]$. Since the eigenvectors obtained look like human faces when displayed as images, they are also called eigenfaces. Generally, we select a small subset of $m < n$ eigenvectors to define a reduced-dimensionality facespace that yields higher recognition performance on unseen examples of faces. Choosing $m = 10$ or thereabouts seems to yield good performance in practice. Although PCA defines the face subspace that captures the greatest covariance, it is not necessarily the best choice for classification, since it may retain principal components dominated by noise and nuisance factors [2].
|
| 66 |
+
|
| 67 |
+
### 2.2. FLD (Fisher Linear Discriminant)
|
| 68 |
+
|
| 69 |
+
FLD finds the optimum projection for classification of the training data by simultaneously diagonalizing the within-class and between-class scatter matrices [2]. The FLD procedure consists of two operations: whitening and diagonalization [2]. Given $M$ classes $S_j$, $j \in [1...M]$, we denote the exemplars of class $j$ by $s_{j,1}, s_{j,2}, ..., s_{j,K_j}$, where $K_j$ is the number of exemplars in class $j$. Let $\mu_j$ denote the mean of class $j$
|
| 70 |
+
---PAGE_BREAK---
|
| 71 |
+
|
| 72 |
+
and $\bar{\mu}$ denote the grand mean for all the exemplars, then
|
| 73 |
+
the between class scatter matrix is defined by:
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
B = \sum_{j=1}^{M} K_j (\mu_j - \bar{\mu})(\mu_j - \bar{\mu})^T, \quad (6)
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
and the within class scatter matrix is defined by:
|
| 80 |
+
|
| 81 |
+
$$
W = \sum_{j=1}^{M} \sum_{k=1}^{K_j} (s_{j,k} - \mu_j)(s_{j,k} - \mu_j)^T. \quad (7)
$$

FLD then seeks the projection $A$ that maximises the ratio of between-class to within-class scatter:

$$
W_{FLD} = \arg \max_A \frac{|A^T B A|}{|A^T W A|}. \quad (8)
$$
|
| 92 |
+
|
| 93 |
+
In other words, FLD extracts features that vary strongly between classes but weakly within each class. While FLD often yields higher recognition performance than PCA, it tends to overfit the training data, since it relies heavily on the within-class scatter capturing reliable variations for each specific class [2]. In addition, it is optimised for the specific classes in the training set, so it needs several samples per class, and the rank of the between-class scatter limits it to at most $M-1$ features.
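The scatter matrices of Eqs. (6)-(8) can be sketched on synthetic features (a toy illustration under our own assumptions, not the paper's data); the optimum of Eq. (8) is obtained from the generalized eigenproblem $Ba = \lambda Wa$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, m = 3, 10, 5                       # classes, samples per class, feature dimension
means = rng.normal(0, 3, (M, m))
S = means[:, None, :] + rng.normal(0, 1, (M, K, m))   # samples s_{j,k}

mu = S.mean(axis=1)                      # class means mu_j
grand = S.reshape(-1, m).mean(axis=0)    # grand mean

# Between-class scatter, Eq. (6), and within-class scatter, Eq. (7).
B = sum(K * np.outer(mu[j] - grand, mu[j] - grand) for j in range(M))
W = sum(np.outer(S[j, k] - mu[j], S[j, k] - mu[j])
        for j in range(M) for k in range(K))

# Eq. (8): solve B a = lambda W a and keep the top M-1 directions.
evals, evecs = np.linalg.eig(np.linalg.inv(W) @ B)
order = np.argsort(-np.real(evals))
A = np.real(evecs[:, order[:M - 1]])     # B has rank at most M-1
print(A.shape)                           # (5, 2)
```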
|
| 94 |
+
|
| 95 |
+
## 3. Proposed Method

An Affine PCA method is introduced in this section in an attempt to overcome some of the limitations of both PCA and FLD. First, we apply PCA for dimensionality reduction and to obtain the eigenfaces $U$. Every face image can be projected into this subspace to form an $m$-dimensional feature vector $s_{j,k}$, where $m < n$ is the number of principal eigenfaces chosen for the projection, $k = 1, 2, ..., K_j$ indexes the samples of class $S_j$, and $j = 1, 2, ..., M$. We often use the nearest-neighbour method for classification, where the distance between two face vectors represents the energy difference between them. Under variable illumination, lighting changes dominate the characteristic differences between faces. It has also been shown in [19] that distances between face vectors due to facial expression variations are generally greater than those due to face identity. This is the main reason why PCA does not work well under variable lighting and expression. In fact, not all features are equally important for recognition: features that vary strongly between classes and weakly within each class are much more useful for the recognition task. We therefore propose an affine model (Affine PCA) to resolve this problem. The affine procedure involves three steps: eigenspace rotation, whitening transformation and eigenface filtering.
|
| 122 |
+
|
| 123 |
+
### 3.1. Eigenspace Rotation
|
| 124 |
+
|
| 125 |
+
The eigenfaces extracted by PCA are Most Expressive Features (MEF) and, as stated in [8], these are not necessarily optimal for face recognition performance. Applying FLD we can obtain the Most Discriminating Features, but FLD overfits to the training data and lacks generalization capacity. Therefore, in order not to lose generalization ability while still keeping the discrimination, we prefer to rotate the space and find the most variant features, those that best represent changes due to lighting or expression variation. That is, we extract the within-class covariance and apply PCA to find the eigenfeatures that maximally represent within-class variations. The within-class difference matrix, whose columns are the centred samples, is defined as:

$$
D_{Within} = \left[\, s_{j,k} - \mu_j \,\right]_{j = 1, \dots, M;\; k = 1, \dots, K_j}, \qquad (9)
$$
|
| 142 |
+
|
| 143 |
+
and the within-class covariance becomes:
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
Cov_{Within} = D_{Within} D_{Within}^{T}, \quad (10)
|
| 147 |
+
$$
|
| 148 |
+
|
| 149 |
+
which is an $m \times m$ matrix. Applying singular value decomposition (SVD) to the within-class covariance matrix, we have
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
Cov_{Within} = USV^T = \sum_{i=1}^{m} \sigma_i v_i v_i^T .
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
The rotation matrix $M$ is then the set of eigenvectors of the covariance matrix, $M = [v_1, v_2, ..., v_m]$. All vectors represented in the original subspace are transformed into the new space by multiplying by $M$.
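A sketch of the rotation step on synthetic PCA features (sizes and names are illustrative; the paper's rotation matrix $M$ is written `R` here to avoid clashing with the number of classes):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, m = 4, 5, 8                         # classes, samples per class, PCA dimension
S = rng.normal(0, 1, (M, K, m))           # feature vectors s_{j,k} after PCA

mu = S.mean(axis=1, keepdims=True)        # class means mu_j
D_within = (S - mu).reshape(-1, m).T      # columns are s_{j,k} - mu_j, Eq. (9)

Cov_within = D_within @ D_within.T        # m x m within-class covariance, Eq. (10)
U, sigma, Vt = np.linalg.svd(Cov_within)  # SVD of the symmetric covariance
R = Vt.T                                  # rotation matrix [v_1, ..., v_m]

S_rot = S @ R                             # every feature vector in the rotated space
print(S_rot.shape)                        # (4, 5, 8)
```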
|
| 161 |
+
|
| 162 |
+
### 3.2. Whitening Transformation
|
| 163 |
+
|
| 164 |
+
The purpose of whitening is to normalize the scatter matrix for uniform gain control. Since, as stated in [3], "mean square error underlying PCA preferentially weights low frequencies", we need to compensate for this. The whitening parameter $\Gamma$ is related to the eigenvalues $\lambda_i$. Conventionally, one would whiten with the standard deviation, that is, $\Gamma_i = \sqrt{\lambda_i}$, $i = [1...m]$. But this value appears to compress the eigenspace so much that class separability is diminished. We therefore use $\Gamma_i = \lambda_i^{p}$, where the exponent $p$ is determined empirically.
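A minimal sketch of the whitening weights (the eigenvalues below are made up): with a negative exponent such as the $p = -0.2$ found later in the text, directions with large within-class variance receive the smallest weights.

```python
import numpy as np

eigvals = np.array([50.0, 10.0, 2.0, 0.5])  # illustrative within-class eigenvalues
p = -0.2                                    # exponent value reported in the text
Gamma = eigvals ** p                        # Gamma_i = lambda_i^p
# The largest-variance direction gets the smallest weight.
print(Gamma.round(3))
```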
|
| 175 |
+
|
| 176 |
+
### 3.3. Filtering the Eigenfaces
|
| 177 |
+
|
| 178 |
+
The aim of filtering is to diminish the contribution of eigenfaces that are strongly affected by variations. We want to enhance features that capture the main differences between classes (faces) while diminishing the contribution of those that are largely due to lighting or

---PAGE_BREAK---

expression variation (within-class differences). We thus define a filtering parameter $\Lambda$ related to the identity-to-variation (ITV) ratio. The ITV measures, for each eigenface, how strongly it responds to a change in person versus a change in variation. For an $M$-class problem, assume that for each of the $M$ classes (persons) we have examples under $K$ standardized variations in illumination or expression. For illumination changes, the lighting source is positioned in front, above, below, left and right, as illustrated in Figure 1. The facial expression changes are normal, surprised and unpleasant, as shown in Figure 2. Let us denote the $i$-th feature (eigenface coefficient) of the $k$-th sample for class (person) $S_j$ by $s_{i,j,k}$. Then
|
| 200 |
+
|
| 201 |
+
$$
|
| 202 |
+
\begin{align}
|
| 203 |
+
ITV_i &= \frac{\text{Between Class Scatter}}{\text{Within Class Scatter}} \nonumber \\
|
| 204 |
+
&= \frac{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \bar{\sigma}_{i,k}|}{\frac{1}{M} \sum_{j=1}^{M} \frac{1}{K} \sum_{k=1}^{K} |s_{i,j,k} - \mu_{i,j}|}, \tag{11}
|
| 205 |
+
\end{align}
|
| 206 |
+
$$
|
| 207 |
+
|
| 208 |
+
$$
|
| 209 |
+
\bar{\sigma}_{i,k} = \frac{1}{M} \sum_{j=1}^{M} s_{i,j,k},
|
| 210 |
+
$$
|
| 211 |
+
|
| 212 |
+
and
|
| 213 |
+
|
| 214 |
+
$$
|
| 215 |
+
\mu_{i,j} = \frac{1}{K} \sum_{k=1}^{K} s_{i,j,k}, \quad i = [1 \cdots m].
|
| 216 |
+
$$
|
| 217 |
+
|
| 218 |
+
Here $\bar{\sigma}_{i,k}$ represents the i-th element of the mean face vector for variation $k$ for all persons and $\mu_{i,j}$ represents the i-th element of the mean face vector for person $j$ under all different variations. We then define the scaling parameter $\Lambda$ by:
|
| 219 |
+
|
| 220 |
+
$$
|
| 221 |
+
\Lambda_i = ITV_i^q \quad (12)
|
| 222 |
+
$$
|
| 223 |
+
|
| 224 |
+
where $q$ is an exponential scaling factor determined empirically, as before. Instead of this exponential scaling, other non-linear functions such as thresholding suggest themselves. These possibilities have been explored, but so far the exponential scaling performs best. After the affine transformation, the distance $d$ between two face vectors $s_{j,k}$ and $s_{j',k'}$ is:
|
| 231 |
+
|
| 232 |
+
$$
|
| 233 |
+
d_{jj',kk'} = \sqrt{\sum_{i=1}^{m} [\omega_i (s_{i,j,k} - s_{i,j',k'})]^2}, \quad (13)
|
| 234 |
+
$$
|
| 235 |
+
|
| 236 |
+
$$
|
| 237 |
+
\omega_i = \Gamma_i \Lambda_i / |\Gamma \Lambda^T|.
|
| 238 |
+
$$
|
| 239 |
+
|
| 240 |
+
The weights $\omega_i$ scale the corresponding eigenfaces. To determine the two exponents $p$ and $q$ for $\Gamma$ and $\Lambda$, we introduce a cost function and optimise them empirically. It is defined by:
|
| 241 |
+
|
| 242 |
+
$$
|
| 243 |
+
OPT = \sum_{j=1}^{M} \sum_{k=1}^{K} \sum_{m \,:\, d_{jm,k0} < d_{jj,k0}} \frac{d_{jj,k0}}{d_{jm,k0}}, \quad (14)
|
| 244 |
+
$$
|
| 245 |
+
|
| 246 |
+
where $d_{jj,k0}$ is the distance between the sample $s_{j,k}$ and $s_{j,0}$, the standard reference image for class $S_j$ (typically the normally illuminated image). Note that the condition $d_{jm,k0} < d_{jj,k0}$ holds only when there is a misclassification error, so $OPT$ combines the error rate with the ratio of within-class to between-class distance. By minimizing $OPT$, we can determine the best choices for $p$ and $q$. Figure 1 shows the relationship between $OPT$ and $p, q$. For one of our training databases, a minimum was obtained at $p = -0.2, q = -0.4$.
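The empirical optimisation of $p$ and $q$ can be sketched as a simple grid search over a toy data set (everything below, the synthetic features, ITV values and grid, is our own illustration of Eq. (14), not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, m = 5, 4, 6                        # classes, variations per class, feature dimension
refs = rng.normal(0, 3, (M, m))          # reference vectors s_{j,0}
S = refs[:, None, :] + rng.normal(0, 1.0, (M, K, m))  # varied samples s_{j,k}
eigvals = np.sort(rng.uniform(0.5, 50.0, m))[::-1]    # stand-in whitening eigenvalues
itv = rng.uniform(0.5, 3.0, m)           # stand-in ITV_i values from Eq. (11)

def opt_cost(p, q):
    # Weights omega_i; the normalisation constant cancels in the distance ratio.
    w = (eigvals ** p) * (itv ** q)
    cost = 0.0
    for j in range(M):
        for k in range(K):
            d = np.sqrt((((S[j, k] - refs) * w) ** 2).sum(axis=1))  # d_{jm,k0}
            bad = d < d[j]               # misclassification condition of Eq. (14)
            cost += (d[j] / d[bad]).sum()
    return cost

grid = np.round(np.arange(-1.0, 1.01, 0.2), 1)
p_best, q_best = min(((p, q) for p in grid for q in grid),
                     key=lambda pq: opt_cost(*pq))
```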
|
| 247 |
+
|
| 248 |
+
From the above, our final set of transformed eigenfaces would be:
|
| 249 |
+
|
| 250 |
+
$$
|
| 251 |
+
u_i' = \omega_i u_i M = \frac{1}{\sigma_i} \omega_i D v_i M \quad (15)
|
| 252 |
+
$$
|
| 253 |
+
|
| 254 |
+
where $i=[1...m]$. After transformation, we can apply PCA again on the compressed subspace to further reduce dimensionality (two-stage PCA).
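Once the weights $\omega_i$ are fixed, recognition with one sample per class reduces to a weighted nearest-neighbour search as in Eq. (13); a sketch with made-up gallery vectors and weights:

```python
import numpy as np

rng = np.random.default_rng(4)
M, m = 6, 8
gallery = rng.normal(0, 3, (M, m))       # one reference feature vector per class
w = rng.uniform(0.2, 1.5, m)             # precomputed weights omega_i

def classify(s):
    # Weighted Euclidean distance of Eq. (13) against every gallery face.
    d = np.sqrt((((s - gallery) * w) ** 2).sum(axis=1))
    return int(np.argmin(d))

probe = gallery[2] + rng.normal(0, 0.1, m)  # a lightly perturbed view of face 2
print(classify(probe))
```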
|
| 255 |
+
|
| 256 |
+
## 4. Experimental Results
|
| 257 |
+
|
| 258 |
+
The method is tested on the Asian Face Image Database PF01 [6] for changes in both lighting source position and facial expression. Each image is 171×171 pixels with 256 grey levels per pixel. Figures 1 and 2 show some examples from the database. To evaluate the performance of our method, we performed a 3-fold cross-validation on the database as follows. We choose one-third of the 107 subjects to construct our APCA model and one-third for training. We then add only the normal faces (pictures in the first column of Figures 1 and 2) of the remaining one-third of the data to our recognition database and attempt to recognize these faces under all the other conditions. This process is repeated three-fold using different partitions and the performance is averaged. All results reported in this paper are obtained on testing data only. Table 1 compares the recognition rates of APCA and PCA. It is clear from the results that Affine PCA performs much better than PCA for face recognition under variable lighting conditions: APCA reaches 99.3% on training data and 95.6% on testing data, with negligible reduction in performance for normally lit faces. Figure 3 plots the recognition rate against the number of eigenfaces used ($m$). It can be seen that selecting the principal 40 to 50 eigenfaces is sufficient for illumination-invariant face recognition. This number is

---PAGE_BREAK---

somewhat higher than is required for standard PCA, where selecting $m$ in the range 10 to 20 is sufficient; this is possibly a necessary consequence of the greater complexity of the APCA face subspace compared to standard PCA.
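The 3-fold protocol described above can be sketched as follows (a schematic of the partitioning only; which fold plays which role, and the model-building and recognition steps, are our reading and are elided):

```python
import numpy as np

rng = np.random.default_rng(5)
subjects = rng.permutation(107)          # the 107 subjects of PF01
folds = np.array_split(subjects, 3)

for i in range(3):
    model_ids = folds[i]                 # build the APCA model (rotation, weights) here
    train_ids = folds[(i + 1) % 3]       # training subjects
    test_ids = folds[(i + 2) % 3]        # only their neutral images enter the gallery
    # ...recognition of the test subjects under all other conditions goes here...
    print(len(model_ids), len(train_ids), len(test_ids))
```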
|
| 288 |
+
|
| 289 |
+
Figure 1. Examples of illumination changes in Asian Face Database PF01.
|
| 290 |
+
|
| 291 |
+
Figure 2. Examples of expression changes in Asian Face Database PF01.
|
| 292 |
+
|
| 293 |
+
As for variations in facial expression, APCA achieves a higher recognition rate than PCA, an increase of about 10%. For changes in both lighting condition and expression, APCA always outperforms PCA regardless of the number of eigenfaces, and the gain is almost constant at higher subspace dimensions. It can also be seen from Figure 3 that the recognition rate for expression changes does not decrease as dramatically as for illumination variations when the number of eigenfeatures is reduced. Therefore, as few as 20 features are enough to recognize faces with facial expression variations.
|
| 294 |
+
|
| 295 |
+
We also test the performance of APCA under variations in illumination and expression simultaneously. The recognition rate of APCA is less than 5% below its rates for illumination changes or expression changes alone, and it is clearly higher than the recognition rate of PCA. This shows that the performance of APCA is stable in spite of the complexity of the variations. PCA, however, is not as robust across the different variations: for illumination changes it achieves less than 60% accuracy, the accuracy rises to more than 80% for expression variations, and it drops back to 60% when illumination and expression changes are combined. This phenomenon has also been reported in [19]: a single representation is generally not sufficient to overcome variations in both illumination and expression.
|
| 298 |
+
|
| 299 |
+
Figure 3. Recognition Rate Vs. Number of features.
|
| 300 |
+
|
| 301 |
+
| Method | Illumination Variation | Expression Variation | Illumination and Expression Variations |
|---|---|---|---|
| PCA | 57.3% | 84.6% | 70.6% |
| Affine PCA | 95.6% | 92.2% | 86.8% |
|
| 302 |
+
|
| 303 |
+
Table 1. Comparison of recognition rate between APCA and PCA.
|
| 304 |
+
|
| 305 |
+
## 5. Conclusion
|
| 306 |
+
|
| 307 |
+
We have described an efficient and easy-to-compute face recognition algorithm based on warping the face subspace constructed by PCA. The affine procedure comprises three steps: rotating the eigenspace, whitening transformation, and filtering the eigenfaces. After the affine transformation, features are assigned different weights for recognition, which in effect enlarges the between-class covariance while minimizing the within-class covariance.

---PAGE_BREAK---

Only two free parameters need to be optimised, compared with the high-dimensional optimisation required by other methods. The method can deal not only with variations in illumination and expression separately but also performs well for the combination of both changes, with only one sample image per class. Experiments show that APCA is more robust to changes in illumination and expression and has better generalization capacity than the FLD method.
|
| 311 |
+
|
| 312 |
+
A shortcoming of the algorithm is that we cannot guarantee that the weights obtained are the best for recognition, since we only rotate the eigenspace in the direction that best represents the within-class covariance. Future work will be to search the eigenspace for the eigenfeatures best suited to face recognition.
|
| 313 |
+
|
| 314 |
+
## References
|
| 315 |
+
|
| 316 |
+
[1] P.Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.19, No.7, 711-720, 1997.
|
| 317 |
+
|
| 318 |
+
[2] Chengjun Liu and Harry Wechsler, "Enhanced Fisher Linear Discriminant Models for Face Recognition", 14th International Conference on Pattern Recognition, ICPR'98, Queensland, Australia, August 17-20, 1998.
|
| 319 |
+
|
| 320 |
+
[3] Chengjun Liu and Harry Wechsler, "Evolution of Optimal Projection Axes (OPA) for Face Recognition". Third IEEE International Conference on Automatic face and Gesture Recognition, FG'98, Nara, Japan, April 14-16,1998.
|
| 321 |
+
|
| 322 |
+
[4] Dao-Qing Dai, Guo-Can Feng, Jian-Huang Lai and P.C. Yuen, "Face Recognition Based on Local Fisher Features", 2nd Int. Conf. on Multimodal Interface, Beijing, 2000.
|
| 323 |
+
|
| 324 |
+
[5] Hua Yu and Jie Yang, "A Direct LDA Algorithm for High-Dimensional Data-with Application to Face Recognition", Pattern Recognition 34(10), 2001, pp. 2067-2070.
|
| 325 |
+
|
| 326 |
+
[6] Intelligent Multimedia Lab., "Asian Face Image Database PF01", http://nova.postech.ac.kr/.
|
| 327 |
+
|
| 328 |
+
[7] Georghiades, A.S. and Belhumeur, P.N. and Kriegman, D.J., "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose", IEEE Trans. Pattern Anal. Mach. Intelligence, vol.23, No. 6, 2001, pp. 643-660.
|
| 329 |
+
|
| 330 |
+
[8] Daniel L. Swets and John Weng, "Using discriminant eigenfeatures for image retrieval", IEEE Trans. on PAMI, vol. 18, No. 8, 1996, pp. 831-836.
|
| 331 |
+
|
| 332 |
+
[9] X.W. Hou, S.Z. Li, H.J. Zhang, "Direct Appearance Models". In Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition. Hawaii. December, 2001.
|
| 333 |
+
|
| 334 |
+
[10] Z. Xue, S.Z. Li, and E.K. Teoh. "Facial Feature Extraction and Image Warping Using PCA Based Statistic Model". In Proceedings of 2001 International Conference on Image Processing. Thessaloniki, Greece. October 7-10, 2001.
|
| 335 |
+
|
| 336 |
+
[11] S.Z. Li, K.L. Chan and C.L. Wang. "Performance Evaluation of the Nearest Feature Line Method in Image Classification and Retrieval". IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1335-1339. November, 2000.
|
| 337 |
+
|
| 338 |
+
[12] G.D. Guo, H.J. Zhang, S.Z. Li. "Pairwise Face Recognition". In Proceedings of 8th IEEE International Conference on Computer Vision. Vancouver, Canada. July 9-12, 2001.
|
| 339 |
+
|
| 340 |
+
[13] S. Mika, G. Ratsch, J.Weston, and K. R. M. B. Scholkopf, "Fisher discriminant analysis with kernels", Neural networks for Signal Processing IX, 1999, pp.41-48.
|
| 341 |
+
|
| 342 |
+
[14] M. A. Turk and A. P. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, vol. 3, No. 1, 1991, pp.71-86.
|
| 343 |
+
|
| 344 |
+
[15] Jie Zhou and David Zhang "Face Recognition by Combining Several Algorithms", ICPR 2002.
|
| 345 |
+
|
| 346 |
+
[16] Alexandre Lemieux and Marc Parizeau, "Experiments on Eigenfaces Robustness", ICPR 2002.
|
| 347 |
+
|
| 348 |
+
[17] A. M. Martinez and A. C. Kak, "PCA versus LDA", IEEE TPAMI, 23(2):228-233, 2001.
|
| 349 |
+
|
| 350 |
+
[18] A. Yilmaz and M. Gokmen, "Eigenhill vs. eigenface and eigengedge", In Proceedings of International Conference Pattern Recognition, Barcelona, Spain, 2000, pp.827-830.
|
| 351 |
+
|
| 352 |
+
[19] Yael Adini, Yael Moses, and Shimon Ullman, "Face Recognition: The Problem of Compensating for Changes in Illumination Direction", IEEE TPAMI, Vol. 19, No. 7, 1997.
|
| 353 |
+
|
| 354 |
+
[20] Aleix M. Martinez, "Recognizing Imprecisely Localized, Partially Occluded and Expression Variant Faces from a Single Sample per Class", IEEE TPAMI, Vol. 24, No. 6, 2002.
|
| 355 |
+
|
| 356 |
+
[21] Ronen Basri and David W. Jacobs, "Lambertian Reflectance and Linear Subspaces", IEEE TPAMI, Vol. 25, No. 2, 2003.
|
| 357 |
+
|
| 358 |
+
[22] Peter W. Hallinan, "A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Conditions", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1994.
|
| 359 |
+
|
| 360 |
+
[23] D. Beymer and T. Poggio, "Face Recognition from One Example View", Science, Vol. 272, No. 5250, 1996.
|
| 361 |
+
|
| 362 |
+
[24] M. J. Black, D. J. Fleet and Y. Yacoob, "Robustly Estimating Changes in Image Appearance", Computer Vision and Image Understanding, Vol. 78, No. 1, 2000.
|
| 363 |
+
|
| 364 |
+
[25] Shaokang Chen, Brian C. Lovell and Sai Sun, "Face Recognition with APCA in Variant Illuminations", Workshop on Signal Processing and Applications, Australia, December, 2002.
|
samples_new/texts_merged/276850.md
ADDED
|
@@ -0,0 +1,386 @@
On the entropy for group actions on the circle

by

Eduardo Jorquera (Santiago)

**Abstract.** We show that for a finitely generated group of $C^2$ circle diffeomorphisms, the entropy of the action equals the entropy of the restriction of the action to the non-wandering set.

**1. Introduction.** Let $(X, \mathrm{dist})$ be a compact metric space and $G$ a group of homeomorphisms of $X$ generated by a finite family of elements $\Gamma = \{g_1, \dots, g_n\}$. To simplify, we will always assume that $\Gamma$ is symmetric, that is, $g^{-1} \in \Gamma$ for every $g \in \Gamma$. For each $n \in \mathbb{N}$ we denote by $B_{\Gamma}(n)$ the ball of radius $n$ in $G$ (with respect to $\Gamma$), that is, the set of elements $f \in G$ which may be written in the form $f = g_{i_m} \cdots g_{i_1}$ for some $m \le n$ and $g_{i_j} \in \Gamma$. For $f \in G$ we let $\|f\| = \|f\|_{\Gamma} = \min\{n : f \in B_{\Gamma}(n)\}$.

As in the classical case, given $\varepsilon > 0$ and $n \in \mathbb{N}$, two points $x, y$ in $X$ are said to be $(n, \varepsilon)$-separated if there exists $g \in B_{\Gamma}(n)$ such that $\mathrm{dist}(g(x), g(y)) \ge \varepsilon$. A subset $A \subset X$ is $(n, \varepsilon)$-separated if all $x \neq y$ in $A$ are $(n, \varepsilon)$-separated. We denote by $s(n, \varepsilon)$ the maximal possible cardinality (perhaps infinite) of an $(n, \varepsilon)$-separated set. The topological entropy of the action at the scale $\varepsilon$ is defined by

$$h_{\Gamma}(G \curvearrowright X, \varepsilon) = \limsup_{n \uparrow \infty} \frac{\log(s(n, \varepsilon))}{n},$$

and the *topological entropy* is defined by

$$h_{\Gamma}(G \curvearrowright X) = \lim_{\varepsilon \downarrow 0} h_{\Gamma}(G \curvearrowright X, \varepsilon).$$

Notice that, although $h_{\Gamma}(G \curvearrowright X, \varepsilon)$ depends on the system of generators, the properties of having zero, positive, or infinite entropy are independent of this choice.
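As an illustration of these definitions (this example is not from the paper; the generating set, the rotation angle, and all function names are hypothetical choices), the following Python sketch brute-forces a greedy lower bound for $s(n, \varepsilon)$ over a finite grid of sample points on the circle of length 1. Since a rotation is an isometry, the count stays bounded in $n$, consistent with zero entropy for isometric actions.

```python
from itertools import product

def circle_dist(x, y):
    """Distance on the circle R/Z of total length 1."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

# Hypothetical symmetric generating set: a rotation and its inverse.
ALPHA = 0.3
gens = [lambda x: (x + ALPHA) % 1.0, lambda x: (x - ALPHA) % 1.0]

def words(n):
    """Index words of length <= n over the generators (the ball B_Gamma(n))."""
    out = [()]
    for m in range(1, n + 1):
        out.extend(product(range(len(gens)), repeat=m))
    return out

def apply_word(w, x):
    """Apply the composition g_{i_m} ... g_{i_1} encoded by w to the point x."""
    for i in reversed(w):
        x = gens[i](x)
    return x

def separated(x, y, n, eps):
    """True if some element of B_Gamma(n) moves x and y at least eps apart."""
    return any(circle_dist(apply_word(w, x), apply_word(w, y)) >= eps
               for w in words(n))

def greedy_separated_set(points, n, eps):
    """Greedy (hence only a lower bound) (n, eps)-separated subset of a sample."""
    chosen = []
    for p in points:
        if all(separated(p, q, n, eps) for q in chosen):
            chosen.append(p)
    return chosen

grid = [k / 40 for k in range(40)]
for n in (1, 3, 5):
    print(n, len(greedy_separated_set(grid, n, 0.25)))  # stays at 4 for every n
```

Because each generator here preserves distances, a pair is $(n, \varepsilon)$-separated exactly when it is already $\varepsilon$-separated, so the count does not grow with $n$; positive entropy requires generators that expand some pairs exponentially often.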
The definition above was proposed in [5] as an extension of the classical topological entropy of single maps (the definition extends to pseudo-groups of homeomorphisms, and hence is suitable for applications in foliation theory). Indeed, for a homeomorphism $f$, the topological entropy of the action of $\mathbb{Z} \simeq \langle f \rangle$ equals twice the (classical) topological entropy of $f$. Nevertheless, the functorial properties of this notion remain unclear. For example, the following fundamental question is open.

2000 Mathematics Subject Classification: 20B27, 37A35, 37C85, 37E10.

Key words and phrases: topological entropy, group actions, circle diffeomorphisms.

**GENERAL QUESTION.** Is it true that $h_{\Gamma}(G \curvearrowright X)$ is equal to $h_{\Gamma}(G \curvearrowright \Omega)$?

Here $\Omega = \Omega(G \curvearrowright X)$ denotes the *non-wandering set* of the action, or in other words

$$ \Omega = \{x \in X : \text{for every neighborhood } U \text{ of } x, \text{ we have } f(U) \cap U \neq \emptyset \text{ for some } f \neq \mathrm{id} \text{ in } G\}. $$
This is a closed invariant set whose complement $\Omega^c$ corresponds to the *wandering set* of the action.

The notion of topological entropy for group actions is quite appropriate in the case where $X$ is a one-dimensional manifold. In fact, in this case, the topological entropy is necessarily finite (cf. §2). Moreover, in the case of actions by diffeomorphisms, the dichotomy $h_{\text{top}} = 0$ or $h_{\text{top}} > 0$ is well understood. Indeed, according to a result originally proved by Ghys, Langevin, and Walczak for groups of $C^2$ diffeomorphisms [5], and extended by Hurder to groups of $C^1$ diffeomorphisms (see for instance [9]), we have $h_{\text{top}} > 0$ if and only if there exists a resilient orbit for the action. This means that there exists a group element $f$ contracting an interval towards a fixed point $x_0$ inside it, and another element $g$ which sends $x_0$ into its basin of contraction under $f$.

The results of this work give a positive answer to the General Question above in the context of group actions on one-dimensional manifolds under certain mild assumptions.

**THEOREM A.** If $G$ is a finitely generated subgroup of $\operatorname{Diff}_+^2(S^1)$, then for every finite system of generators $\Gamma$ of $G$, we have

$$ h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega). $$

Our proof of Theorem A actually works in the Denjoy class $C^{1+\mathrm{bv}}$, and applies to general codimension-one foliations on compact manifolds. In the class $C^{1+\mathrm{Lip}}$, it is quite possible that one could give an alternative proof using standard techniques from level theory [2, 6].

It is unclear whether Theorem A extends to actions of lower regularity. However, it still holds under certain algebraic hypotheses. In fact (quite unexpectedly), the regularity hypothesis is used to rule out the existence of elements $f \in G$ that fix some connected component of the wandering set and which are *distorted*, that is,

$$ \lim_{n \to \infty} \frac{\|f^n\|}{n} = 0. $$

Actually, for the equality between the entropies it suffices to require that no element of $G$ be subexponentially distorted. In other words, it suffices to require that, for each element $f \in G$ of infinite order, there exists a non-decreasing function $q : \mathbb{N} \to \mathbb{N}$ (depending on $f$) with subexponential growth satisfying $q(\|f^n\|) \ge n$ for every $n \in \mathbb{N}$. This is an algebraic condition which is satisfied by many groups, for example nilpotent or free groups. (We refer the reader to [1] for a nice discussion of distorted elements.) Under this hypothesis, the following result holds.

**THEOREM B.** If $G$ is a finitely generated subgroup of $\operatorname{Homeo}_+(S^1)$ without subexponentially distorted elements, then for every finite system of generators $\Gamma$ of $G$, we have

$$h_{\Gamma}(G \curvearrowright S^1) = h_{\Gamma}(G \curvearrowright \Omega).$$

The entropy of general group actions and distorted elements seem to be related in an interesting manner. Indeed, though the topological entropy of a single homeomorphism $f$ may be equal to zero, if this map appears as a subexponentially distorted element inside an acting group, then it may create positive entropy for the group action.
**2. Some background.** In this work we will consider the normalized length on the circle, and every homeomorphism will be orientation preserving.

We begin by noticing that if $G$ is a finitely generated group of circle homeomorphisms and $\Gamma$ is a finite generating system for $G$, then for all $n \in \mathbb{N}$ and all $\varepsilon > 0$ one has

$$ (1) \qquad s(n, \varepsilon) \le \frac{1}{\varepsilon} \#B_{\Gamma}(n). $$

Indeed, let $A$ be an $(n, \varepsilon)$-separated set of cardinality $s(n, \varepsilon)$. Then for any two adjacent points $x, y$ in $A$ there exists $f \in B_{\Gamma}(n)$ such that $\text{dist}(f(x), f(y)) \ge \varepsilon$. For a fixed $f$, the intervals $[f(x), f(y)]$ which appear have disjoint interiors. Since the total length of the circle is 1, any given $f$ can be used in this construction at most $1/\varepsilon$ times, which immediately gives (1).

Notice that, taking the logarithm on both sides of (1), dividing by $n$, and passing to the limit gives

$$h_{\Gamma}(G \curvearrowright S^1) \le \operatorname{gr}_{\Gamma}(G),$$

where $\operatorname{gr}_{\Gamma}(G)$ denotes the *growth* of $G$ with respect to $\Gamma$, that is,

$$\operatorname{gr}_{\Gamma}(G) = \lim_{n \to \infty} \frac{\log(\#B_{\Gamma}(n))}{n}.$$

Some easy consequences of this fact are the following:

* If $G$ has subexponential growth, that is, if $\operatorname{gr}_\Gamma(G) = 0$ (in particular, if $G$ is nilpotent, or if $G$ is the Grigorchuk–Machì group considered in [8]), then $h_\Gamma(G \curvearrowright S^1) = 0$ for all finite generating systems $\Gamma$.

* In the general case, if $\# \Gamma = q \ge 1$, then from the relations

$$ \#B_{\Gamma}(n) \le 1 + \sum_{j=1}^{n} 2q(2q-1)^{j-1} = \begin{cases} 1 + \frac{q}{q-1}((2q-1)^n - 1), & q \ge 2, \\ 1 + 2n, & q=1, \end{cases} $$

one concludes that

$$ h_{\Gamma}(G \curvearrowright S^1) \le \log(2q - 1). $$

This shows in particular that the entropy of the action of $G$ on $S^1$ is finite. Notice that this may also be deduced from the probabilistic arguments of [3] (see Théorème D therein). However, these arguments only yield the weaker estimate $h_{\Gamma}(G \curvearrowright S^1) \le \log(2q)$ when $\Gamma$ has cardinality $q$.
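The closed form in the display above is an elementary geometric-series identity; the following sketch (illustrative only, not from the paper) checks it numerically for small $q$ and $n$.

```python
def ball_bound_sum(q, n):
    """Left-hand side: 1 + sum_{j=1}^{n} 2q(2q-1)^(j-1)."""
    return 1 + sum(2 * q * (2 * q - 1) ** (j - 1) for j in range(1, n + 1))

def ball_bound_closed(q, n):
    """Right-hand side: the closed form, split by cases as in the display."""
    if q == 1:
        return 1 + 2 * n
    # (2q-1)^n - 1 is divisible by (2q-1) - 1 = 2(q-1), so the division is exact.
    return 1 + q * ((2 * q - 1) ** n - 1) // (q - 1)

for q in range(1, 6):
    for n in range(0, 8):
        assert ball_bound_sum(q, n) == ball_bound_closed(q, n)
print("ball-size identity verified for q <= 5, n <= 7")
```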
**3. Some preparations for the proofs.** The statements of our results are obvious when the non-wandering set of the action equals the whole circle. Hence, we will assume in what follows that $\Omega$ is a proper subset of $S^1$, and we will denote by $I$ a connected component of the complement of $\Omega$. Let $\mathrm{St}(I)$ denote the stabilizer of $I$ in $G$.

LEMMA 1. *The stabilizer $\mathrm{St}(I)$ is either trivial or infinite cyclic.*

*Proof.* The (restrictions to $I$ of the) non-trivial elements of $\mathrm{St}(I)|_I$ have no fixed points, for otherwise these points would be non-wandering. Thus $\mathrm{St}(I)|_I$ acts freely on $I$, and according to Hölder's Theorem [4, 7], its action is semiconjugate to an action by translations. We claim that if $\mathrm{St}(I)|_I$ is non-trivial, then it is infinite cyclic. Indeed, if not then the corresponding group of translations is dense. This implies that the preimage by the semiconjugacy of any point whose preimage is a single point corresponds to a non-wandering point for the action. But this contradicts the fact that $I$ is contained in $\Omega^c$.

If $\mathrm{St}(I)|_I$ is trivial then $f|_I$ is trivial for every $f \in \mathrm{St}(I)$, and hence $f$ itself must be the identity. We then conclude that $\mathrm{St}(I)$ is trivial.

Analogously, $\mathrm{St}(I)$ is cyclic if $\mathrm{St}(I)|_I$ is cyclic. In this case, $\mathrm{St}(I)|_I$ is generated by the restriction to the interval $I$ of the generator of $\mathrm{St}(I)$. $\blacksquare$

**DEFINITION 1.** A connected component $I$ of $\Omega^c$ will be called *of type 1* if $\mathrm{St}(I)$ is trivial, and *of type 2* if $\mathrm{St}(I)$ is infinite cyclic.

Notice that the families of connected components of type 1 and 2 are invariant, that is, for each $f \in G$ the interval $f(I)$ is of type 1 (resp. of type 2) if $I$ is of type 1 (resp. of type 2). Moreover, given two connected components of type 1 of $\Omega^c$, there exists at most one element of $G$ sending the former to the latter. Indeed, if $f(I) = g(I)$ then $g^{-1}f$ is in the stabilizer of $I$, and hence $f = g$ if $I$ is of type 1.

LEMMA 2. *Let $x_1, \dots, x_m$ be points contained in a single type 1 connected component of $\Omega^c$. If for some $\varepsilon > 0$ the points $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le 1 + 1/\varepsilon$.*

*Proof.* Let $I = ]a,b[$ be the connected component of type 1 of $\Omega^c$ containing the points $x_1, \dots, x_m$. After renumbering the $x_i$'s, we may assume that $a < x_1 < \dots < x_m < b$. For each $1 \le i \le m-1$ one can choose an element $g_i \in B_{\Gamma}(n)$ such that $\text{dist}(g_i(x_i), g_i(x_{i+1})) \ge \varepsilon$. Now, since $I$ is of type 1, the intervals $]g_i(x_i), g_i(x_{i+1})[$ are pairwise disjoint. Therefore, the number of these intervals times their minimal length is less than or equal to 1. This gives $(m-1)\varepsilon \le 1$, thus proving the lemma. $\blacksquare$

The case of connected components $I$ of type 2 of $\Omega^c$ is much more complicated. The difficulty is that if the generator of the stabilizer of $I$ is subexponentially distorted in $G$, then there exist exponentially many $(n, \varepsilon)$-separated points inside $I$, and hence a relevant part of the entropy is “concentrated” in $I$. To deal with this problem, for each connected component $I$ of type 2 of $\Omega^c$ we denote by $p_I$ its middle point, and then we define $\ell_I: G \to \mathbb{N}_0$ as follows. Let $h$ be the generator of the stabilizer of $I$ such that $h(x) > x$ for all $x$ in $I$. For each $f \in G$ the element $fhf^{-1}$ is the generator of the stabilizer of $f(I)$ with the analogous property. We then let $\ell_I(f) = |r|$, where $r$ is the unique integer such that

$$ f h^r f^{-1} (p_{f(I)}) \leq f(p_I) < f h^{r+1} f^{-1} (p_{f(I)}). $$
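To illustrate the counting behind $\ell_I$ (this example is not from the paper; the map $h$ and all names are hypothetical), take $I = ]0,1[$ with $h(x) = 2x/(1+x)$, which satisfies $h(x) > x$ on $I$. Applying $f^{-1}$ to the defining inequality shows that $r$ counts fundamental domains of $h$ between $f^{-1}(p_{f(I)})$ and $p_I$; for $f = h^k$ in the stabilizer itself this gives $\ell_I(h^k) = |k|$, which the sketch below recovers.

```python
TOL = 1e-9  # guard against floating-point drift when comparing iterates

def h(x):
    """A fixed-point-free homeomorphism of I = ]0,1[ with h(x) > x."""
    return 2 * x / (1 + x)

def h_inv(x):
    return x / (2 - x)

def h_pow(x, k):
    """k-th iterate of h (negative k uses the inverse)."""
    for _ in range(abs(k)):
        x = h(x) if k > 0 else h_inv(x)
    return x

def domain_count(start, target):
    """Signed r with h^r(start) <= target < h^{r+1}(start); ell is |r|."""
    r, x = 0, start
    if x <= target + TOL:
        while h(x) <= target + TOL:
            x, r = h(x), r + 1
    else:
        while x > target + TOL:
            x, r = h_inv(x), r - 1
    return r

# ell_I(h^k): the start point is (h^k)^{-1}(p_I) = h^{-k}(p_I), the target is p_I.
p_I = 0.5
for k in (-3, 0, 2, 5):
    print(k, abs(domain_count(h_pow(p_I, -k), p_I)))  # prints |k|
```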
LEMMA 3. *For all $f,g$ in $G$ one has*

$$ \ell_I(g \circ f) \le \ell_{f(I)}(g) + \ell_I(f) + 1. $$

*Proof.* Let $r$ be the unique integer such that

$$ (2) \qquad (fhf^{-1})^r (p_{f(I)}) \le f(p_I) < (fhf^{-1})^{r+1} (p_{f(I)}), $$

and let $s$ be the unique integer for which

$$ (gfhf^{-1}g^{-1})^s (p_{gf(I)}) \le g(p_{f(I)}) < (gfhf^{-1}g^{-1})^{s+1} (p_{gf(I)}), $$

so that

$$ \ell_I(f) = |r|, \quad \ell_{f(I)}(g) = |s|. $$

We then have

$$ g^{-1}(gfhf^{-1}g^{-1})^s (p_{gf(I)}) \le p_{f(I)} < g^{-1}(gfhf^{-1}g^{-1})^{s+1} (p_{gf(I)}), $$

that is,

$$ (fhf^{-1})^s g^{-1} (p_{gf(I)}) \le p_{f(I)} < (fhf^{-1})^{s+1} g^{-1} (p_{gf(I)}). $$

Therefore,

$$ (f h f^{-1})^r (f h f^{-1})^s g^{-1}(p_{gf(I)}) \leq f(p_I) < (f h f^{-1})^{r+1} (f h f^{-1})^{s+1} g^{-1}(p_{gf(I)}), $$

and hence

$$ (f h f^{-1})^{r+s} g^{-1}(p_{gf(I)}) \leq f(p_I) < (f h f^{-1})^{r+s+2} g^{-1}(p_{gf(I)}). $$

This easily gives

$$ g(f h f^{-1})^{r+s} g^{-1}(p_{gf(I)}) \leq g f(p_I) < g(f h f^{-1})^{r+s+2} g^{-1}(p_{gf(I)}), $$

and thus

$$ (g f h f^{-1} g^{-1})^{r+s}(p_{gf(I)}) \leq g f(p_I) < (g f h f^{-1} g^{-1})^{r+s+2}(p_{gf(I)}). $$

This shows that $\ell_I(gf)$ equals either $|r+s|$ or $|r+s+1|$, which concludes the proof. $\blacksquare$

The following corollary is a direct consequence of the preceding lemma, but may be proved independently.

**COROLLARY 1.** For every $f \in G$ one has

$$ |\ell_I(f) - \ell_{f(I)}(f^{-1})| \leq 1. $$

*Proof.* From (2) one obtains

$$ h^{-(r+1)}(p_I) < f^{-1}(p_{f(I)}) \leq h^{-r}(p_I) < h^{-r+1}(p_I), $$

and hence $\ell_{f(I)}(f^{-1})$ equals either $|r|$ or $|r+1|$. Since $\ell_I(f) = |r|$, the corollary follows. $\blacksquare$
**4. The proof in the smooth case.** To rule out the possibility of “concentration” of the entropy on a type 2 connected component $I$ of $\Omega^c$, in the $C^2$ case we will use classical control of distortion arguments in order to construct, starting from the function $\ell_I$, a kind of quasi-morphism from $G$ into $\mathbb{N}_0$. Slightly more generally, let $\mathcal{F}$ be any finite family of connected components of type 2 of $\Omega^c$. We denote by $\mathcal{F}^G$ the family of all intervals contained in the orbits of the intervals in $\mathcal{F}$. For each $f \in G$ we then define

$$ \ell_{\mathcal{F}}(f) = \sup_{I \in \mathcal{F}^G} \ell_I(f). $$

*A priori*, the value of $\ell_{\mathcal{F}}$ could be infinite. We claim, however, that for groups of $C^2$ diffeomorphisms this value is necessarily finite for every element $f$.

**PROPOSITION 1.** For every finite family $\mathcal{F}$ of type 2 connected components of $\Omega^c$, the value of $\ell_{\mathcal{F}}(f)$ is finite for each $f \in G$.

To prove this proposition, we will need to estimate the function $\ell_I(f)$ in terms of the distortion of $f$ on the interval $I$.

LEMMA 4. *For each fixed type 2 connected component $I$ of $\Omega^c$ and every $g \in G$, the value of $\ell_I(g)$ is bounded from above by a number $L(V)$ depending on $V = \mathrm{var}(\log(g'|_I))$, the total variation of the logarithm of the derivative of the restriction of $g$ to $I$.*

*Proof.* Write $I = ]a,b[$ and $g(I) = ]\bar{a},\bar{b}[$. If $h$ is a generator of the stabilizer of $I$, then for every $f \in G$ the value of $\ell_I(f)$ corresponds (up to some constant $\pm 1$) to the number of fundamental domains for the dynamics of $fhf^{-1}$ on $f(I)$ between the points $p_{f(I)}$ and $f(p_I)$, which in turn corresponds to the number of fundamental domains for the dynamics of $h$ on $I$ between $f^{-1}(p_{f(I)})$ and $p_I$. Therefore, we need to show that there exist $c < d$ in $]a,b[$ depending on $V$ and such that $g^{-1}(p_{g(I)})$ belongs to $[c,d]$. We will show that this happens for the values

$$c = a + \frac{|I|}{2e^V} \quad \text{and} \quad d = b - \frac{|I|}{2e^V}.$$

We will just check that the first choice works, leaving the second one to the reader. By the Mean Value Theorem, there exist $x \in g(I)$ and $y \in [\bar{a}, p_{g(I)}]$ such that

$$ (g^{-1})'(x) = \frac{|I|}{|g(I)|} $$

and

$$ (g^{-1})'(y) = \frac{|g^{-1}([\bar{a}, p_{g(I)}])|}{|[\bar{a}, p_{g(I)}]|} = \frac{g^{-1}(p_{g(I)}) - a}{|g(I)|/2}. $$

By the definition of the constant $V$, we have $(g^{-1})'(x)/(g^{-1})'(y) \le e^V$. This gives

$$ e^V \ge \frac{|I|/|g(I)|}{2(g^{-1}(p_{g(I)}) - a)/|g(I)|} = \frac{|I|}{2(g^{-1}(p_{g(I)}) - a)}, $$

thus proving that $g^{-1}(p_{g(I)}) \ge a + |I|/2e^V$, as we wanted to show. $\blacksquare$
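As a numerical sanity check on the estimate above (a hypothetical example, not from the paper), take $I = ]0,1[$ and the diffeomorphism $g(x) = (e^x - 1)/(e - 1)$ of $I$ onto itself. Then $\log g'(x) = x - \log(e-1)$ is monotone, so $V = 1$, and the lemma predicts that $g^{-1}(p_{g(I)})$ lands in $[c, d]$ with $c = 1/(2e)$ and $d = 1 - 1/(2e)$.

```python
import math

E = math.e

def g(x):
    """Diffeomorphism of ]0,1[ onto itself with var(log g') = 1."""
    return (math.exp(x) - 1) / (E - 1)

def g_inv(y):
    return math.log(1 + (E - 1) * y)

V = 1.0                      # total variation of log g' over ]0,1[
a, b = 0.0, 1.0
c = a + (b - a) / (2 * math.exp(V))
d = b - (b - a) / (2 * math.exp(V))

p_gI = (g(a) + g(b)) / 2     # midpoint of g(I) = ]0,1[
x = g_inv(p_gI)
print(c, x, d)               # the pullback of the midpoint lands in [c, d]
assert c <= x <= d
```

The point of the lemma is precisely this uniformity: however much $g$ distorts $I$, a bounded-variation bound $V$ keeps the pullback of the midpoint a definite distance from both endpoints, and hence a bounded number of $h$-fundamental domains away from $p_I$.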
*Proof of Proposition 1.* Let $J = ]\bar{a}, \bar{b}[$ be an interval in the $G$-orbit of $I = ]a, b[$. If $g = g_{i_n} \cdots g_{i_1}$, with $g_{i_j} \in \Gamma$, is an element of minimal length sending $I$ to $J$, then the intervals $I, g_{i_1}(I), g_{i_2}g_{i_1}(I), \dots, g_{i_{n-1}} \cdots g_{i_2}g_{i_1}(I)$ have pairwise disjoint interiors. Therefore,

$$ \mathrm{var}(\log(g'|_I)) \le \sum_{j=0}^{n-1} \mathrm{var}(\log(g'_{i_{j+1}}|_{g_{i_j}\cdots g_{i_1}(I)})) \le \sum_{h \in \Gamma} \mathrm{var}(\log(h')) =: W. $$

Moreover, setting $V = \mathrm{var}(\log(f'))$, we have

$$ \mathrm{var}(\log((fg)'|_I)) \le \mathrm{var}(\log(g'|_I)) + \mathrm{var}(\log(f')) \le W + V. $$

By Lemmas 3 and 4 and Corollary 1,

$$ \begin{aligned} \ell_J(f) &\le \ell_J(g^{-1}) + \ell_I(fg) + 1 \le \ell_I(g) + \ell_I(fg) + 2 \\ &\le L(W) + L(W+V) + 2. \end{aligned} $$

This proves the assertion of the proposition when $\mathcal{F}$ consists of a single interval. The case of a general finite $\mathcal{F}$ follows easily. $\blacksquare$

For a given $\varepsilon > 0$ we define $\ell_{\varepsilon} = \ell_{\mathcal{F}_{\varepsilon}}$, where $\mathcal{F}_{\varepsilon} = \{I_1, \dots, I_k\}$ is the family of all connected components of $\Omega^c$ having length greater than or equal to $\varepsilon$, with $k = k(\varepsilon)$. Notice that, by Lemma 3, for every $f,g$ in $G$ one has

$$ (3) \qquad \ell_{\varepsilon}(gf) \le \ell_{\varepsilon}(g) + \ell_{\varepsilon}(f) + 1. $$

LEMMA 5. *There exist constants $A(\varepsilon) > 0$ and $B(\varepsilon)$ with the following property: If $x_1, \dots, x_m$ are points in a single connected component of type 2 of $\Omega^c$ and $x_i, x_j$ are $(n, \varepsilon)$-separated for every $i \neq j$, then $m \le A(\varepsilon)n + B(\varepsilon)$.*

*Proof.* Write $c_\varepsilon = \max\{\ell_\varepsilon(g) : g \in \Gamma\}$ (according to Proposition 1, the value of $c_\varepsilon$ is finite). Let $I$ be the type 2 connected component of $\Omega^c$ containing $x_1, \dots, x_m$. We may assume that $x_1 < \dots < x_m$. For each $1 \le i \le k$ let $h_i$ be the generator of $\mathrm{St}(I_i)$. Notice that $\ell_\varepsilon(h_i^r) \ge |r|$ for all $r \in \mathbb{Z}$.

If $f$ is an element of $B_{\Gamma}(n)$ sending $I$ to some $I_i$, then the number of points which are $\varepsilon$-separated by $f$ is less than or equal to $1/\varepsilon + 1$. We claim that the number of elements of $B_{\Gamma}(n)$ sending $I$ to $I_i$ is bounded above by $4nc_{\varepsilon} + 4n - 1$. Indeed, if $g$ also sends $I$ onto $I_i$ then $gf^{-1} \in \mathrm{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore, using (3) one obtains $|r| \le \ell_{\varepsilon}(h_i^r) \le 2nc_{\varepsilon} + 2n - 1$.

Since the previous arguments apply to each type 2 interval $I_i$, we have

$$ m \le k(1/\varepsilon + 1)(4nc_{\varepsilon} + 4n - 1). $$

Therefore, letting

$$ A(\varepsilon) = (4k + 4k/\varepsilon)(1 + c_{\varepsilon}) \quad \text{and} \quad B(\varepsilon) = -(k + k/\varepsilon) $$

concludes the proof. $\blacksquare$
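The choice of $A(\varepsilon)$ and $B(\varepsilon)$ is just a rearrangement of the bound $k(1/\varepsilon + 1)(4nc_{\varepsilon} + 4n - 1)$; the sketch below (illustrative only) confirms the algebra numerically over a few sample values.

```python
def bound(k, eps, c, n):
    """The bound from the proof: k(1/eps + 1)(4nc + 4n - 1)."""
    return k * (1 / eps + 1) * (4 * n * c + 4 * n - 1)

def affine_bound(k, eps, c, n):
    """The same quantity rewritten as A(eps) * n + B(eps)."""
    A = (4 * k + 4 * k / eps) * (1 + c)
    B = -(k + k / eps)
    return A * n + B

for k in (1, 2, 5):
    for eps in (0.5, 0.25, 0.125):
        for c in (0, 1, 7):
            for n in (1, 4, 100):
                assert abs(bound(k, eps, c, n) - affine_bound(k, eps, c, n)) < 1e-9
print("A(eps) n + B(eps) matches the proof's bound")
```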
To conclude the proof of Theorem A, the following notation will be useful.

**NOTATION.** Given $\varepsilon > 0$ and $n \in \mathbb{N}$, we denote by $s(n, \varepsilon)$ the largest cardinality of an $(n, \varepsilon)$-separated subset of $S^1$. Likewise, $s_{\Omega}(n, \varepsilon)$ will denote the largest cardinality of an $(n, \varepsilon)$-separated set contained in the non-wandering set.

*Proof of Theorem A.* Fix $0 < \varepsilon < 1/(2L)$, where $L$ is a common Lipschitz constant for the elements of $\Gamma$. We will show that, for some function $p_\varepsilon$ growing linearly in $n$ (and whose coefficients depend on $\varepsilon$), one has

$$ (4) \qquad s(n, \varepsilon) \le p_{\varepsilon}(n)s_{\Omega}(n, \varepsilon) + p_{\varepsilon}(n). $$

Actually, any function $p_\varepsilon$ with subexponential growth and satisfying such an inequality suffices. Indeed, taking the logarithm of both sides, dividing by $n$, and passing to the limit implies that

$$h_{\Gamma}(G \curvearrowright S^1, \varepsilon) \le h_{\Gamma}(G \curvearrowright \Omega, \varepsilon).$$

Letting $\varepsilon$ go to zero gives

$$h_{\Gamma}(G \curvearrowright S^1) \leq h_{\Gamma}(G \curvearrowright \Omega).$$

Since the opposite inequality is obvious, this shows the desired equality between the entropies.

To show (4), fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. Let $n_{\Omega}$ (resp. $n_{\Omega^c}$) be the number of points in $S$ which are in $\Omega$ (resp. in $\Omega^c$). Obviously, $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$, where $[\cdot]$ denotes integer part. We will show that there exists an $(n, \varepsilon)$-separated set $T$ contained in $\Omega$ and having cardinality $l$. This will obviously give $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$, together with Lemmas 2 and 5, will imply that

$$ \begin{aligned} s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^c} \le n_{\Omega} + tk(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1) \\ &\le s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(4nc_{\varepsilon} + 4n - 1), \end{aligned} $$

thus showing (4).

To show the existence of the set $T$ with the properties above, we proceed in a constructive way. Let us enumerate the connected components of $\Omega^c$ containing points in $S$ in a cyclic way as $I_1, \dots, I_t$. Now for each $1 \le i \le l$ choose a point $t_i \in \Omega$ between $I_{2i-1}$ and $I_{2i}$, and let $T = \{t_1, \dots, t_l\}$. We need to check that, for $i \ne j$, the points $t_i$ and $t_j$ are $(n, \varepsilon)$-separated. Now by construction, for each $i \ne j$ there exist at least two different points $x, y$ in $S$ contained in the interval of smallest length in $S^1$ joining $t_i$ and $t_j$. Since $S$ is $(n, \varepsilon)$-separated, there exist $m \le n$ and $g_{i_1}, \dots, g_{i_m}$ in $\Gamma$ such that $\text{dist}(h(x), h(y)) \ge \varepsilon$, where $h = g_{i_m} \cdots g_{i_2}g_{i_1}$. Unfortunately, because of the topology of the circle, this does not imply that $\text{dist}(h(t_i), h(t_j)) \ge \varepsilon$. However, the proof will be finished if we show that

$$ (5) \quad \text{dist}(g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)) \ge \varepsilon \quad \text{for some } 0 \le r \le m. $$

This claim is obvious if $\text{dist}(t_i, t_j) \ge \varepsilon$. If this is not the case then, by the definition of the constants $\varepsilon$ and $L$, the length of the interval $[g_{i_1}(t_i), g_{i_1}(t_j)]$ is smaller than $1/2$, and hence it coincides with the distance between its endpoints. If this distance is at least $\varepsilon$, then we are done. If not, the same argument shows that the length of the interval $[g_{i_2}g_{i_1}(t_i), g_{i_2}g_{i_1}(t_j)]$ is smaller than $1/2$ and coincides with the distance between its endpoints. If this length is at least $\varepsilon$, then we are done. If not, we continue the procedure. Clearly, there must be some integer $r \le m$ such that the length of the interval $[g_{i_{r-1}} \cdots g_{i_1}(t_i), g_{i_{r-1}} \cdots g_{i_1}(t_j)]$ is smaller than $\varepsilon$, and the one of $[g_{i_r} \cdots g_{i_1}(t_i), g_{i_r} \cdots g_{i_1}(t_j)]$ is greater than or equal to $\varepsilon$. As before, the length of the next interval will be forced to be smaller than $1/2$, and hence it will coincide with the distance between its endpoints. This shows (5) and concludes the proof of Theorem A. $\blacksquare$
|
| 304 |
+
|
| 305 |
+
**5. The proof in the absence of subexponentially distorted elements.** Recall that topological entropy is invariant under topological conjugacy. Therefore, due to [3, Théorème D], in order to prove Theorem B we may assume that $G$ is a group of bi-Lipschitz homeomorphisms. Let $L$ be a common Lipschitz constant for the elements in $\Gamma$. Fix again $0 < \varepsilon < 1/2L$, and let $I_1, \dots, I_k$ be the connected components of $\Omega^c$ having length greater than or equal to $\varepsilon$. Let $h_i$ be a generator for the stabilizer of $I_i$ (with $h_i = \text{Id}$ in case $I_i$ is of type 1). Consider the minimal non-decreasing function $q_\varepsilon$ such that, for each of the non-trivial $h_i$'s, one has $q_\varepsilon(\|h_i^r\|) \ge r$ for all positive $r$. We will show that (4) holds for the function
|
| 306 |
+
|
| 307 |
+
$$p_{\varepsilon}(n) = 2k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) + 1.$$
|
| 308 |
+
|
| 309 |
+
Notice that, by assumption, this function $p_\epsilon$ grows at most subexponentially
|
| 310 |
+
in $n$. Hence, as in the case of Theorem A, inequality (4) allows us to finish
|
| 311 |
+
the proof of the equality between the entropies.
|
| 312 |
+
|
| 313 |
+
The main difficulty in showing (4) in this case is that Lemma 5 is no
|
| 314 |
+
longer available. However, the following still holds.
|
| 315 |
+
|
| 316 |
+
LEMMA 6. If $x_1, \dots, x_m$ are points in a single type 2 connected component $I$ of $\Omega^c$ having length at least $\varepsilon$, and $x_i, x_j$ are $(\varepsilon, n)$-separated for all $i \neq j$, then $m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1)$.
|
| 317 |
+
|
| 318 |
+

*Proof.* Let $I$ be the type 2 connected component of $\Omega^c$ containing $x_1, \dots, x_m$. We may assume that $x_1 < \dots < x_m$. If $f$ is an element in $B_I(n)$ sending $I$ to some $I_i$, then the number of points which are $\varepsilon$-separated by $f$ is less than or equal to $1/\varepsilon + 1$. We claim that the number of elements in $B_I(n)$ sending $I$ to $I_i$ is bounded above by $2q_\varepsilon(2n) + 1$. Indeed, if $g$ also sends $I$ to $I_i$ then $gf^{-1} \in \text{St}(I_i)$, hence $gf^{-1} = h_i^r$ for some $r$. Therefore,

$$2n \geq \|gf^{-1}\| = \|h_i^r\|,$$

and hence

$$q_{\varepsilon}(2n) \ge q_{\varepsilon}(\|h_i^r\|) \ge |r|.$$

Since there are at most $2q_\varepsilon(2n) + 1$ integers $r$ with $|r| \le q_\varepsilon(2n)$, the claim follows. Since the previous arguments apply to each type 2 interval $I_i$, this gives

$$m \le k(1/\varepsilon + 1)(2q_\varepsilon(2n) + 1),$$

thus proving the lemma. $\blacksquare$

To show (4) in the present case, we proceed as in the proof of Theorem A. We fix an $(n, \varepsilon)$-separated set $S$ containing $s(n, \varepsilon)$ points. We let $n_\Omega$

---PAGE_BREAK---

(resp. $n_{\Omega^c}$) be the number of points in $S$ which are in $\Omega$ (resp. in $\Omega^c$), so that $s(n, \varepsilon) = n_{\Omega} + n_{\Omega^c}$. Let $t = t_S$ be the number of connected components of $\Omega^c$ containing points in $S$, and let $l = [t/2]$. As before, one can show that there exists an $(n, \varepsilon)$-separated set contained in $\Omega$ and having cardinality $l$. This will obviously give $s_{\Omega}(n, \varepsilon) \ge l$. The inequalities $t \le 2l+1$ and $n_{\Omega} \le s_{\Omega}(n, \varepsilon)$ still hold. Using Lemmas 2 and 6 one now obtains

$$
\begin{align*}
s(n, \varepsilon) &= n_{\Omega} + n_{\Omega^{c}} \leq n_{\Omega} + tk(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1) \\
&\leq s_{\Omega}(n, \varepsilon) + (2s_{\Omega}(n, \varepsilon) + 1)k(1 + 1/\varepsilon)(2q_{\varepsilon}(2n) + 1).
\end{align*}
$$

This concludes the proof of Theorem B.
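
The counting of $(n, \varepsilon)$-separated points that drives this argument can be made concrete numerically. The sketch below is not from the paper: it uses the doubling map on the circle as an illustrative system, and a greedy routine (our own helper names `dn`, `greedy_separated`) to extract an $(n, \varepsilon)$-separated set under the dynamical metric $d_n(x, y) = \max_{0 \le i < n} d(f^i x, f^i y)$.

```python
# Toy illustration of counting (n, eps)-separated sets for the doubling
# map x -> 2x (mod 1) on the circle R/Z; the greedy set size is a lower
# bound for the maximal cardinality s(n, eps).

def circle_dist(x, y):
    """Arc-length distance on the circle R/Z."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def dn(x, y, n):
    """Dynamical (Bowen) distance over the first n iterates of the doubling map."""
    best = 0.0
    for _ in range(n):
        best = max(best, circle_dist(x, y))
        x, y = (2 * x) % 1.0, (2 * y) % 1.0
    return best

def greedy_separated(points, n, eps):
    """Greedily extract an (n, eps)-separated subset of the given points."""
    chosen = []
    for p in points:
        if all(dn(p, q, n) >= eps for q in chosen):
            chosen.append(p)
    return chosen

grid = [i / 2048 for i in range(2048)]
sizes = [len(greedy_separated(grid, n, 0.1)) for n in range(1, 6)]
print(sizes)  # roughly doubles with each extra iterate, reflecting entropy log 2
```

The exponential growth rate of these counts in $n$ is precisely what the topological entropy measures.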

**Acknowledgments.** I would like to thank Andrés Navas for introducing me to this subject and for his continuous support during this work, which was partially funded by the Research Network on Low Dimensional Dynamical Systems (PBCT-Conicyt's project ADI 17). I would also like to extend my gratitude to both the referee and the editor for pointing out a subtle error in the original version of this paper.

References

[1] D. Calegari and M. Freedman, *Distortion in transformation groups*, Geom. Topol. 10 (2006), 267–293.

[2] J. Cantwell and L. Conlon, *Poincaré–Bendixson theory for leaves of codimension one*, Trans. Amer. Math. Soc. 265 (1981), 181–209.

[3] B. Deroin, V. Kleptsyn et A. Navas, *Sur la dynamique unidimensionnelle en régularité intermédiaire*, Acta Math. 199 (2007), 199–262.

[4] E. Ghys, *Groups acting on the circle*, Enseign. Math. 47 (2001), 329–407.

[5] E. Ghys, R. Langevin et P. Walczak, *Entropie géométrique des feuilletages*, Acta Math. 160 (1988), 105–142.

[6] G. Hector, *Architecture des feuilletages de classe C²*, Astérisque 107–108 (1983), 243–258.

[7] A. Navas, *Groups of Circle Diffeomorphisms*, forthcoming book; Spanish version: Ensaios Matemáticos 13, Braz. Math. Soc., 2007.

[8] —, *Growth of groups and diffeomorphisms of the circle*, Geom. Funct. Anal. 18 (2008), 988–1028.

[9] P. Walczak, *Dynamics of Foliations, Groups and Pseudogroups*, IMPAN Monogr. Math. 64, Birkhäuser, Basel, 2004.

Departamento de Matemáticas
Facultad de Ciencias
Universidad de Chile
Las Palmeras 3425, Ñuñoa
Santiago, Chile
E-mail: ejorquer@u.uchile.cl

Received 15 September 2008;
in revised form 25 February 2009
samples_new/texts_merged/2779026.md
---PAGE_BREAK---

Erdös-Rényi Sequences and Deterministic Construction of Expanding Cayley Graphs

V. Arvind*

Partha Mukhopadhyay†

Prajakta Nimbhorkar†

May 15, 2011

Abstract

Given a finite group $G$ by its multiplication table as input, we give a deterministic polynomial-time construction of a directed Cayley graph on $G$ with $O(\log|G|)$ generators, which has a rapid mixing property and a constant spectral expansion.

We prove a similar result in the undirected case, and give a new deterministic polynomial-time construction of an expanding Cayley graph with $O(\log|G|)$ generators, for any group $G$ given by its multiplication table. This gives a completely different and elementary proof of a result of Wigderson and Xiao [10].

For any finite group $G$ given by a multiplication table, we give a deterministic polynomial-time construction of a cube generating sequence that gives a distribution on $G$ which is arbitrarily close to the uniform distribution. This derandomizes the well-known construction of Erdös-Rényi sequences [2].

# 1 Introduction

Let $G$ be a finite group with $n$ elements, and let $J = \{g_1, g_2, \dots, g_k\}$ be a *generating set* for the group $G$.

The *directed Cayley graph* Cay$(G, J)$ is a directed graph with vertex set $G$ with directed edges of the form $(x, xg_i)$ for each $x \in G$ and $g_i \in J$. Clearly, since $J$ is a generating set for $G$, Cay$(G, J)$ is a strongly connected graph with every vertex of out-degree $k$.

The *undirected Cayley graph* Cay$(G, J \cup J^{-1})$ is an undirected graph on the vertex set $G$ with undirected edges of the form $\{x, xg_i\}$ for each $x \in G$ and $g_i \in J$. Again, since $J$ is a generating set for $G$, Cay$(G, J \cup J^{-1})$ is a connected regular graph of degree $|J \cup J^{-1}|$.
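
These definitions are easy to exercise directly from a multiplication table. A minimal sketch, using the cyclic group $\mathbb{Z}_6$ under addition as a stand-in for the table given as input (the names `mul`, `J`, `edges` are our own):

```python
# Build the directed Cayley graph Cay(G, J) from a multiplication table,
# here for the toy group Z_6 under addition: mul[x][y] is the product x*y.

n = 6
mul = [[(x + y) % n for y in range(n)] for x in range(n)]  # table of Z_6
J = [1, 5]  # generators; 5 is the inverse of 1 in Z_6

# Directed edges (x, x*g) for each x in G and g in J.
edges = [(x, mul[x][g]) for x in range(n) for g in J]

out_deg = [sum(1 for (u, _) in edges if u == x) for x in range(n)]
in_deg = [sum(1 for (_, v) in edges if v == x) for x in range(n)]
print(out_deg, in_deg)  # every vertex has out-degree and in-degree |J|
```

Since $J$ here is closed under inversion, the same edge set read as unordered pairs gives the undirected Cayley graph.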

Let $X = (V, E)$ be an undirected regular $n$-vertex graph of degree $D$. Consider the *normalized adjacency matrix* $A_X$ of the graph $X$. It is a symmetric matrix with largest eigenvalue 1. For $0 < \lambda < 1$, the graph $X$ is an $(n, D, \lambda)$-spectral expander if the second largest eigenvalue of $A_X$, in absolute value, is bounded by $\lambda$.

The study of expander graphs and their properties is of fundamental importance in theoretical computer science; the Hoory-Linial-Wigderson monograph is an excellent source [4] for current

*The Institute of Mathematical Sciences, Chennai, India. Email: arvind@imsc.res.in

†Chennai Mathematical Institute, Siruseri, India. Emails: {partham,prajakta}@cmi.ac.in

---PAGE_BREAK---

developments and applications. A central problem is the explicit construction of expander graph families [4, 5]. By explicit it is meant that the family of graphs has efficient deterministic constructions, where the notion of efficiency is often tailored to a specific application, e.g. [9]. Explicit constructions with the best known (and near optimal) expansion and degree parameters are Cayley expander families (the so-called Ramanujan graphs) [5].

Does every finite group have an expanding generator set? Alon and Roichman, in [1], answered this in the positive using the probabilistic method. Let $G$ be any finite group with $n$ elements. Given any constant $\lambda > 0$, they showed that a random multiset $J$ of size $O(\log n)$ picked uniformly at random from $G$ is, with high probability, a spectral expander with second largest eigenvalue bounded by $\lambda$. In other words, $\text{Cay}(G, J \cup J^{-1})$ is an $O(\log n)$ degree, $\lambda$-spectral expander with high probability. The theorem also gives a polynomial (in $n$) time randomized algorithm for construction of a Cayley expander on $G$: pick the elements of $J$ independently and uniformly at random and check that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander. There is a brute-force deterministic simulation of this that runs in $n^{O(\log n)}$ time by cycling through all candidate sets $J$. Wigderson and Xiao, in [10], give a very interesting $n^{O(1)}$ time derandomized construction based on Chernoff bounds for matrix-valued random variables (and pessimistic estimators). Their result is the starting point of the study presented in this paper.

In this paper, we give an entirely different and elementary $n^{O(1)}$ time derandomized construction that is based on analyzing mixing times of random walks on expanders rather than on their spectral properties. Our construction is conceptually somewhat simpler and also works for directed Cayley graphs.

The connection between mixing times of random walks on a graph and its spectral expansion is well studied. For undirected graphs we have the following.

**Theorem 1.1** [8, Theorem 1] Let $A$ be the normalized adjacency matrix of an undirected graph. For every initial distribution, suppose the distribution obtained after $t$ steps of the random walk following $A$ is $\epsilon$-close to the uniform distribution in the $L_1$ norm. Then the spectral gap $(1 - |\lambda_1|)$ of $A$ is $\Omega(\frac{1}{t} \log(\frac{1}{\epsilon}))$.

In particular, if the graph is $\text{Cay}(G, J \cup J^{-1})$ for any $n$ element group $G$, such that a $C \log n$ step random walk is $\frac{1}{n^c}$-close to the uniform distribution in $L_1$ norm, then the spectral gap is a constant of order $\frac{c}{C}$.
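
The phenomenon behind Theorem 1.1 is easy to observe on a toy example. The sketch below (our own illustration, not from the paper) runs a lazy random walk on the undirected Cayley graph $\text{Cay}(\mathbb{Z}_8, \{+1, -1\})$ and tracks the $L_1$ distance to uniform:

```python
# Lazy random walk on Cay(Z_8, {+1, -1}), started at a point mass:
# the L_1 distance to the uniform distribution decays geometrically,
# at a rate governed by the spectral gap of the walk.

n = 8
dist = [1.0] + [0.0] * (n - 1)  # initial distribution: point mass at 0

def lazy_step(p):
    # Stay put with probability 1/2, else move to a uniform neighbour.
    return [0.5 * p[v] + 0.25 * p[(v - 1) % n] + 0.25 * p[(v + 1) % n]
            for v in range(n)]

l1 = []
for _ in range(60):
    dist = lazy_step(dist)
    l1.append(sum(abs(x - 1.0 / n) for x in dist))

print(l1[0], l1[-1])  # distance to uniform shrinks with each step
```

Reading the decay rate off such a run is exactly the direction exploited in this paper: fast mixing certifies a spectral gap.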

Even for directed graphs a connection between mixing times of random walks and the spectral properties of the underlying Markov chain is known.

**Theorem 1.2** [6, Theorem 5.9] Let $\lambda_{max}$ denote the second largest magnitude (complex valued) eigenvalue of the normalized adjacency matrix $P$ of a strongly connected aperiodic Markov Chain. Then the mixing time is lower bounded by $\tau(\epsilon) \ge \frac{\log(1/2\epsilon)}{\log(1/|\lambda_{max}|)}$, where $\epsilon$ is the difference between the resulting distribution and the uniform distribution in the $L_1$ norm.

In [7], Pak uses this connection to prove an analogue of the Alon-Roichman theorem for directed Cayley graphs: Let $G$ be an $n$ element group and $J = \langle g_1, \dots, g_k \rangle$ consist of $k = O(\log n)$ group elements picked independently and uniformly at random from $G$. Pak shows that for any initial distribution on $G$, the distribution obtained by an $O(\log n)$ steps *lazy random walk* on the directed graph $\text{Cay}(G, J)$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Then, by Theorem 1.2, it follows that the directed Cayley graph $\text{Cay}(G, J)$ has a constant spectral expansion. Crucially, we note

---PAGE_BREAK---

that Pak considers lazy random walks, since his main technical tool is based on *cube generating sequences* for finite groups introduced by Erdös and Rényi in [2].

**Definition 1.3** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of group elements. For any $\delta > 0$, $J$ is said to be a cube generating sequence for $G$ with closeness parameter $\delta$, if the probability distribution $D_J$ on $G$ given by $g_1^{\epsilon_1} \dots g_k^{\epsilon_k}$, where each $\epsilon_i$ is independently and uniformly distributed in $\{0, 1\}$, is $\delta$-close to the uniform distribution in the $L_2$-norm.

Erdös and Rényi [2] proved the following theorem.

**Theorem 1.4** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the distribution on $G$ generated by $J$, i.e. $D_J(x) = \text{Pr}_{\epsilon_i \in_R \{0,1\},\, 1 \le i \le k} [g_1^{\epsilon_1} \dots g_k^{\epsilon_k} = x]$ for $x \in G$, and $U$ be the uniform distribution on $G$. Then the expected value $\mathbb{E}_J \|D_J - U\|_2^2 = \frac{1}{2^k}\left(1 - \frac{1}{n}\right)$.

In particular if we choose $k = O(\log n)$, the resulting distribution $D_J$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution in $L_2$ norm.
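
The equality in Theorem 1.4 is exact, so it can be checked by brute force on tiny groups: averaging $\|D_J - U\|_2^2$ over *all* $k$-tuples $J$ from $\mathbb{Z}_n$ must reproduce $\frac{1}{2^k}(1 - \frac{1}{n})$. A sketch with exact rational arithmetic (the function name `expected_sq_distance` is ours):

```python
# Brute-force check of the Erdös-Rényi expectation for cyclic groups Z_n:
# average ||D_J - U||_2^2 over all k-tuples J, using exact fractions.

from fractions import Fraction
from itertools import product

def expected_sq_distance(n, k):
    total = Fraction(0)
    for J in product(range(n), repeat=k):  # all k-tuples of elements of Z_n
        counts = [0] * n
        for eps in product((0, 1), repeat=k):  # all 2^k subcube products
            x = sum(e * g for e, g in zip(eps, J)) % n
            counts[x] += 1
        # D_J(x) = counts[x] / 2^k and U(x) = 1/n
        total += sum((Fraction(c, 2 ** k) - Fraction(1, n)) ** 2 for c in counts)
    return total / n ** k

for n, k in [(2, 1), (3, 2), (4, 3)]:
    assert expected_sq_distance(n, k) == Fraction(1, 2 ** k) * (1 - Fraction(1, n))
print("Theorem 1.4 verified exactly for Z_2, Z_3, Z_4")
```

The same identity holds for any finite group; the cyclic groups here are only the cheapest instances to enumerate.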

## Our Results

Let $G$ be a finite group with $n$ elements given by its multiplication table. Our first result is a derandomization of a result of Pak [7]. We show a deterministic polynomial-time construction of a generating set $J$ of size $O(\log |G|)$ such that a lazy random walk on Cay$(G, J)$ mixes fast. Throughout the paper, we measure the distance between two distributions in $L_2$ norm.

**Theorem 1.5** For any constant $c > 1$, there is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for the given group $G$, such that given any initial distribution on $G$ the lazy random walk of $O(\log n)$ steps on the directed Cayley graph Cay$(G, J)$ yields a distribution that is $\frac{1}{n^c}$-close (in $L_2$ norm) to the uniform distribution.

Theorem 1.5 and Theorem 1.2 together yield the following corollary.

**Corollary 1.6** Given a finite group $G$ and any $\epsilon > 0$, there is a deterministic polynomial-time algorithm to construct an $O(\log n)$ size generating set $J$ such that Cay$(G, J)$ is a spectral expander (i.e. its second largest eigenvalue in absolute value is bounded by $\epsilon$).

Our next result yields an alternative proof of the Wigderson-Xiao result [10]. In order to carry out an approach similar to the proof of Theorem 1.5 for undirected Cayley graphs, we need a suitable generalization of cube generating sequences, and in particular, a generalization of [2]. Using this generalization, we can give a deterministic poly($n$) time algorithm to compute $J = \langle g_1, g_2, \dots, g_k \rangle$ where $k = O(\log n)$ such that a lazy random walk of length $O(\log n)$ on Cay$(G, J \cup J^{-1})$ is $\frac{1}{\text{poly}(n)}$-close to the uniform distribution. Here the lazy random walk is described by the symmetric transition matrix $A_J = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$ where $P_J$ and $P_{J^{-1}}$ are the adjacency matrices of the Cayley graphs Cay$(G, J)$ and Cay$(G, J^{-1})$ respectively.

**Theorem 1.7** Let $G$ be a finite group of order $n$ and $c > 1$ be any constant. There is a deterministic poly($n$) time algorithm that computes a generating set $J$ of size $O(\log n)$ for $G$, such that an $O(\log n)$ step lazy random walk on $G$, governed by the transition matrix $A_J$ described above, is $\frac{1}{n^c}$-close to the uniform distribution, for any given initial distribution on $G$.

---PAGE_BREAK---

Theorem 1.7 and the connection between mixing time and spectral expansion for undirected graphs given by Theorem 1.1 yield the following.

**Corollary 1.8 (Wigderson-Xiao)** [10] Given a finite group $G$ by its multiplication table, there is a deterministic polynomial (in $|G|$) time algorithm to construct a generating set $J$ such that $\text{Cay}(G, J \cup J^{-1})$ is a spectral expander.

Finally, we show that the construction of cube generating sequences can also be done in deterministic polynomial time.

**Theorem 1.9** For any constant $c > 1$, there is a deterministic polynomial (in $n$) time algorithm that outputs a cube generating sequence $J$ of size $O(\log n)$ such that the distribution $D_J$ on $G$, defined by the cube generating sequence $J$, is $\frac{1}{n^c}$-close to the uniform distribution.

## 1.1 Organization of the paper

The paper is organized as follows. We prove Theorem 1.5 and Corollary 1.6 in Section 2. The proofs of Theorem 1.7 and Corollary 1.8 are given in Section 3. We prove Theorem 1.9 in Section 4. Finally, we summarize in Section 5.

# 2 Expanding Directed Cayley Graphs

Let $D_1$ and $D_2$ be two probability distributions over the finite set $\{1, 2, \dots, n\}$. We use the $L_2$ norm to measure the distance between the two distributions: $$ ||D_1 - D_2||_2 = \left[ \sum_{x \in [n]} |D_1(x) - D_2(x)|^2 \right]^{\frac{1}{2}}. $$ Let $U$ denote the uniform distribution on $[n]$. We say that a distribution $D$ is $\delta$-close to the uniform distribution if $$ ||D - U||_2 \le \delta. $$

**Definition 2.1** The collision probability of a distribution $D$ on $[n]$ is defined as $\text{Coll}(D) = \sum_{i \in [n]} D(i)^2$. It is easy to see that $\text{Coll}(D) \le 1/n + \delta$ if and only if $||D - U||_2^2 \le \delta$ and $\text{Coll}(D)$ attains its minimum value $1/n$ only for the uniform distribution.
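
The equivalence in Definition 2.1 comes from the identity $\text{Coll}(D) = 1/n + \|D - U\|_2^2$, which a few lines of arithmetic confirm (the helper names `coll` and `sq_dist_to_uniform` are ours):

```python
# Numeric check of the identity Coll(D) = 1/n + ||D - U||_2^2, which makes
# the collision probability a proxy for L_2 distance to uniform.

def coll(D):
    return sum(p * p for p in D)

def sq_dist_to_uniform(D):
    n = len(D)
    return sum((p - 1.0 / n) ** 2 for p in D)

D = [0.5, 0.25, 0.125, 0.125]  # an arbitrary distribution on 4 points
n = len(D)
assert abs(coll(D) - (1.0 / n + sq_dist_to_uniform(D))) < 1e-12
assert coll([1.0 / n] * n) == 1.0 / n  # minimum attained at uniform
print(coll(D), sq_dist_to_uniform(D))
```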

We prove Theorem 1.5 by giving a deterministic construction of a cube generating sequence $J$ such that a random walk on $\text{Cay}(G, J)$ mixes in $O(\log n)$ steps. We first describe a randomized construction in Section 2.1, which shows the existence of such a sequence. The construction is based on the analysis of [7]. This is then derandomized in Section 2.2.

## 2.1 Randomized construction

For a sequence of group elements $J = \langle g_1, \dots, g_k \rangle$, we consider the Cayley graph $\text{Cay}(G, J)$, which is, in general, a directed multigraph in which both in-degree and out-degree of every vertex is $k$. Let $A$ denote the adjacency matrix of $\text{Cay}(G, J)$. The lazy random walk is defined by the probability transition matrix $\frac{1}{2}(I + \frac{1}{k}A)$, where $I$ is the identity matrix and $\frac{1}{k}A$ is the normalized adjacency matrix. Let $Q_J$ denote the probability distribution obtained after $m$ steps of the lazy random walk. Pak [7] has analyzed the distribution $Q_J$ and shown that for a random $J$ of $O(\log n)$ size and $m = O(\log n)$, $Q_J$ is $1/n^{O(1)}$-close to the uniform distribution. We note that Pak works with the $L_\infty$ norm. Our aim is to give an efficient deterministic construction of $J$. It turns out that, for our purposes, the $L_2$ norm and the collision probability

---PAGE_BREAK---

are the right tools to work with since we can compute these quantities exactly as we fix elements of $J$ one by one.

Consider any length-$m$ sequence $I = \langle i_1, \dots, i_m \rangle \in [k]^m$, where the $i_j$'s are indices that refer to elements in the set $J$. Let $R_I^J$ denote the following probability distribution on $G$. For each $x \in G$: $R_I^J(x) = \text{Pr}_{\bar{\epsilon}}[g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = x]$, where $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_m)$ and each $\epsilon_i \in \{0, 1\}$ is picked independently and uniformly at random. Notice that for each $x \in G$ we have: $Q_J(x) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(x)$.

Further, notice that $R_I^J$ is precisely the probability distribution defined by the cube generating sequence $\langle g_{i_1}, g_{i_2}, \dots, g_{i_m} \rangle$, and the above equation states that the distribution $Q_J$ is the average over all $I \in [k]^m$ of the $R_I^J$.

In general, the indices in $I \in [k]^m$ are not distinct. Let $L(I)$ denote the sequence of distinct indices occurring in $I$, in the order of their first occurrence in $I$, from left to right. We refer to $L(I)$ as the L-subsequence of $I$.  Clearly, the sequence $L(I)$ will itself define a probability distribution $R_{L(I)}^J$ on the group $G$.
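
The L-subsequence is just "distinct indices, in order of first occurrence"; a tiny helper (our own name `l_subsequence`) makes the definition concrete:

```python
# L(I): keep each distinct index of I at its first occurrence, scanning
# the sequence left to right.

def l_subsequence(I):
    seen, L = set(), []
    for i in I:
        if i not in seen:
            seen.add(i)
            L.append(i)
    return L

print(l_subsequence([2, 1, 2, 3, 1, 4]))  # -> [2, 1, 3, 4]
```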

Suppose the elements of $J$ are picked independently and uniformly at random from $G$. The following lemma shows, for any $I \in [k]^m$, that if $R_{L(I)}^J$ is $\delta$-close to the uniform distribution (in $L_2$ norm) in expectation, then so is $R_I^J$. We state it in terms of collision probabilities.

**Lemma 2.2** For a fixed $I$, if $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] \leq 1/n + \delta$ then $\mathbb{E}_J[\text{Coll}(R_I^J)] = \mathbb{E}_J[\sum_{g \in G} R_I^J(g)^2] \leq 1/n + \delta$.

A proof of Lemma 2.2 is given in the appendix to keep our presentation self-contained. A similar lemma for the $L_\infty$ norm is shown in [7, Lemma 1] (though it is not stated there in terms of the expectation).

When elements of $J$ are picked uniformly and independently from $G$, by Theorem 1.4, $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} R_{L(I)}^J(g)^2] = \frac{1}{n} + \frac{1}{2^\ell}(1 - \frac{1}{n})$, where $\ell$ is the length of the L-subsequence. Thus the expectation is small provided $\ell$ is large enough. It turns out that most $I \in [k]^m$ have sufficiently long L-subsequences (Lemma 2.3). A similar result appears in [7]. We give a proof of Lemma 2.3 in the appendix.

**Lemma 2.3** [7] Let $a = \frac{k}{\ell-1}$. The probability that a sequence of length $m$ over $[k]$ does not have an L-subsequence of length $\ell$ is at most $\frac{(ae)^{\frac{k}{a}}}{a^m}$.

To ensure the above probability is bounded by $\frac{1}{2^m}$, it suffices to choose $m > \frac{(k/a) \log(ae)}{\log(a/2)}$.

In the following lemma (which is again an $L_2$ norm version of a similar statement from [7]), we observe that the expected distance from the uniform distribution is small when $I \in [k]^m$ is picked uniformly at random. The proof of the lemma is given in the appendix.

**Lemma 2.4** $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \leq \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$.

We can make $\frac{1}{2^{\Theta(m)}} < \frac{1}{n^c}$ for some $c > 0$ by choosing $m = O(\log n)$. That also fixes $k$ to be $O(\log n)$ suitably.

---PAGE_BREAK---

## 2.2 Deterministic construction

Our goal is to compute, for any given constant $c > 0$, a multiset $J$ of $k$ group elements of $G$ such that $\text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le 1/n + 1/n^c$, where both $k$ and $m$ are $O(\log n)$. For each $J$ observe, by the Cauchy-Schwarz inequality, that

$$ \text{Coll}(Q_J) = \sum_{g \in G} Q_J(g)^2 \le \sum_{g \in G} \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(g)^2 = \frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J). \quad (1) $$

Our goal can now be restated: it suffices to construct in deterministic polynomial time a multiset $J$ of group elements such that the average collision probability $\frac{1}{k^m} \sum_{I \in [k]^m} \text{Coll}(R_I^J) \le 1/n + 1/n^c$.

Consider the random set $J = \{X_1, \dots, X_k\}$ with each $X_i$ a uniformly and independently distributed random variable over $G$. Combined with the proof of Lemma 2.4 (in particular from Equation 17), we observe that for any constant $c > 1$ there are $k$ and $m$, both $O(\log n)$, such that

$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le \frac{1}{n} + \frac{1}{n^c}. \quad (2) $$

Our deterministic algorithm will fix the elements in $J$ in stages. At stage 0 the set $J = J_0 = \{X_1, X_2, \dots, X_k\}$ consists of independent random elements $X_i$ drawn from the group $G$. Suppose at the $j^{th}$ stage, for $j < k$, the set we have is $J = J_j = \{x_1, x_2, \dots, x_j, X_{j+1}, \dots, X_k\}$, where each $x_r$ $(1 \le r \le j)$ is a fixed element of $G$ and the $X_s$ $(j+1 \le s \le k)$ are independent random elements of $G$ such that

$$ \mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c. $$

**Remark.**

1. In the above expression, the expectation is over the random elements of $J$.

2. If we can compute in poly($n$) time a choice $x_{j+1}$ for $X_{j+1}$ such that $\mathbb{E}_J[\mathbb{E}_{I \in [k]^m} \text{Coll}(R_I^J)] \le 1/n + 1/n^c$ then we can compute the desired generating set $J$ in polynomial (in $n$) time.
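
The stage-wise strategy above is the method of conditional expectations. The sketch below is only an illustration on a toy instance (a cube sequence for $\mathbb{Z}_5$), with the conditional expectation computed by brute-force averaging over all completions; the paper's point is that the relevant expectations can instead be computed in polynomial time. The names `sq_dist` and `cond_expectation` are ours.

```python
# Toy derandomization by conditional expectations: fix the elements of a
# cube generating sequence for Z_n one at a time, each time choosing the
# value minimizing the conditional expectation of ||D_J - U||_2^2
# (brute-forced here over all uniform completions of the prefix).

from fractions import Fraction
from itertools import product

n, k = 5, 4  # toy parameters: group Z_5, sequence length 4

def sq_dist(J):
    counts = [0] * n
    for eps in product((0, 1), repeat=len(J)):
        counts[sum(e * g for e, g in zip(eps, J)) % n] += 1
    m = 2 ** len(J)
    return sum((Fraction(c, m) - Fraction(1, n)) ** 2 for c in counts)

def cond_expectation(prefix):
    """Average of sq_dist over all uniform completions of the prefix."""
    rest = k - len(prefix)
    vals = [sq_dist(prefix + list(tail)) for tail in product(range(n), repeat=rest)]
    return sum(vals, Fraction(0)) / len(vals)

J = []
for _ in range(k):  # greedily fix one element per stage
    J.append(min(range(n), key=lambda g: cond_expectation(J + [g])))

# The greedy choice never exceeds the initial expectation of Theorem 1.4.
assert sq_dist(J) <= Fraction(1, 2 ** k) * (1 - Fraction(1, n))
print(J, sq_dist(J))
```

Since the minimum over a stage is at most the average, the final fixed sequence is at least as good as a random one in expectation, which is exactly the invariant maintained above.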

Given $J = J_j = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ with $j$ fixed elements and $k-j$ random elements, it is useful to partition the set of sequences $[k]^m$ into subsets $S_{r,\ell}$ where $I \in S_{r,\ell}$ if and only if there are exactly $r$ indices in $I$ from $\{1, \dots, j\}$, and of the remaining $m-r$ indices of $I$ there are exactly $\ell$ distinct indices. We now define a suitable generalization of L-subsequences.

**Definition 2.5** An $(r, \ell)$-normal sequence for $J$ is a sequence $\langle n_1, n_2, \dots, n_r, \dots, n_{r+\ell} \rangle \in [k]^{r+\ell}$ such that the indices $n_s$, $1 \le s \le r$, are in $\{1, 2, \dots, j\}$ and the indices $n_s$, $s > r$, are all distinct and in $\{j+1, \dots, k\}$. I.e. the first $r$ indices (possibly with repetition) are from the fixed part of $J$ and the last $\ell$ are all distinct elements from the random part of $J$.

**Transforming $S_{r,\ell}$ to $(r, \ell)$-normal sequences**

We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform in $G$.
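
This fact holds because conjugation by a fixed $x$ is a bijection of $G$, so it maps the uniform distribution to itself. A quick check in a nonabelian group, $S_3$ represented as permutation tuples (the helpers `compose` and `inverse` are ours):

```python
# For any fixed x in G, the map y -> x*y*x^{-1} is a bijection of G,
# hence preserves the uniform distribution. Checked here for G = S_3.

from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))  # the 6 elements of S_3

for x in G:
    conjugates = {compose(compose(x, y), inverse(x)) for y in G}
    assert conjugates == set(G)  # conjugation by x permutes G

print("conjugation by any fixed x is a bijection of S_3")
```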
|
| 165 |
+
|
| 166 |
+
Let $I = \langle i_1, \dots, i_m \rangle \in S_{r,\ell}$ be a sequence. Let $F = \langle i_{f_1}, \dots, i_{f_r} \rangle$ be the subsequence of indices for the fixed elements in $I$. Let $R = \langle i_{s_1}, \dots, i_{s_{m-r}} \rangle$ be the subsequence of indices for the random elements in $I$, and $L = \langle i_{e_1}, \dots, i_{e_\ell} \rangle$ be the L-subsequence in $R$. More precisely, notice that $R$ is a
|
| 167 |
+
---PAGE_BREAK---
|
| 168 |
+
|
| 169 |
+
sequence in {$j+1, \dots, k$$}^{m-r}$ and $L$ is the L-subsequence for $R$. The $(r, \ell)$ normal sequence $\hat{I}$ of
|
| 170 |
+
$I \in S_{r,\ell}$ is the sequence $\langle i_{f_1}, \dots, i_{f_r}, i_{e_1}, \dots, i_{e_\ell} \rangle$.
|
| 171 |
+
|
| 172 |
+
We recall here that the multiset $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ is defined as before. For ease of notation we denote the list of elements of $J$ by $g_t$, $1 \le t \le k$. I.e. $g_t = x_t$ for $t \le j$ and $g_t = X_t$ for $t > j$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m}$ where $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random. Then we can write
|
| 173 |
+
|
| 174 |
+
$$
|
| 175 |
+
\begin{aligned}
|
| 176 |
+
g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} &= z_0 g_{i_{f_1}}^{\epsilon_{f_1}} z_1 g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \cdots z_{r-1} g_{i_{fr}}^{\epsilon_{fr}} z_r, && \text{where} \\
|
| 177 |
+
z_0 z_1 \cdots z_r &= g_{i_{s_1}}^{\epsilon_{s_1}} g_{i_{s_2}}^{\epsilon_{s_2}} \cdots g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}}.
|
| 178 |
+
\end{aligned}
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
By conjugation, we can rewrite the above expression as $g_{i_{f_1}}^{\epsilon_{f_1}} z z_1 g_{i_{f_2}}^{\epsilon_{f_2}} z_2 \dots g_{i_{fr}}^{\epsilon_{fr}} z_r$, where $z = g_{i_{f_1}}^{-\epsilon_{f_1}} z_0 g_{i_{f_1}}^{\epsilon_{f_1}}$.
|
| 182 |
+
|
| 183 |
+
We refer to this transformation as moving $g_{i_{f_1}}^{\epsilon_{f_1}}$ to the left. Successively moving the elements
|
| 184 |
+
$g_{i_{f_1}}^{\epsilon_{f_1}}$, $g_{i_{f_2}}^{\epsilon_{f_2}}$, ..., $g_{i_{fr}}^{\epsilon_{fr}}$ to the left we can write
|
| 185 |
+
|
| 186 |
+
$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_1}^{\epsilon_{f_1}} \cdots g_{i_r}^{\epsilon_{fr}} z'_0 z'_1 \cdots z'_r, $$
|
| 187 |
+
|
| 188 |
+
where each $z'_t = u_t z_t u_t^{-1}$, and $u_t$ is a product of elements from the fixed element set $\{x_1, \dots, x_j\}$. Notice that each $z_t$ is a product of some consecutive sequence of elements from $\langle g_{i_{s_1}}^{\epsilon_{s_1}}, g_{i_{s_2}}^{\epsilon_{s_2}}, \dots, g_{i_{s_{m-r}}}^{\epsilon_{s_{m-r}}} \rangle$. If $z_t = \prod_{a=b}^{c} g_{i_{sa}}^{\epsilon_{sa}}$ then $z'_t = \prod_{a=b}^{c} u_t g_{i_{sa}}^{\epsilon_{sa}} u_t^{-1}$. Thus, the product $z'_0 z'_1 \dots z'_r$, is of the form
|
| 189 |
+
|
| 190 |
+
$$ z'_0 z'_1 \dots z'_r = \prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}, $$
|
| 191 |
+
|
| 192 |
+
where each $h_{s_a} = y_a g_{i_{s_a}} y_a^{-1}$ for some element $y_a \in G$. In this expression, observe that for distinct indices $a$ and $b$ we may have $i_{s_a} = i_{s_b}$ but $y_a \neq y_b$; hence, in general, $h_{s_a} \neq h_{s_b}$.

Recall that the L-subsequence $L = (i_{e_1}, \dots, i_{e_\ell})$ is a subsequence of $R = (i_{s_1}, \dots, i_{s_{m-r}})$. Consequently, let $(h_{e_1}, h_{e_2}, \dots, h_{e_\ell})$ be the sequence of all *independent* random elements in the above product $\prod_{a=1}^{m-r} h_{s_a}^{\epsilon_{s_a}}$ that correspond to the L-subsequence. To this product we again apply the transformation of moving to the left the elements $h_{e_1}^{\epsilon_{e_1}}, h_{e_2}^{\epsilon_{e_2}}, \dots, h_{e_\ell}^{\epsilon_{e_\ell}}$, in that order. Putting it all together, we have

$$ g_{i_1}^{\epsilon_1} \cdots g_{i_m}^{\epsilon_m} = g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} y(\bar{\epsilon}), $$

where $y(\bar{\epsilon})$ is an element in $G$ that depends on $J$, $I$ and $\bar{\epsilon}$, where $\bar{\epsilon}$ consists of all the $\epsilon_j$ for $j \in I \setminus (F \cup L)$. Let $J(I)$ denote the multiset of group elements obtained from $J$ by replacing the subset $\{g_{i_{e_1}}, g_{i_{e_2}}, \dots, g_{i_{e_\ell}}\}$ with $\{h_{e_1}, h_{e_2}, \dots, h_{e_\ell}\}$. It follows from our discussion that $J(I)$ has exactly $j$ fixed elements $x_1, x_2, \dots, x_j$ and $k-j$ uniformly distributed independent random elements. Recall that $\hat{I} = (i_{f_1}, i_{f_2}, \dots, i_{f_r}, i_{e_1}, i_{e_2}, \dots, i_{e_\ell})$ is the $(r, \ell)$-normal sequence for $I$. Analogous to Lemma 2.2, we now compare the probability distributions $R_I^J$ and $\hat{R}_I^{J(I)}$. The proof of the lemma is in the appendix.

**Lemma 2.6** For each $j \le k$ and $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ (where $x_1, \dots, x_j \in G$ are fixed elements and $X_{j+1}, \dots, X_k$ are independent uniformly distributed in $G$), and for each $I \in [k]^m$, $\mathbb{E}_J[\text{Coll}(R_I^J)] \le \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$.

**Remark 2.7** Here it is important to note that the expectation $\mathbb{E}_J[\text{Coll}(R_I^J)]$ is over the random elements in $J$. On the other hand, the expectation $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ is over the random elements in $J(I)$ (which are conjugates of the random elements in $J$). In the rest of this section, we need to keep this meaning clear when we use $\mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ for different $I \in [k]^m$.

By averaging the above inequality over all $I$ sequences and using Equation 1, we get

$$ \mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]. \quad (3) $$

Now, by Equation 2 and following the proof of Lemma 2.4, when all $k$ elements in $J$ are random we have $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$. Suppose that for any $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$ we can compute $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})]$ in deterministic polynomial (in $n$) time. Then, given the bound $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le 1/n + 1/n^c$ for $J = \{x_1, \dots, x_j, X_{j+1}, \dots, X_k\}$, we can fix the $(j+1)^{st}$ element of $J$ by choosing $X_{j+1} := x_{j+1}$ to be the group element that minimizes this expectation. Also, it follows easily from Equation 3 and the above lemma that $\mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ implies $\mathbb{E}_J[\text{Coll}(Q_J)] \le \mathbb{E}_J \mathbb{E}_{I \in [k]^m} [\text{Coll}(R_I^J)] \le \delta$. In particular, when $J$ is completely fixed after $k$ stages, if $\mathbb{E}_{I \in [k]^m} [\text{Coll}(\hat{R}_I^{J(I)})] \le \delta$ then $\text{Coll}(Q_J) \le \delta$.

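The stage-by-stage fixing described above is an instance of the method of conditional probabilities. The following is a minimal sketch of the generic loop; here `estimator` is a hypothetical stand-in (not a name from the paper) for any polynomial-time evaluation of the pessimistic estimator $\mathbb{E}_J \mathbb{E}_{I \in [k]^m}[\text{Coll}(\hat{R}_I^{J(I)})]$ given the already-fixed prefix of $J$.

```python
def fix_next_element(group_elements, fixed, estimator):
    """Pick the next fixed element of J: the choice that minimizes the
    (assumed polynomial-time computable) pessimistic estimator."""
    return min(group_elements, key=lambda x: estimator(fixed + [x]))

def derandomize(group_elements, k, estimator):
    """Fix all k elements of J, one per stage, as described in the text."""
    fixed = []
    for _ in range(k):
        fixed.append(fix_next_element(group_elements, fixed, estimator))
    return fixed
```

By the averaging argument, each stage cannot increase the estimator, so the fully fixed $J$ inherits the bound that holds when all $k$ elements are random.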
**Remark 2.8** In fact, the quantity $\mathbb{E}_{I \in [k]^m}[\text{Coll}(\hat{R}_I^{J(I)})]$ plays the role of a pessimistic estimator for $\mathbb{E}_{I \in [k]^m}[\text{Coll}(R_I^J)]$.

We now proceed to explain the algorithm that fixes $X_{j+1}$. To this end, it is useful to rewrite the expectation as

$$
\begin{align}
\mathbb{E}_J \mathbb{E}_I [\text{Coll}(\hat{R}_I^{J(I)})] &= \frac{1}{k^m} \left[ \sum_{r,\ell} \sum_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})] \right] \\
&= \sum_{r,\ell} \frac{|S_{r,\ell}|}{k^m} \mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J [\text{Coll}(\hat{R}_I^{J(I)})] \tag{4}
\end{align}
$$

For any $r, \ell$ the size of $S_{r,\ell}$ is computable in polynomial time (Lemma 2.9). We include a proof in the appendix.

**Lemma 2.9** For each $r$ and $\ell$, $|S_{r,\ell}|$ can be computed in time polynomial in $n$.

Since $r$ and $\ell$ are $O(\log n)$, it is clear from Equation 4 that it suffices to compute $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ in polynomial time for any given $r$ and $\ell$. We reduce this computation to counting the number of paths in weighted directed acyclic graphs. To make the reduction clear, we simplify the expression $\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J[\text{Coll}(\hat{R}_I^{J(I)})]$ as follows.

Let $\bar{u}$ be a sequence of length $r$ from the fixed elements $x_1, x_2, \dots, x_j$. We identify $\bar{u}$ with an element of $[j]^r$. The number of $I$ sequences in $S_{r,\ell}$ that have $\bar{u}$ as the prefix in the $(r, \ell)$-normal sequence $\hat{I}$ is $\frac{|S_{r,\ell}|}{j^r}$. Recall that $R_{\hat{I}}^{J(I)}(g) = \text{Prob}_{\bar{\epsilon}}[g_{i_{f_1}}^{\epsilon_1} \cdots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \cdots h_{e_\ell}^{\epsilon_{r+\ell}} = g]$. Let $\bar{u} = (g_{i_{f_1}}, \dots, g_{i_{f_r}})$. It is convenient to denote the element $g_{i_{f_1}}^{\epsilon_1} \cdots g_{i_{f_r}}^{\epsilon_r} h_{e_1}^{\epsilon_{r+1}} \cdots h_{e_\ell}^{\epsilon_{r+\ell}}$ by $M(\bar{u}, \bar{\epsilon}, \hat{I}, J)$.

Let $\bar{\epsilon} = (\epsilon_1, \dots, \epsilon_{r+\ell})$ and $\bar{\epsilon}' = (\epsilon'_1, \dots, \epsilon'_{r+\ell})$ be picked uniformly at random from $\{0, 1\}^{r+\ell}$. Then

$$
\begin{align}
\mathrm{Coll}(R_{\hat{I}}^{J(I)}) &= \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \\
&= \mathrm{Prob}_{\bar{\epsilon}, \bar{\epsilon}'} [M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)]. \tag{5}
\end{align}
$$

For fixed $\bar{\epsilon}, \bar{\epsilon}'$ and $\bar{u} \in [j]^r$, let $S_{r,\ell}^{\bar{u}}$ be the set of all $I \in S_{r,\ell}$ such that the subsequence of indices of $I$ for the fixed elements $\{x_1, x_2, \dots, x_j\}$ is precisely $\bar{u}$. Notice that $|S_{r,\ell}^{\bar{u}}| = \frac{|S_{r,\ell}|}{j^r}$.

Then we have the following.

$$
\mathbb{E}_{I \in S_{r,\ell}} \mathbb{E}_J \left[ \sum_{g \in G} (R_{\hat{I}}^{J(I)}(g))^2 \right] = \frac{1}{2^{2(\ell+r)}} \left[ \sum_{\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{\ell+r}} \frac{1}{|S_{r,\ell}|} \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J [\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \quad (6)
$$

where $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ is a 0-1 indicator random variable that takes value 1 when $M(\bar{u},\bar{\epsilon},\hat{I},J) = M(\bar{u},\bar{\epsilon}',\hat{I},J)$ and 0 otherwise. Crucially, we note the following:

**Claim 2.10** For each $I \in S_{r,\ell}^{\bar{u}}$ and for fixed $\bar{\epsilon}, \bar{\epsilon}'$, the random variables $\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}$ are identically distributed.

The claim follows from the fact that for each $I \in S_{r,\ell}^{\bar{u}}$, the fixed part in $\hat{I}$ is $\bar{u}$ and elements in the unfixed part are identically and uniformly distributed in $G$. We simplify the expression in Equation 6 further.

$$
\begin{align}
\frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \sum_{I \in S_{r,\ell}^{\bar{u}}} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] &= \frac{1}{|S_{r,\ell}|} \left[ \sum_{\bar{u} \in [j]^r} \frac{|S_{r,\ell}|}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \right] \tag{7} \\
&= \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u},\bar{\epsilon},\hat{I},J)=M(\bar{u},\bar{\epsilon}',\hat{I},J)}] \tag{8}
\end{align}
$$

where Equation 7 follows from Claim 2.10. Let $p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}')$ be the number of different assignments of $\ell$ random elements in $J$ such that $M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)$. Then it is easy to see that

$$
\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} \mathbb{E}_J[\chi_{M(\bar{u}, \bar{\epsilon}, \hat{I}, J) = M(\bar{u}, \bar{\epsilon}', \hat{I}, J)}] = \sum_{\bar{u}} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^\ell}, \quad (9)
$$

where the factor $\frac{1}{n^\ell}$ accounts for the fact that $\ell$ unfixed elements of $J$ are picked uniformly and independently at random from the group $G$.

Notice that $2^{r+\ell} \le 2^m = n^{O(1)}$ for $m = O(\log n)$ and $\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{r+\ell}$. Then, combining Equations 4 and 9, it is clear that to compute $\mathbb{E}_J \mathbb{E}_I[\text{Coll}(\hat{R}_I^{J(I)})]$ in polynomial time, it suffices to compute $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ (for fixed $r, \ell, \bar{\epsilon}, \bar{\epsilon}'$) in polynomial time. We now turn to this problem.

## 2.3 Reduction to counting paths in weighted DAGs

We will interpret the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \right] \frac{1}{n^{\ell}}$ as the sum of weights of paths between a source vertex $s$ and a sink vertex $t$ in a layered weighted directed acyclic graph $H = (V, E)$. The vertex set $V$ is $(G \times G \times [r+\ell+1]) \cup \{s,t\}$, and $s = (e, e, 0)$, where $e$ is the identity element in $G$. The source vertex $s$ is at the $0$-th layer and the sink $t$ is at the $(r + \ell + 2)$-th layer. Let $S = \{x_1, x_2, \dots, x_j\}$. The edge set is the union $E = E_s \cup E_S \cup E_{G\setminus S} \cup E_t$, where

$$
\begin{align*}
E_s &= \{(s, (g, h, 1)) \mid g, h \in G\} \\
E_S &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in S, 1 \le t \le r\}, \\
E_{G\setminus S} &= \{((g, h, t), (gx^{\epsilon_t}, hx^{\epsilon'_t}, t+1)) \mid g, h \in G, x \in G, r < t \le r+\ell\}, \text{ and} \\
E_t &= \{((g, g, r+\ell+1), t) \mid g \in G\}.
\end{align*}
$$

All edges in $E_s$ and $E_t$ have weight 1. Each edge in $E_S$ has weight $\frac{1}{j}$. Each edge in $E_{G\setminus S}$ has weight $\frac{1}{n}$.

Each $s$-to-$t$ directed path in the graph $H$ corresponds to an $(r, \ell)$-normal sequence $\hat{I}$ (corresponding to some $I \in S_{r,\ell}$), along with an assignment of group elements to the $\ell$ distinct independent random elements that occur in it. For a random $I \in S_{r,\ell}$, the group element corresponding to each of the $r$ “fixed” positions is from $\{x_1, x_2, \dots, x_j\}$ with probability $1/j$ each. Hence each edge in $E_S$ has weight $1/j$. Similarly, the $\ell$ distinct indices in $I$ (from $\{X_{j+1}, \dots, X_k\}$) are assigned group elements independently and uniformly at random. Hence each edge in $E_{G\setminus S}$ has weight $\frac{1}{n}$.

The weight of an $s$-to-$t$ path is the product of the weights of the edges on the path. The graph depends on $j$, $\bar{\epsilon}$, and $\bar{\epsilon}'$. So, for fixed $r, \ell$, we denote it as $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$. The following claim is immediate from Equation 9.

**Claim 2.11** *The sum of weights of all $s$-to-$t$ paths in $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ is $\sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}}$.*

In the following lemma we observe that $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in polynomial time. The proof is easy.

**Lemma 2.12** *For each $j, \bar{\epsilon}, \bar{\epsilon}', r, \ell$, the quantity $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$ can be computed in time polynomial in $n$.*

**Proof:** The graph $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ has $n^2$ vertices in each intermediate layer. For each $1 \le t \le r+\ell+2$, we define a matrix $M_{t-1}$ whose rows are indexed by the vertices of layer $t-1$ and columns by the vertices of layer $t$, and the $(a,b)^{th}$ entry of $M_{t-1}$ is the weight of the edge $(a,b)$ in the graph $H_{r,\ell}(j, \bar{\epsilon}, \bar{\epsilon}')$ (and $0$ if there is no such edge). Their product $M = \prod_{t=0}^{r+\ell+1} M_t$ is a scalar, which is precisely $\left[ \sum_{\bar{u} \in [j]^r} \frac{1}{j^r} p_{\bar{u}}(\bar{\epsilon}, \bar{\epsilon}') \frac{1}{n^{\ell}} \right]$. As the product of the matrices $M_t$ can be computed in time polynomial in $n$, the lemma follows. $\square$

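The matrix-product computation in this proof can be illustrated on any layered weighted DAG. The sketch below is generic (it does not build the actual graph $H_{r,\ell}(j,\bar{\epsilon},\bar{\epsilon}')$, whose intermediate layers have $n^2$ vertices) and checks the matrix product against explicit path enumeration.

```python
from fractions import Fraction
from itertools import product

def path_weight_sum(layer_matrices):
    """Sum of weights of all s-to-t paths in a layered DAG, obtained by
    multiplying the layer-to-layer weight matrices, as in Lemma 2.12.
    layer_matrices[t][a][b] is the weight of the edge from vertex a of
    layer t to vertex b of layer t+1; the first matrix has a single row
    (source s), the last a single column (sink t)."""
    row = layer_matrices[0]
    for M in layer_matrices[1:]:
        row = [[sum(row[i][x] * M[x][j] for x in range(len(M)))
                for j in range(len(M[0]))] for i in range(len(row))]
    return row[0][0]

def path_weight_sum_bruteforce(layer_matrices):
    """Same quantity by enumerating every path (exponential; for checking)."""
    widths = [len(M[0]) for M in layer_matrices[:-1]]
    total = Fraction(0)
    for mid in product(*(range(w) for w in widths)):
        path = (0,) + mid + (0,)
        w = Fraction(1)
        for t, M in enumerate(layer_matrices):
            w *= M[path[t]][path[t + 1]]
        total += w
    return total
```

The matrix version runs in time polynomial in the layer sizes, which is the whole point of the reduction.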
To summarize, we describe the $(j+1)^{st}$ stage of the algorithm, where a group element $x_{j+1}$ is chosen for $X_{j+1}$. The algorithm cycles through all $n$ choices for $x_{j+1}$. For each choice of $x_{j+1}$, and for each $\bar{\epsilon}, \bar{\epsilon}'$ and $r, \ell$, the graph $H_{r,\ell}(j+1, \bar{\epsilon}, \bar{\epsilon}')$ is constructed. Using Lemma 2.12, the expression in Equation 4 is computed for each choice of $x_{j+1}$, and the algorithm fixes the choice that minimizes this expression. This completes the proof of Theorem 1.5.

By Theorem 1.2 we can bound the absolute value of the second largest eigenvalue of the matrix for Cay($G$, $J$). Theorem 1.5 yields that the resulting distribution after an $O(\log n)$ step random walk on Cay($G$, $J$) is $\frac{1}{\text{poly}(n)}$ close to the uniform distribution in the $L_2$ norm. Theorem 1.2 is in terms of the $L_1$ norm. However, since $\|v\|_1 \le n\|v\|_\infty \le n\|v\|_2$ for $n$-dimensional vectors $v$, Theorem 1.5 guarantees that the resulting distribution is $\frac{1}{\text{poly}(n)}$ close to the uniform distribution also in the $L_1$ norm. Choose $\tau = m = c' \log n$ and $\epsilon = \frac{1}{n^c}$ in Theorem 1.2, where $c, c'$ are fixed from Theorem 1.5. Then $\lambda_{\max} \le \frac{1}{2^{O(c/c')}} < 1$. This completes the proof of Corollary 1.6. $\square$

# 3 Undirected Expanding Cayley Graphs

In this section, we show a deterministic polynomial-time construction of a generating set $J$ for any group $G$ (given by its multiplication table) such that a lazy random walk on the *undirected* Cayley graph Cay$(G, J \cup J^{-1})$ mixes well. As a consequence, we get Cayley graphs which have a constant spectral gap (an alternative proof of a result in [10]). Our construction is based on a simple adaptation of the techniques used in Section 2.

The key point in the undirected case is that we will consider a generalization of Erdös-Rényi sequences. We consider the distribution on $G$ defined by $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$ where $\epsilon_i \in_R \{-1, 0, 1\}$. The following lemma is an easy generalization of the Erdös-Rényi result (Theorem 1.4). A similar theorem appears in [3, Theorem 14]. Our main focus in the current paper is the derandomized construction of Cayley expanders. Towards that, and to make our paper self-contained, we include a short proof of Lemma 3.1 in the appendix.

**Lemma 3.1** Let $G$ be a finite group and $J = \langle g_1, \dots, g_k \rangle$ be a sequence of $k$ elements of $G$ picked uniformly and independently at random. Let $D_J$ be the following distribution: $D_J(x) = \Pr_{\epsilon_i \in_R \{-1, 0, 1\},\, 1 \le i \le k} [g_1^{\epsilon_1} \cdots g_k^{\epsilon_k} = x]$ for $x \in G$, and let $U$ be the uniform distribution on $G$. Then $\mathbb{E}_J[\sum_{x \in G} (D_J(x))^2] = \mathbb{E}_J[\text{Coll}(D_J)] \le (\frac{8}{9})^k + \frac{1}{n}$.

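Lemma 3.1 can be checked exactly on a small cyclic group by brute-force enumeration; the sketch below takes $G = \mathbb{Z}_n$ in additive notation, so $g^{\epsilon}$ becomes $\epsilon \cdot g \bmod n$. This is only a sanity check of the bound, not part of the proof.

```python
from fractions import Fraction
from itertools import product

def expected_collision(n, k):
    """E_J[Coll(D_J)] for G = Z_n, with epsilon_i uniform in {-1, 0, 1},
    computed exactly by enumerating all J in G^k and all sign vectors."""
    eps_vectors = list(product([-1, 0, 1], repeat=k))
    total = Fraction(0)
    for J in product(range(n), repeat=k):
        counts = {}
        for eps in eps_vectors:
            x = sum(g * e for g, e in zip(J, eps)) % n
            counts[x] = counts.get(x, 0) + 1
        total += sum(Fraction(c, 3 ** k) ** 2 for c in counts.values())
    return total / n ** k
```

For $G = \mathbb{Z}_5$ and $k = 3$ the exact value is $\frac{1}{27} + \frac{26}{27}\cdot\frac{1}{5} = \frac{31}{135}$, comfortably below $(\frac{8}{9})^3 + \frac{1}{5}$.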
## Deterministic construction

First, we note that analogues of Lemmas 2.2, 2.3, and 2.4 hold in the undirected case too. In particular, when the elements of $J$ are picked uniformly and independently from $G$, by Lemma 3.1 we have $\mathbb{E}_J[\text{Coll}(R_{L(I)}^J)] = \mathbb{E}_J[\sum_{g \in G} (R_{L(I)}^J(g))^2] \le (\frac{8}{9})^\ell + \frac{1}{n}$, where $\ell$ is the length of the L-subsequence $L(I)$ of $I$. Now we state Lemma 3.2 below, which is a restatement of Lemma 2.4 for the undirected case. The proof exactly parallels the proof of Lemma 2.4. As before, we again consider the probability that an $I$ sequence of length $m$ does not have an $L$ sequence of length $\ell$. Also, we fix $\ell, m$ to $O(\log n)$ appropriately.

**Lemma 3.2** Let $Q_J(g) = \frac{1}{k^m} \sum_{I \in [k]^m} R_I^J(g)$. Then $\mathbb{E}_J[\text{Coll}(Q_J)] = \mathbb{E}_J[\sum_{g \in G} Q_J(g)^2] \le 1/n + 2(\frac{8}{9})^{\Theta(m)}$.

Building on this, we can extend the results in Section 2.2 to the undirected case in a straightforward manner. In particular, we can use essentially the same algorithm as described in Lemma 2.12 to compute the quantity in Equation 5 in polynomial time in the undirected setting as well. The only difference we need to incorporate is that now $\bar{\epsilon}, \bar{\epsilon}' \in \{-1, 0, 1\}^{r+\ell}$. This essentially completes the proof of Theorem 1.7. We do not repeat all the details here.

Finally, we derive Corollary 1.8. The normalized adjacency matrix of the undirected Cayley graph (corresponding to the lazy walk we consider) is given by $A = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$, where $P_J$ and $P_{J^{-1}}$ are the permutation matrices defined by the sets $J$ and $J^{-1}$. As in the proof of Corollary 1.6, we bound the distance of the resulting distribution from the uniform distribution in the $L_1$ norm. Let $m = c' \log n$ be suitably fixed from the analysis, so that $\|A^m \bar{v} - \bar{u}\|_1 \le \frac{1}{n^c}$. Then, by Theorem 1.1, the spectral gap satisfies $1-|\lambda_1| \ge \frac{c}{c'}$. Hence the Cayley graph is a spectral expander. It follows easily that the standard undirected Cayley graph with adjacency matrix $\frac{1}{2k}(P_J + P_{J^{-1}})$ is also a spectral expander.

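The lazy walk matrix $A = \frac{1}{3}I + \frac{1}{3k}(P_J + P_{J^{-1}})$ can be simulated directly. The sketch below runs it on $\mathbb{Z}_n$ with an arbitrary, hand-picked generating list; it illustrates only the transition rule and the $L_1$ distance to uniform, not the derandomized choice of $J$.

```python
from fractions import Fraction

def lazy_walk_distribution(n, J, m):
    """Distribution after m steps of the lazy walk on Cay(Z_n, J u J^{-1}):
    stay put w.p. 1/3, else step by +g or -g for g in J, w.p. 1/(3k) each."""
    k = len(J)
    dist = [Fraction(0)] * n
    dist[0] = Fraction(1)  # start at the identity
    for _ in range(m):
        new = [p / 3 for p in dist]
        for g in J:
            for x in range(n):
                new[(x + g) % n] += dist[x] / (3 * k)
                new[(x - g) % n] += dist[x] / (3 * k)
        dist = new
    return dist

def l1_distance_to_uniform(dist):
    n = len(dist)
    return sum(abs(p - Fraction(1, n)) for p in dist)
```

With exact rational arithmetic, one can watch the $L_1$ distance decay geometrically as $m$ grows, which is the behavior the corollary converts into a spectral-gap bound.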
# 4 Deterministic construction of Erdös-Rényi sequences

In this section, we prove Theorem 1.9. We use the method of conditional expectations as follows. From Theorem 1.4, we know that $\mathbb{E}_J\|D_J - U\|_2^2 = \frac{1}{2^k}(1-\frac{1}{n})$. Therefore there exists a setting of $J$, say $J = \langle x_1, \dots, x_k \rangle$, such that $\|D_J - U\|_2^2 \le \frac{1}{2^k}(1-\frac{1}{n})$. We find such a setting of $J$ by fixing its elements one by one. Let $\delta = \frac{1}{n^c}$, $c > 1$, be the required closeness parameter. Thus we need $k$ such that $\frac{1}{2^k} \le \delta$; it suffices to take $k > c \log n$. We denote the expression $X_{i_1}^{\epsilon_1} \cdots X_{i_t}^{\epsilon_t}$ by $\bar{X}^{\bar{\epsilon}}$ when the length $t$ of the sequence is clear from the context.

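The identity $\mathbb{E}_J\|D_J - U\|_2^2 = \frac{1}{2^k}(1-\frac{1}{n})$ from Theorem 1.4 can be verified exactly on a small cyclic group; a brute-force sketch for $G = \mathbb{Z}_n$ (additive notation):

```python
from fractions import Fraction
from itertools import product

def avg_sq_distance(n, k):
    """E_J ||D_J - U||_2^2 for G = Z_n, computed exactly by enumerating
    all J in G^k and all subset vectors epsilon in {0,1}^k."""
    u = Fraction(1, n)
    total = Fraction(0)
    for J in product(range(n), repeat=k):
        counts = [0] * n
        for eps in product([0, 1], repeat=k):
            counts[sum(g * e for g, e in zip(J, eps)) % n] += 1
        total += sum((Fraction(c, 2 ** k) - u) ** 2 for c in counts)
    return total / n ** k
```

The enumeration reproduces the closed form exactly, e.g. $\frac{1}{8}(1-\frac{1}{5})$ for $n=5$, $k=3$.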
Suppose that after the $i$th step the elements $x_1, \dots, x_i$ are fixed and $X_{i+1}, \dots, X_k$ are yet to be picked. At this stage, by our choice of $x_1, \dots, x_i$, we have $\mathbb{E}_{X_{i+1},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$. Now we cycle through all the group elements for $X_{i+1}$ and fix $X_{i+1} = x_{i+1}$ such that $\mathbb{E}_{X_{i+2},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_{i+1}=x_{i+1}] \le \frac{1}{2^k}(1-\frac{1}{n})$. Such an $x_{i+1}$ always exists by a standard averaging argument. In the next theorem, we show that the conditional expectations are efficiently computable at every stage. Theorem 1.9 is an immediate corollary.

Assume that we have picked $x_1, \dots, x_i$ from $G$, and $X_{i+1}, \dots, X_k$ are yet to be picked from $G$. Let the choice of $x_1, \dots, x_i$ be such that $\mathbb{E}_{X_{i+1},\dots,X_k}[\|D_J - U\|_2^2 \mid X_1 = x_1,\dots,X_i=x_i] \le \frac{1}{2^k}(1-\frac{1}{n})$. For $x \in G$ and $J = \langle X_1, \dots, X_k \rangle$, let

$$Q_J(x) = \mathrm{Pr}_{\bar{\epsilon} \in \{0,1\}^k} [\bar{X}^{\bar{\epsilon}} = x]$$

When $J$ is partly fixed,

$$
\begin{align*}
\hat{Q}_J(x) &= \mathrm{Pr}_{\bar{\epsilon}_1 \in \{0,1\}^i, \bar{\epsilon}_2 \in \{0,1\}^{k-i}} [\bar{x}^{\bar{\epsilon}_1} \cdot \bar{X}^{\bar{\epsilon}_2} = x] \\
&= \sum_{y \in G} \mathrm{Pr}_{\bar{\epsilon}_1} [\bar{x}^{\bar{\epsilon}_1} = y] \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \mathrm{Pr}_{\bar{\epsilon}_2} [\bar{X}^{\bar{\epsilon}_2} = y^{-1}x] \\
&= \sum_{y \in G} \mu(y) \hat{Q}_{\bar{X}}(y^{-1}x)
\end{align*}
$$

where $\mu(y) = \mathrm{Pr}_{\bar{\epsilon}_1}[\bar{x}^{\bar{\epsilon}_1} = y]$. Then $\mathbb{E}_J[\mathrm{Coll}(D_J)] = \mathbb{E}_J\|D_J - U\|_2^2 + \frac{1}{n}$, and $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J[\|D_J - U\|_2^2 \mid X_1 = x_1, X_2 = x_2, \dots, X_i = x_i] + \frac{1}{n}$.

The next theorem completes the proof.

**Theorem 4.1** For any finite group $G$ of order $n$ given by its multiplication table, $\mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)]$ can be computed in time polynomial in $n$.

**Proof:**

$$ \mathbb{E}_J[\mathrm{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x). \quad (10) $$

Now we compute $\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x)$.

$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) &= \mathbb{E}_J \sum_{x \in G} \left( \sum_{y \in G} \mu(y) \hat{Q}_{\bar{X}}(y^{-1}x) \right) \left( \sum_{z \in G} \mu(z) \hat{Q}_{\bar{X}}(z^{-1}x) \right) \\
&= \sum_{y,z \in G} \mu(y)\mu(z) \mathbb{E}_J \sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)]. \tag{11}
\end{align}
$$

Now,

$$
\begin{align}
\sum_{x \in G} [\hat{Q}_{\bar{X}}(y^{-1}x) \hat{Q}_{\bar{X}}(z^{-1}x)] &= \sum_{x \in G} \mathrm{Pr}_{\bar{\epsilon}}[\bar{X}^{\bar{\epsilon}} = y^{-1}x] \mathrm{Pr}_{\bar{\epsilon}'}[\bar{X}^{\bar{\epsilon}'} = z^{-1}x] \\
&= \frac{1}{2^{2(k-i)}} \sum_{x, \bar{\epsilon}, \bar{\epsilon}'} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \\
&= \frac{1}{2^{2(k-i)}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') \right) \tag{12}
\end{align}
$$

where $\bar{\epsilon}, \bar{\epsilon}' \in \{0,1\}^{k-i}$ and $\chi_a(\bar{\epsilon})$ is an indicator variable which is 1 if $\bar{X}^{\bar{\epsilon}} = a$ and 0 otherwise. If $\bar{\epsilon} = \bar{\epsilon}'$ then $\sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \chi_{z^{-1}x}(\bar{\epsilon}') = \delta_{y,z}$: both indicators equal 1 only for the single element $x = y\bar{X}^{\bar{\epsilon}}$, which forces $y = z$. Here $\delta_{a,b} = 1$ whenever $a=b$ and 0 otherwise.

For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = 1$ only if $y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'} = x$. Therefore for $\bar{\epsilon} \neq \bar{\epsilon}'$, we have

$$ \frac{1}{2^{2(k-i)}} \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon}) \cdot \chi_{z^{-1}x}(\bar{\epsilon}') = \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} \delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}} (1-\delta_{\bar{\epsilon},\bar{\epsilon}'}). $$

Putting this in Equation 12, we get

$$ \frac{1}{2^{2(k-i)}} \left( \sum_{\bar{\epsilon}=\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') + \sum_{\bar{\epsilon}\neq\bar{\epsilon}'} \sum_{x \in G} \chi_{y^{-1}x}(\bar{\epsilon})\chi_{z^{-1}x}(\bar{\epsilon}') \right) = \frac{\delta_{y,z}}{2^{k-i}} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'}\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'}). $$

Therefore we get

$$
\begin{align}
\mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) &= \frac{\delta_{y,z}}{2^{k-i}} + \mathbb{E}_J [\mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})]] \\
&= \frac{\delta_{y,z}}{2^{k-i}} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathbb{E}_J [\delta_{y\bar{X}^{\bar{\epsilon}},z\bar{X}^{\bar{\epsilon}'}}]] \\
&= \frac{\delta_{y,z}}{2^{k-i}} + \mathbb{E}_{\bar{\epsilon},\bar{\epsilon}'} [(1-\delta_{\bar{\epsilon},\bar{\epsilon}'})\mathrm{Pr}_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'})] \tag{13}
\end{align}
$$

**Claim 4.2** For $\bar{\epsilon} \neq \bar{\epsilon}'$, $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \frac{1}{n}$.

**Proof:** Let $j$ be the smallest index such that $\epsilon_j \neq \epsilon'_j$; without loss of generality, let $\epsilon_j = 1$ and $\epsilon'_j = 0$. Write $a = X_{i+1}^{\epsilon_1} \cdots X_{i+j-1}^{\epsilon_{j-1}}$, $b = X_{i+j+1}^{\epsilon_{j+1}} \cdots X_k^{\epsilon_{k-i}}$, and $b' = X_{i+j+1}^{\epsilon'_{j+1}} \cdots X_k^{\epsilon'_{k-i}}$ (note that $a$ is also the prefix of $\bar{X}^{\bar{\epsilon}'}$, since $\bar{\epsilon}$ and $\bar{\epsilon}'$ agree before index $j$). Conditioning on all the variables other than $X_{i+j}$, we have $\Pr_{\bar{X}}(y\bar{X}^{\bar{\epsilon}} = z\bar{X}^{\bar{\epsilon}'}) = \Pr_{X_{i+j}}(yaX_{i+j}b = zab') = \frac{1}{n}$, since $X_{i+j}$ is uniformly distributed in $G$. $\square$

Thus Equation 13 becomes

$$ \mathbb{E}_J \sum_{x \in G} \hat{Q}_{\bar{X}}(y^{-1}x) \cdot \hat{Q}_{\bar{X}}(z^{-1}x) = \frac{\delta_{y,z}}{2^{k-i}} + \frac{2^{2(k-i)} - 2^{k-i}}{n \cdot 2^{2(k-i)}}. $$

Putting this in Equation 11, we get

$$ \mathbb{E}_J[\text{Coll}(\hat{Q}_J)] = \mathbb{E}_J \sum_{x \in G} \hat{Q}_J^2(x) = \sum_{y,z \in G} \frac{1}{2^{2(k-i)}} \left[ 2^{k-i} \cdot \delta_{y,z} + (2^{2(k-i)} - 2^{k-i}) \cdot \frac{1}{n} \right] \mu(y)\mu(z). \quad (14) $$

Clearly, for any $y \in G$, $\mu(y)$ can be computed in time $O(2^i)$, which is polynomial in $n$ since $i \le k = O(\log n)$. Also, from Equation 14 it is clear that $\mathbb{E}_J[\text{Coll}(\hat{Q}_J)]$ is computable in polynomial (in $n$) time. $\square$

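Putting Theorem 4.1 together with the averaging argument, the entire derandomized construction of an Erdös-Rényi sequence can be sketched for $G = \mathbb{Z}_n$. The estimator below is our closed-form simplification of Equation 14, $\frac{1}{2^{K}}\sum_{y} \mu(y)^2 + (1-\frac{1}{2^{K}})\frac{1}{n}$ with $K = k - i$ unfixed elements (using $\sum_y \mu(y) = 1$ and Claim 4.2 specialized to $\mathbb{Z}_n$); treat it as an illustrative sketch rather than the paper's general-group implementation.

```python
from fractions import Fraction
from itertools import product

def estimator(n, k, fixed):
    """E_J[Coll(Q_hat_J)] for G = Z_n with the prefix `fixed` chosen:
    (1/2^K) * sum_y mu(y)^2 + (1 - 1/2^K) / n, where K = k - len(fixed)
    and mu is the cube distribution of the fixed prefix."""
    i = len(fixed)
    mu = [Fraction(0)] * n
    for eps in product([0, 1], repeat=i):
        mu[sum(g * e for g, e in zip(fixed, eps)) % n] += Fraction(1, 2 ** i)
    K = k - i
    return (sum(m * m for m in mu) / 2 ** K
            + (1 - Fraction(1, 2 ** K)) * Fraction(1, n))

def derandomized_sequence(n, k):
    """Fix x_1, ..., x_k one at a time, minimizing the estimator each stage."""
    fixed = []
    for _ in range(k):
        fixed.append(min(range(n), key=lambda x: estimator(n, k, fixed + [x])))
    return fixed
```

Since each stage cannot increase the estimator, the fully fixed $J$ satisfies $\|D_J - U\|_2^2 \le \frac{1}{2^k}(1-\frac{1}{n})$, matching the existential bound from Theorem 1.4.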
# 5 Summary

Constructing explicit Cayley expanders on finite groups is an important problem. In this paper, we give a simple deterministic construction of Cayley expanders that have a constant spectral gap. Our method is elementary and completely different from the existing techniques [10].

The main idea behind our work is a deterministic polynomial-time construction of a cube generating sequence $J$ of size $O(\log|G|)$ such that $\text{Cay}(G, J)$ has a rapid mixing property. In the randomized setting, Pak [7] has used similar ideas to construct Cayley expanders. In particular, we also give a derandomization of a well-known result of Erdös and Rényi [2].

# References

[1] Noga Alon and Yuval Roichman. Random Cayley graphs and expanders. *Random Struct. Algorithms*, 5(2):271–285, 1994.

[2] Paul Erdös and Alfréd Rényi. Probabilistic methods in group theory. *Journal d'Analyse Mathématique*, 14(1):127–138, 1965.

[3] Martin Hildebrand. A survey of results on random random walks on finite groups. *Probability Surveys*, 2:33–63, 2005.

[4] Shlomo Hoory, Nati Linial, and Avi Wigderson. Expander graphs and their applications. *Bull. AMS*, 43(4):439–561, 2006.

[5] Alex Lubotzky, R. Phillips, and Peter Sarnak. Ramanujan graphs. *Combinatorica*, 8(3):261–277, 1988.

[6] Ravi Montenegro and Prasad Tetali. Mathematical aspects of mixing times in Markov chains. *Foundations and Trends in Theoretical Computer Science*, 1(3), 2005.

[7] Igor Pak. Random Cayley graphs with $O(\log |G|)$ generators are expanders. In *Proceedings of the 7th Annual European Symposium on Algorithms*, ESA '99, pages 521–526. Springer-Verlag, 1999.

[8] Dana Randall. Rapidly mixing Markov chains with applications in computer science and physics. *Computing in Science and Engg.*, 8(2):30–41, 2006.

[9] Omer Reingold. Undirected connectivity in log-space. *J. ACM*, 55(4), 2008.

[10] Avi Wigderson and David Xiao. Derandomizing the Ahlswede-Winter matrix-valued Chernoff bound using pessimistic estimators, and applications. *Theory of Computing*, 4(1):53–76, 2008.

# Appendix

We include a proof of Lemma 2.2.

## Proof of Lemma 2.2

**Proof:** We use the simple fact that if $y \in G$ is picked uniformly at random and $x \in G$ is any element independent of $y$, then the distribution of $xyx^{-1}$ is uniform in $G$.

Let $I = \langle i_1, \dots, i_m \rangle$, and $L = \langle i_{r_1}, \dots, i_{r_\ell} \rangle$ be the corresponding L-subsequence (clearly, $r_1 = 1$). Let $J = \langle g_1, g_2, \dots, g_k \rangle$ be uniform and independent random elements from $G$. Consider the distribution of the products $g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m}$ where $\epsilon_i \in \{0, 1\}$ are independent and uniformly picked at random. Then we can write

$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \dots x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_\ell,$$

where, by the definition of the L-subsequence, notice that $x_j$ is a product of elements from $\{g_{i_{r_1}}, g_{i_{r_2}}, \dots, g_{i_{r_{j-1}}}\}$ for each $j$. By conjugation, we can rewrite the above expression as

$$g_{i_{r_1}}^{\epsilon_{r_1}} x_1 g_{i_{r_2}}^{\epsilon_{r_2}} x_2 \cdots g_{i_{r_{\ell-1}}}^{\epsilon_{r_{\ell-1}}} h^{\epsilon_{r_\ell}} x_{\ell-1} x_\ell, \text{ where}$$

$$h^{\epsilon_{r_\ell}} = x_{\ell-1} g_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_{\ell-1}^{-1}.$$
|
| 457 |
+
|
| 458 |
+
We refer to this transformation as moving $x_{\ell-1}$ to the right. Successively applying this transformation to $x_{\ell-2}, x_{\ell-3}, \dots, x_1$ we can write
|
| 459 |
+
|
| 460 |
+
$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = h_{i_{r_1}}^{\epsilon_{r_1}} h_{i_{r_2}}^{\epsilon_{r_2}} \dots h_{i_{r_\ell}}^{\epsilon_{r_\ell}} x_1 x_2 \dots x_{\ell-1} x_\ell,$$
|
| 461 |
+
|
| 462 |
+
where each $h_{i_{r_j}}$ is a conjugate $z_j g_{i_{r_j}} z_j^{-1}$. Crucially, notice that the group element $z_j$ is a product of elements from $\{g_{i_1}, g_{i_2}, \dots, g_{i_{r_{j-1}}}\}$ for each $j$. As a consequence of this and the fact that $\{g_{i_1}, g_{i_2}, \dots, g_{i_{r_\ell}}\}$ are all independent uniformly distributed elements of $G$, it follows that $\{h_{i_1}, h_{i_2}, \dots, h_{i_\ell}\}$ are all independent uniformly distributed elements of $G$. Let $J'$ denote the set of $k$ group elements obtained from $J$ by replacing the subset $\{g_{i_1}, g_{i_2}, \dots, g_{i_\ell}\}$ with $\{h_{i_1}, h_{i_2}, \dots, h_{i_\ell}\}$. Clearly, $J'$ is a set of $k$ independent, uniformly distributed random group elements from $G$.
|
| 463 |
+
|
| 464 |
+
Thus, we have
|
| 465 |
+
|
| 466 |
+
$$g_{i_1}^{\epsilon_1} \dots g_{i_m}^{\epsilon_m} = h_{i_1}^{\epsilon_1} \dots h_{i_\ell}^{\epsilon_\ell} x(\bar{\epsilon}),$$
|
| 467 |
+
|
| 468 |
+
where $x(\bar{\epsilon}) = x_1 x_2 \dots x_r$ is an element in $G$ that depends on $J, I$ and $\bar{\epsilon}$, where $\bar{\epsilon}$ consists of all the $\epsilon_j$ for $j \in I \setminus L$. Hence, for each $g \in G$, observe that we can write
|
| 469 |
+
|
| 470 |
+
$$
|
| 471 |
+
\begin{align*}
|
| 472 |
+
R_I^J(g) &= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{ij}^{\epsilon_j} = g \right] \\
|
| 473 |
+
&= \operatorname{Prob}_{\epsilon_1, \ldots, \epsilon_m} [h_{i_1}^{\epsilon_1} \cdots h_{i_\ell}^{\epsilon_\ell} = g x(\bar{\epsilon})^{-1}] \\
|
| 474 |
+
&= E_{\bar{\epsilon}}[R'_{L(I)}(gx(\bar{\epsilon})^{-1})].
|
| 475 |
+
\end{align*}
|
| 476 |
+
$$
|
| 477 |
+
---PAGE_BREAK---
Therefore we have the following:

$$
\begin{align*}
\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \\
&= \mathbb{E}_J\left[\sum_g (\mathbb{E}_{\bar{\epsilon}} R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \\
&\le \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}(R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right] \tag{15} \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{L(I)}^J (gx(\bar{\epsilon})^{-1}))^2\right]\right] \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right]\right] \\
&= \mathbb{E}_J\left[\sum_h (R_{L(I)}^J(h))^2\right] \\
&= \mathbb{E}_J[\mathrm{Coll}(R_{L(I)}^J)] \le \frac{1}{n} + \delta, \tag{16}
\end{align*}
$$

where inequality (15) follows from the Cauchy-Schwarz inequality and the last step follows from the assumption of the lemma. $\square$
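As a concrete illustration of the collision functional $\mathrm{Coll}(R) = \sum_g R(g)^2$ used above, the sketch below computes it exactly for subset sums of random generators in the cyclic group $\mathbb{Z}_n$ (the abelian setting and the parameters $n = 101$, $k = 12$ are our own choices, purely for illustration). $\mathrm{Coll}(R)$ is always at least $1/n$, with equality exactly at the uniform distribution.

```python
import random
from collections import Counter

def coll(dist):
    # Coll(R) = sum_g R(g)^2: the probability that two independent
    # draws from R collide; it is at least 1/n, with equality for
    # the uniform distribution on an n-element group.
    return sum(p * p for p in dist.values())

n, k = 101, 12
random.seed(1)
g = [random.randrange(n) for _ in range(k)]  # J: k random elements of Z_n

# Exact distribution of the 2^k subset sums e_1 g_1 + ... + e_k g_k mod n,
# with the e_i uniform in {0, 1} (the abelian analogue of subset products).
counts = Counter()
for mask in range(2 ** k):
    counts[sum(g[i] for i in range(k) if mask >> i & 1) % n] += 1
R = {x: c / 2 ** k for x, c in counts.items()}

print(coll(R), 1 / n)  # typically only slightly above the 1/n bound
```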
We use a simple counting argument to prove Lemma 2.3. A similar lemma appears in [7].

**Proof of Lemma 2.3**

**Proof:** Consider the event that a sequence $X$ of length $m$ does not have an L-subsequence of length $\ell$. Then $X$ has at most $\ell - 1$ distinct elements, which can be chosen in at most $\binom{k}{\ell-1}$ ways, and the length-$m$ sequence can be formed from them in at most $(\ell-1)^m$ ways. Therefore

$$
\begin{align*}
\Pr[X \text{ has no L-subsequence of length } \ell] & \leq \frac{\binom{k}{\ell-1} (\ell-1)^m}{k^m} \\
& \leq \left(\frac{ke}{\ell-1}\right)^{\ell-1} \cdot \left(\frac{\ell-1}{k}\right)^m \\
& = e^{\ell-1} \left(\frac{\ell-1}{k}\right)^{m-\ell+1} \\
& = \frac{e^{\ell-1}}{a^{m-(k/a)}} = \frac{(ae)^{k/a}}{a^m}. \tag*{$\square$}
\end{align*}
$$
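If an L-subsequence is read off from the first occurrences of distinct values, the longest one in a sequence has length equal to the number of distinct entries, so the probability bounded by the lemma can also be computed exactly by inclusion-exclusion. The sketch below (the parameter values are made up) checks that the exact probability never exceeds the counting bound used in the proof.

```python
from math import comb

def surj(m, d):
    # number of length-m sequences over d symbols that use every symbol
    return sum((-1) ** i * comb(d, i) * (d - i) ** m for i in range(d + 1))

def p_few_distinct(k, m, ell):
    # exact Pr[a uniform sequence in [k]^m has fewer than ell distinct
    # entries], i.e. its longest L-subsequence is shorter than ell
    return sum(comb(k, d) * surj(m, d) for d in range(1, ell)) / k ** m

def union_bound(k, m, ell):
    # the lemma's estimate: pick at most ell-1 symbols, fill m slots
    return comb(k, ell - 1) * (ell - 1) ** m / k ** m

k, m, ell = 20, 30, 8
exact, bound = p_few_distinct(k, m, ell), union_bound(k, m, ell)
assert exact <= bound
print(exact, bound)
```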
Next, we prove Lemma 2.4.

**Proof of Lemma 2.4**

**Proof:** We call $I \in [k]^m$ good if it has an L-subsequence of length at least $\ell$; otherwise we call it bad.
---PAGE_BREAK---
$$
\begin{align*}
\mathbb{E}_J[\mathrm{Coll}(Q_J)] &= \mathbb{E}_J\left[\sum_{g \in G} Q_J^2(g)\right] \\
&= \mathbb{E}_J\left[\sum_{g \in G} \left(\mathbb{E}_I[R_I^J(g)]\right)^2\right] \\
&\leq \mathbb{E}_J\left[\sum_{g \in G} \mathbb{E}_I[(R_I^J(g))^2]\right] \quad \text{by the Cauchy-Schwarz inequality} \tag{17} \\
&= \mathbb{E}_I[\mathbb{E}_J[\mathrm{Coll}(R_I^J)]] \\
&\leq \frac{1}{k^m} \mathbb{E}_J\left[\sum_{\substack{I \in [k]^m \\ I \text{ is good}}} \sum_{g \in G} (R_I^J(g))^2 + \sum_{\substack{I \in [k]^m \\ I \text{ is bad}}} 1\right] \\
&\leq \mathrm{Pr}_I[I \text{ is good}] \left(\frac{1}{n} + \frac{1}{2^\ell}\right) + \mathrm{Pr}_I[I \text{ is bad}] \tag{18}
\end{align*}
$$

Here the last step follows from Lemma 2.2 and Theorem 1.4. Using Lemma 2.3, we now fix $m = O(\log n)$ appropriately so that $\mathrm{Pr}_I[I \text{ is bad}] \le \frac{1}{2^m}$, and choose $\ell = \Theta(m)$. Hence we get that $\mathbb{E}_J[\mathrm{Coll}(Q_J)] \le \frac{1}{n} + \frac{1}{2^{\Theta(m)}}$. $\square$
Next, we give the proof of Lemma 2.6.

**Proof of Lemma 2.6**

**Proof:** For each $g \in G$, we can write

$$
\begin{align*}
R_I^J(g) &= \operatorname{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ \prod_{j=1}^{m} g_{i_j}^{\epsilon_j} = g \right] = \operatorname{Prob}_{\epsilon_1, \dots, \epsilon_m} \left[ g_{i_{f_1}}^{\epsilon_{f_1}} \cdots g_{i_{f_r}}^{\epsilon_{f_r}} h_{e_1}^{\epsilon_{e_1}} \cdots h_{e_\ell}^{\epsilon_{e_\ell}} = gy(\bar{\epsilon})^{-1} \right] \\
&= \mathbb{E}_{\bar{\epsilon}}[R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1})].
\end{align*}
$$

Therefore we have the following:

$$
\begin{align}
\mathbb{E}_J[\mathrm{Coll}(R_I^J)] &= \mathbb{E}_J\left[\sum_g (R_I^J(g))^2\right] \nonumber \\
&= \mathbb{E}_J\left[\sum_g \left(\mathbb{E}_{\bar{\epsilon}} R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1})\right)^2\right] \nonumber \\
&\leq \mathbb{E}_J\left[\sum_g \mathbb{E}_{\bar{\epsilon}}\left[(R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right]\right] \tag{19} \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_g (R_{\hat{I}}^{J(I)}(gy(\bar{\epsilon})^{-1}))^2\right]\right] \nonumber \\
&= \mathbb{E}_{\bar{\epsilon}}\left[\mathbb{E}_J\left[\sum_h (R_{\hat{I}}^{J(I)}(h))^2\right]\right] \nonumber \\
&= \mathbb{E}_J[\mathrm{Coll}(R_{\hat{I}}^{J(I)})], \nonumber
\end{align}
$$

where inequality (19) follows from the Cauchy-Schwarz inequality. $\square$
We include a short proof of Lemma 2.9.
---PAGE_BREAK---
**Proof of Lemma 2.9**

**Proof:** There are $\binom{m}{r}$ ways of picking $r$ positions for the fixed elements in $I$, and each such position can be filled in $j$ ways. From the $k-j$ random elements of $J$, $\ell$ distinct elements can be picked in $\binom{k-j}{\ell}$ ways. Let $n_{m-r,\ell}$ be the number of sequences of length $m-r$ that can be constructed out of $\ell$ distinct integers such that every integer appears at least once. Clearly, $|S_{r,\ell}| = \binom{m}{r} j^r \binom{k-j}{\ell} n_{m-r,\ell}$. It is well known that $n_{m-r,\ell}$ is the coefficient of $x^{m-r}/(m-r)!$ in $(e^x - 1)^\ell$. Thus, by the binomial theorem, $n_{m-r,\ell} = \sum_{i=0}^\ell (-1)^i \binom{\ell}{i} (\ell-i)^{m-r}$. Since $m = O(\log n)$ and $\ell \le m$, $n_{m-r,\ell}$ can be computed in time polynomial in $n$. $\square$
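The inclusion-exclusion formula for $n_{m-r,\ell}$ is easy to sanity-check against brute-force enumeration for small parameters:

```python
from itertools import product
from math import comb

def n_seq(length, ell):
    # n_{length,ell}: length-long sequences over ell symbols in which
    # every symbol appears at least once (inclusion-exclusion)
    return sum((-1) ** i * comb(ell, i) * (ell - i) ** length
               for i in range(ell + 1))

def n_seq_brute(length, ell):
    # enumerate all ell^length sequences and count the surjective ones
    return sum(1 for s in product(range(ell), repeat=length)
               if len(set(s)) == ell)

for length in range(1, 7):
    for ell in range(1, 5):
        assert n_seq(length, ell) == n_seq_brute(length, ell)
print("inclusion-exclusion matches brute force, e.g.", n_seq(5, 3))
```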
Next, we give a proof of Lemma 3.1.

**Proof of Lemma 3.1**

**Proof:** The proof closely follows the proof of Erdős-Rényi for the case $\bar{\epsilon} \in \{0,1\}^k$. We briefly sketch the argument below for the sake of completeness.

We denote the expression $g_1^{\epsilon_1} \cdots g_k^{\epsilon_k}$ by $\bar{g}^{\bar{\epsilon}}$. For a given $J$, $\chi_x(\bar{\epsilon}) = 1$ if $\bar{g}^{\bar{\epsilon}} = x$ and $0$ otherwise. Let $S_1 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \exists i \text{ such that } \bar{\epsilon}_i \neq \bar{\epsilon}'_i \text{ and } \bar{\epsilon}_i \bar{\epsilon}'_i = 0\}$, and let $S_2 = \{(\bar{\epsilon}, \bar{\epsilon}') \mid \bar{\epsilon} \neq \bar{\epsilon}'; \bar{\epsilon}_i \neq \bar{\epsilon}'_i \Rightarrow \bar{\epsilon}_i \bar{\epsilon}'_i = -1\}$. Then:
$$
\begin{aligned}
\mathbb{E}_J[\mathrm{Coll}(D_J)] &= \mathbb{E}_J\left[\sum_{x \in G} (D_J(x))^2\right] \\
&= \mathbb{E}_J\left[\sum_{x \in G} (\mathrm{Pr}_{\bar{\epsilon}}[\bar{g}^{\bar{\epsilon}} = x])^2\right] \\
&= \frac{1}{3^{2k}} \mathbb{E}_J\left[\sum_{x \in G} \left(\sum_{\bar{\epsilon}} \chi_x(\bar{\epsilon})\right) \left(\sum_{\bar{\epsilon}'} \chi_x(\bar{\epsilon}')\right)\right] \\
&= \frac{1}{3^{2k}} \left[ \sum_{\bar{\epsilon}=\bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] + \sum_{\bar{\epsilon} \neq \bar{\epsilon}'} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right] \\
&= \frac{1}{3^{2k}} \left( 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathbb{E}_J\left[\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}')\right] \right) \\
&= \frac{1}{3^{2k}} \left[ 3^k + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) + \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \right] \\
&\leq \frac{1}{3^k} + \left(1 - \frac{1}{3^k} - \frac{5^k}{9^k}\right) \frac{1}{n} + \frac{5^k}{9^k} \\
&= \left(1 - \frac{1}{n}\right) \left(\frac{1}{3^k} + \frac{5^k}{9^k}\right) + \frac{1}{n} \\
&< \left(\frac{8}{9}\right)^k + \frac{1}{n}
\end{aligned}
$$
To see the last step, first notice that if $\bar{\epsilon} = \bar{\epsilon}'$ then $\sum_{x \in G} \chi_x(\bar{\epsilon})\chi_x(\bar{\epsilon}') = 1$. A simple counting argument shows that $|S_2| \le \sum_{i=0}^k {k \choose i} 2^i 3^{k-i} = 5^k$, so $\sum_{(\bar{\epsilon},\bar{\epsilon}') \in S_2} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) \le 5^k$. Now consider
---PAGE_BREAK---
$(\bar{\epsilon}, \bar{\epsilon}') \in S_1$, and let $j$ be the first position from the left such that $\bar{\epsilon}_j \neq \bar{\epsilon}'_j$; w.l.o.g. assume that $\bar{\epsilon}_j = 1$ (or $\bar{\epsilon}_j = -1$) and $\bar{\epsilon}'_j = 0$. In that case write $\bar{g}^{\bar{\epsilon}} = a\, g_j^{\bar{\epsilon}_j} b$ and $\bar{g}^{\bar{\epsilon}'} = a\, b'$. Then $\mathrm{Pr}_{g_j}[g_j^{\bar{\epsilon}_j} = b'b^{-1}] = \frac{1}{n}$. Hence

$$ \sum_{(\bar{\epsilon}, \bar{\epsilon}') \in S_1} \mathrm{Pr}_{\bar{g}}(\bar{g}^{\bar{\epsilon}} = \bar{g}^{\bar{\epsilon}'}) = \frac{9^k - 3^k - 5^k}{n}. \quad \square $$
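The counting behind $\sum_{i=0}^k \binom{k}{i} 2^i 3^{k-i} = 5^k$ can be verified by enumeration for small $k$: per coordinate, a pair $(\bar{\epsilon}_i, \bar{\epsilon}'_i)$ over $\{-1,0,1\}$ either agrees (3 ways) or has product $-1$ (2 ways), giving $5^k$ pairs in total. Note this count includes the pairs with $\bar{\epsilon} = \bar{\epsilon}'$, which is why it only upper-bounds $|S_2|$.

```python
from itertools import product
from math import comb

def pair_counts(k):
    # pairs (e, f) in ({-1,0,1}^k)^2 whose differing coordinates
    # all have product -1 (including the pairs with e == f)
    eps = list(product((-1, 0, 1), repeat=k))
    total = 0
    for e in eps:
        for f in eps:
            if all(ei == fi or ei * fi == -1 for ei, fi in zip(e, f)):
                total += 1
    return total

for k in range(1, 5):
    # per coordinate: 3 agreeing choices or 2 opposite-sign choices
    assert pair_counts(k) == 5 ** k
    assert sum(comb(k, i) * 2 ** i * 3 ** (k - i)
               for i in range(k + 1)) == 5 ** k
print("counting identity in Lemma 3.1 verified for small k")
```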
samples_new/texts_merged/2918349.md ADDED
---PAGE_BREAK---
# Maneuvering Multi-Target Tracking Algorithm Based on Modified Generalized Probabilistic Data Association

Zhentao Hu¹, Chunling Fu², Xianxing Liu¹

¹College of Computer and Information Engineering, Henan University, Kaifeng, China

²Basic Experiments Teaching Center, Henan University, Kaifeng, China

E-mail: hzt@henu.edu.cn

Received July 8, 2011; revised October 1, 2011; accepted November 1, 2011

**Abstract**
Aiming at the problems of strong nonlinearity and effective echo confirmation in multi-target tracking systems in clutter environments, a novel maneuvering multi-target tracking algorithm based on modified generalized probabilistic data association is proposed in this paper. In view of the advantage of the particle filter, which can deal with nonlinear and non-Gaussian systems, it is introduced into the framework of generalized probabilistic data association to calculate the residual and residual covariance matrices, and the interconnection probability is further optimized. On that basis, the dynamic combination of the particle filter and the generalized probabilistic data association method is realized in the new algorithm. Theoretical analysis and experimental results show that the filtering precision is obviously improved with respect to traditional methods using suboptimal filters.

**Keywords:** Multi-Target Tracking, Particle Filter, Generalized Probabilistic Data Association, Clutters
# 1. Introduction
In actual engineering applications, maneuvering multi-target tracking in clutters is always one of the hottest and most difficult issues in target tracking studies, and it can be addressed by means of two key technologies: filter design and data association. In recent years, the growth of computational power has made computer-intensive statistical methods feasible. Based on the technique of sequential importance sampling and the recursive Bayesian filter principle, the particle filter (PF) is particularly useful in dealing with nonlinear and non-Gaussian problems, and it can achieve the minimum variance estimation in theory [1-3]. Because of the above advantages, PF has been widely applied in many fields, such as signal processing, target tracking, fault diagnosis and image processing. In data association, some novel solutions have been proposed to implement effective echo validation in clutters, mainly based on Bayesian estimation theory, evidential reasoning theory, and such intelligent computation methods as fuzzy theory, neural networks and genetic evolution [4-7]. Among these, data association algorithms based on Bayesian estimation theory are the mainstream, of which probabilistic data association (PDA) and joint probabilistic data association (JPDA), proposed by Bar-Shalom et al., are always considered the superior methods for single-target tracking and multi-target tracking [8,9]. Two basic principles are applied in JPDA: one is that every measurement derives from a unique target; the other is that at most one observation derives from each target. Some scholars have attempted to replace the suboptimal filter by PF in JPDA, and the results show the tracking precision is obviously improved.

The modern war environment has set higher requirements for tracking and monitoring systems: the existence of various natural and artificial disturbances, the application of penetration technology for large batches of targets, and the improvement of target maneuverability and control properties cause much denser formations and crossing motion, which leads to strong fuzziness and uncertainty in the obtained data. When targets maneuver with crossing motion and dense formations, sensors are likely to regard many observations coming from different planes as one observation. In addition, with the improvement of the resolution ratio of radar, the phenomenon of many observations corresponding to one target often arises from the multipath effect of observation and the systematic error of networking radar. In these cases, the one-to-one correspondence rule between observation
---PAGE_BREAK---
and target is not coincident with the actual facts. Quan P. et al. break the feasibility-based rule in JPDA, give the definition of generalized joint events and generalized events, and propose a new method of partition and combination for them. On this basis, the generalized probabilistic data association (GPDA) is proposed on account of Bayesian estimation criteria. Theoretical analysis and simulation results for various kinds of typical environments show that the filtering precision and real-time performance of GPDA are superior to JPDA. However, the application of suboptimal filters in GPDA inevitably causes the filtering precision to be limited by the adverse effects of the strong nonlinearity of the tracking system [10,11].

According to the analysis above, through the dynamic combination of the particle filter and generalized probabilistic data association, a novel maneuvering multi-target tracking algorithm based on modified generalized probabilistic data association in clutters is proposed. Experimental results show the feasibility and validity of the algorithm.
## 2. Particle Filter
The problem of state estimation can be solved by calculating the posterior probability density function $p(x_k | z_{1:k})$ of the state variable $x_k$ at time $k$, based on all the available data of the observation sequence $z_{1:k} = \{z_1, z_2, ..., z_k\}$. Because the complete information of sequential estimation is contained in $p(x_k | z_{1:k})$, the parameters needed for system state estimation, such as mean and variance, can be obtained from it. The concrete implementation in PF is to approximate $p(x_k | z_{1:k})$ with particles, and the mathematical description is written as

$$p(x_k | z_{1:k}) \approx \sum_{i=1}^{N} \delta(x_k - x_k^i)/N \quad (1)$$

where $\delta(\cdot)$ is Dirac's delta function and $x_k^i$ represents a particle used in the estimated system, sampled directly from $p(x_k | z_{1:k})$. However, $p(x_k | z_{1:k})$ is generally unknown, and the above process is often impossible to implement. The difficulty can be circumvented by sampling particles $\{x_k^i, \omega_k^i\}_{i=1}^N$ with associated importance weights from a known and easy-to-sample proposal distribution $q(x_k | z_{1:k})$. This process is known as importance sampling, where the associated importance weight of a particle is defined as

$$\omega_k^i \propto p(x_k^i | z_{1:k}) / q(x_k^i | z_{1:k}) \quad (2)$$

To further describe the generation of $x_k^i$, the proposal distribution $q(x_k | z_{1:k})$ is factorized as follows:

$$q(x_k | z_{1:k}) = q(x_k | x_{k-1}, z_{1:k}) q(x_{k-1} | z_{1:k-1}) \quad (3)$$

It is known that $x_k^i$ is sampled by augmenting each $x_{k-1}^i$, sampled from the proposal distribution $q(x_{k-1} | z_{1:k-1})$, with the new state sampled from $q(x_k | x_{k-1}, z_{1:k})$. In order to obtain the recursive equation of the particle weights $\omega_k^i$, $p(x_k | z_{1:k})$ is expressed in terms of $p(z_k | x_k)$, $p(x_k | x_{k-1})$ and $p(x_{k-1} | z_{1:k-1})$. Noting that

$$\begin{align}
p(x_k | z_{1:k}) &= p(z_k | x_k, z_{1:k-1}) p(x_k | z_{1:k-1}) / p(z_k | z_{1:k-1}) \tag{4} \\
&\propto p(z_k | x_k) p(x_k | x_{k-1}) p(x_{k-1} | z_{1:k-1})
\end{align}$$

Under the assumptions that the states follow a Markov process and the observations are conditionally independent, and combining Equations (2)-(4), the particle weight is given by

$$\omega_k^i = \omega_{k-1}^i p(z_k | x_k^i) p(x_k^i | x_{k-1}^i) / q(x_k^i | x_{k-1}^i, z_{1:k}) \quad (5)$$

In practical applications, the proposal distribution is commonly selected as

$$q(x_k^i | x_{k-1}^i, z_{1:k}) = p(x_k^i | x_{k-1}^i) \quad (6)$$

Substituting Equation (6) into Equation (5), the particle weight update equation can then be shown to be

$$\omega_k^i = \omega_{k-1}^i p(z_k | x_k^i) \quad (7)$$

Then $\omega_k^i$ is normalized before the re-sampling stage. The key idea of re-sampling is to eliminate particles that have small weights and to duplicate particles with large weights, while keeping the total number of particles invariant. A set of new particles $\{x_k^j, \omega_k^j\}_{j=1}^N$ is obtained after the re-sampling stage. According to Monte Carlo simulation technology, the state estimate can ultimately be achieved by calculating the arithmetic mean of $\{x_k^j, \omega_k^j\}_{j=1}^N$. At present, re-sampling methods mainly fall into the following categories: residual re-sampling, systematic re-sampling, multinomial re-sampling, etc. This is the standard particle filter, also known as the bootstrap filter.
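The bootstrap filter described above can be sketched in a few lines. The scalar model, the coefficients and the noise levels below are invented for illustration and are unrelated to the radar model of Section 4; the loop follows Equations (6) and (7): propagate with the transition prior, weight by the likelihood, re-sample, then average.

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Minimal bootstrap filter for a toy scalar model (illustrative):
    x_k = 0.9 x_{k-1} + u_k,  z_k = x_k^2 / 2 + v_k,
    with u_k ~ N(0, q^2) and v_k ~ N(0, r^2)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Eq. (6): use the transition prior p(x_k | x_{k-1}) as proposal
        particles = [0.9 * x + rng.gauss(0.0, q) for x in particles]
        # Eq. (7): weight each particle by the likelihood p(z_k | x_k)
        weights = [math.exp(-0.5 * ((z - 0.5 * x * x) / r) ** 2)
                   for x in particles]
        total = sum(weights)
        if total == 0.0:  # all likelihoods underflowed: fall back to uniform
            weights = [1.0] * n_particles
            total = float(n_particles)
        weights = [w / total for w in weights]
        # multinomial re-sampling: duplicate heavy, drop light particles
        particles = rng.choices(particles, weights=weights, k=n_particles)
        # state estimate: arithmetic mean of the re-sampled particles
        estimates.append(sum(particles) / n_particles)
    return estimates

# simulate a short trajectory from the same model, then filter it
rng = random.Random(1)
x, zs = 1.0, []
for _ in range(30):
    x = 0.9 * x + rng.gauss(0.0, 0.1)
    zs.append(0.5 * x * x + rng.gauss(0.0, 0.5))
estimates = bootstrap_pf(zs)
print(len(estimates))  # one estimate per observation
```

Note that the quadratic observation makes the sign of the state ambiguous, so this toy filter may track $|x_k|$ rather than $x_k$; it is meant only to show the mechanics of the weight-and-resample loop.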
## 3. Maneuvering Multi-Target Tracking Algorithm Based on Modified Generalized Probabilistic Data Association

Data association is one of the key technologies in multi-target tracking, because it directly affects the whole performance of the tracking system. Based on the multiplexing principle for observations and targets, GPDA is considered a better echo confirmation method. It is known that the construction of GPDA completely adopts the framework of the Kalman filter, and thus GPDA lacks effective processing ability for strongly nonlinear cases.
---PAGE_BREAK---
For nonlinear systems, the extended Kalman filter (EKF) can directly replace the KF, but the filtering precision of the EKF is sometimes insufficient for practical needs. Considering that PF and GPDA can effectively treat strong nonlinearity and echo confirmation, respectively, in this section the generalized probabilistic data association based on particle filter (GPDA-PF) in clutters is proposed.

### 3.1. Generalized Probabilistic Data Association

Consider $T$ targets moving in the radar scanning region; the observations consist of the real measurements and clutters at each sample time. The state equation and observation equation of the $t$-th target are modeled in the following form.

$$x_k^t = f^t(x_{k-1}^t, u_{k-1}^t) \quad t = 1, 2, \dots, T \qquad (7)$$

$$z_{k,m} = h(x_k^t, v_k) \quad m = 1, 2, \dots, M \qquad (8)$$

where $x_k^t$ and $z_{k,m}$ denote the unknown state vector of the $t$-th target and the $m$-th observation vector at time $k$, respectively. $f^t(\cdot)$ and $h(\cdot)$ denote the evolution functions of state and observation, respectively. The system noise $u_k^t$ and observation noise $v_k$ are white noise sequences and are independently identically distributed. Let $\bar{z}_k = \{z_{k,1}, z_{k,2}, \dots, z_{k,M}\}$ denote the candidate echo set that falls into the correlation window at time $k$. Different from the feasibility-based rule in JPDA, GPDA adopts the following rules. Firstly, each target possesses observations (one or more, including the zero observation). Secondly, each observation originates from targets (one or more, including the zero target). Thirdly, the probability corresponding to any target (observation) and observation (target) should be not less than the probabilities of the other correlated events in the last two rules. Here, the zero target refers to no target; it may be a new target outside the targets of concern, or a false object arising from interferences or clutters. The zero observation refers to no observation, namely the target is not detected.

The first rule shows that observations can be multiplexed when a target is considered as the benchmark, which is mainly used to solve the association problem between one target and multiple observations. The second rule shows that targets can be multiplexed when an observation is considered as the benchmark, which is mainly used to solve the association problem between one observation and multiple targets. The third rule shows that the probability of one-to-one correlated events is dominant among all the assumed correlated events. To calculate the interconnection probability in GPDA, the generalized joint events set $\mathcal{O}$ and the poly-probability matrix $D$ are defined as follows.

$\mathcal{O}_i$ and $\mathcal{O}_m$ denote the generalized event subsets which meet the first rule and the second rule, respectively. $d_{m,i}$ denotes the statistical distance between the $m$-th observation and the $i$-th target.

$$L_{m,t} = \begin{cases} P_G^{-1} |2\pi S_k^t|^{-1/2} \exp\left[-\frac{1}{2}(\nu_{k,m}^t)^T (S_k^t)^{-1} \nu_{k,m}^t\right] & t \neq 0, m \neq 0 \\ (nV)^{-1}(1 - P_D P_G) & t \neq 0, m = 0 \\ \lambda & t = 0, m \neq 0 \\ 0 & t = 0, m = 0 \end{cases} \qquad (11)$$

$$v_{k,m}^t = z_{k,m}^t - \hat{z}_{k/k-1}^t \qquad (12)$$

$v_{k,m}^t$ and $S_k^t$ denote the residual and residual covariance matrices at time $k$, respectively. $z_{k,m}^t$ denotes the $m$-th confirmed echo from the target, and $\hat{z}_{k/k-1}^t$ denotes the one-step observation prediction of the $t$-th target. $P_G$ denotes the probability of true observations falling into the correlation window, and $P_D$ denotes the target detection probability, that is, the complete detection probability of the true observation. $V$ denotes the volume of the correlation window, and $n$ denotes a coefficient usually taken as a positive integer. Assume that the false alarms and the number of clutters are subject to the uniform distribution and the Poisson distribution, respectively. $\lambda$ denotes the space density of clutters, that is, the expected number of clutters in unit volume. The interconnection probability $\beta_{k,m}^t$ of the $m$-th confirmed echo is calculated as

$$\beta_{k,m}^{t} = \frac{1}{c} \left( \varepsilon_{m,t} \prod_{tr=0}^{T} \sum_{r=0}^{M} \varepsilon_{r,tr} + \xi_{m,t} \prod_{r=0}^{M} \sum_{tr=0}^{T} \xi_{r,tr} \right) \qquad (13)$$

$$\varepsilon_{m,t} = l_{m,t} \Big/ \sum_{r=0}^{M} l_{r,t} \qquad (14)$$

$$\xi_{m,t} = l_{m,t} \Big/ \sum_{tr=0}^{T} l_{m,tr} \qquad (15)$$

$tr$ and $r$ denote the target index and the observation index, respectively. $c$ is a normalization coefficient.
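The normalizations of Equations (14) and (15) amount to dividing each likelihood entry by its column sum (per target) and by its row sum (per observation). A minimal sketch follows, with a made-up $3 \times 3$ likelihood table whose zero entry at $(m, t) = (0, 0)$ mirrors the last case of Equation (11); the full combination of Equation (13) is not implemented here.

```python
def normalize(l):
    # Eq. (14): eps[m][t] = l[m][t] / column sum over observations r
    # Eq. (15): xi[m][t]  = l[m][t] / row sum over targets tr
    M1, T1 = len(l), len(l[0])
    col = [sum(l[m][t] for m in range(M1)) for t in range(T1)]
    row = [sum(l[m][t] for t in range(T1)) for m in range(M1)]
    eps = [[l[m][t] / col[t] for t in range(T1)] for m in range(M1)]
    xi = [[l[m][t] / row[m] for t in range(T1)] for m in range(M1)]
    return eps, xi

# made-up likelihood table, rows m = 0..2, columns t = 0..2
l = [[0.0, 0.3, 0.1],
     [0.2, 0.9, 0.05],
     [0.2, 0.1, 0.7]]
eps, xi = normalize(l)
# each target's eps column and each observation's xi row sum to 1
assert all(abs(sum(eps[m][t] for m in range(3)) - 1) < 1e-12 for t in range(3))
assert all(abs(sum(xi[m][t] for t in range(3)) - 1) < 1e-12 for m in range(3))
print("normalizations of Eqs. (14)-(15) are consistent")
```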
### 3.2. Generalized Probabilistic Data Association Based on Particle Filter

Firstly, particles are sampled from the proposal distribution on account of the prior model information, and then the one-step observation prediction of each particle $z_{k/k-1}^{i,t}$ and the covariance $S_k^t$ are calculated by the following equations.

$$x_k^{i,t} = f(x_{k-1}^{i,t}, u_{k-1}^t) \qquad (16)$$

$$z_{k/k-1}^{i,t} = h(x_k^{i,t}) \qquad (17)$$

$$\hat{z}_{k/k-1}^{t} = \sum_{i=1}^{N} z_{k/k-1}^{i,t} / N \qquad (18)$$
---PAGE_BREAK---
$$S_k^t = \frac{1}{N} \sum_{i=1}^{N} [z_{k/k-1}^{i,t} - \hat{z}_{k/k-1}^t] [z_{k/k-1}^{i,t} - \hat{z}_{k/k-1}^t]^T \quad (19)$$

The echo confirmation principle is realized by the following equation:

$$g = (v_{k,m}^t)^T (S_k^t)^{-1} v_{k,m}^t \leq \gamma \quad (20)$$

where $\gamma$ denotes the threshold of the $\chi^2$ hypothesis test. Then $\beta_{k,m}^t$ of the confirmed echo $\bar{\zeta}_{k,m}^t$ is calculated by the poly-probability matrix $D$. The equivalent observation is solved from $\beta_{k,m}^t$, $\bar{\zeta}_{k,m}^t$ and $\hat{z}_{k/k-1}^t$:

$$\hat{z}_{k/k}^t = \hat{z}_{k/k-1}^t + \sum_{m=0}^{M} \beta_{k,m}^t (\bar{\zeta}_{k,m}^t - \hat{z}_{k/k-1}^t) \quad (21)$$

The likelihood of each particle relative to $\hat{z}_{k/k}^t$ is used to measure the particle weights, which are then normalized:

$$\hat{\sigma}_k^{i,t} = p(\hat{z}_{k/k}^t | x_k^{i,t}) \Big/ \sum_{j=1}^{N} p(\hat{z}_{k/k}^t | x_k^{j,t}) \quad (22)$$

The re-sampling is realized with the normalized weights $\hat{\sigma}_k^{i,t}$, and $\{x_k^{j,t}\}_{j=1}^N$ are obtained. On the basis of the Monte Carlo simulation principle, the state estimate of the $t$-th target can be solved as follows.

$$\hat{x}_{k/k}^t = \sum_{j=1}^{N} x_k^{j,t} / N \quad (23)$$
## 4. Simulation Results and Analysis

To illustrate the performance of GPDA-PF, an example of maneuvering target tracking based on a two-coordinate radar is given. The targets move within the horizontal-vertical plane according to the standard second-order model.

$$X_k^t = FX_{k-1}^t + Gu_{k-1}^t, \quad t=1,2$$

$$z_k = \left[\sqrt{(x_k^t)^2 + (y_k^t)^2}, \;\; \tan^{-1}(y_k^t/x_k^t)\right]^T + v_k$$

where $X_k^t = [x_k^t, \tilde{x}_k^t, y_k^t, \tilde{y}_k^t]^T$ denotes the state vector of the $t$-th target. $x_k^t$, $\tilde{x}_k^t$, $y_k^t$ and $\tilde{y}_k^t$ denote the position and velocity components in the horizontal direction and the vertical direction, respectively. $F = \begin{bmatrix} f_{cv} & f \\ f & f_{cv} \end{bmatrix}$ denotes the system state transition matrix, with $f_{cv} = \begin{bmatrix} 1 & \tau \\ 0 & 1 \end{bmatrix}$ and $f = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$. $G = \begin{bmatrix} 0 & 0 & \tau/2 & \tau \\ \tau/2 & \tau & 0 & 0 \end{bmatrix}^T$ denotes the system noise matrix. $\tau = 1\,\mathrm{s}$ denotes the sampling time. $u_k^1$ and $u_k^2$ denote the system noise vectors, and we suppose they are zero-mean Gaussian white noise with standard deviations $Q_k^1 = 0.15I$ and $Q_k^2 = Q_k^1$. $v_k$ denotes the observation noise vector, and we suppose it is a zero-mean Gaussian white noise process with standard deviation $\begin{bmatrix} R_r & 0 \\ 0 & R_\theta \end{bmatrix}$, where the noise standard deviations of the radial distance component and the azimuth angle component are $R_r = 0.1$ km and $R_\theta = 0.3°$, respectively. $P_G = 0.97$, $P_D = 0.99$ and $\gamma = 16$. $X_0^1 = [2\ \ 0.2\ \ 2\ \ 0.2]^T$ and $X_0^2 = [2\ \ 0.2\ \ 14\ \ -0.2]^T$ denote the actual initial states of the two targets; a negative sign in the state vector denotes that the target moves on the negative half axis of the X axis (horizontal direction) or the Y axis (vertical direction). The number of Monte Carlo simulations is 50, the number of particles is 1000, and the total simulation length is 60 s. In order to verify the effect of clutters on algorithm performance, two kinds of simulation results are compared, with $\lambda$ equal to 0.002 and 0.0055, respectively. The root mean square error is used as the performance evaluation index of algorithm precision, defined as $\mathrm{RMSE} = \left\{ \sum_{\eta=1}^{Num} (X_k^t - \hat{X}_{k/k,\eta}^t)^2 / Num \right\}^{1/2}$, where $X_k^t$ and $\hat{X}_{k/k,\eta}^t$ denote the true state value and the state estimate of the $t$-th target in the $\eta$-th Monte Carlo simulation at the current time, respectively.
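For one state component at one time step, the RMSE definition above reduces to the following computation (the numbers are invented):

```python
import math

def rmse(truth, estimates):
    # RMSE over Num Monte Carlo runs at one time step, for one
    # scalar state component, per the definition in the text
    num = len(estimates)
    return math.sqrt(sum((truth - e) ** 2 for e in estimates) / num)

print(rmse(2.0, [2.1, 1.9, 2.05, 1.95]))  # -> 0.0790569...
```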
Two target trajectories and the clutter distribution under $\lambda = 0.002$ and $\lambda = 0.0055$ are given in Figure 1.

**Figure 1.** Trajectory of targets and clutter distribution. (a) $\lambda = 0.002$; (b) $\lambda = 0.0055$.

**Figure 2.** RMSE of position estimation of target 1. (a) Horizontal direction; (b) Vertical direction.

**Figure 3.** RMSE of position estimation of target 2. (a) Horizontal direction; (b) Vertical direction.

**Table 1.** The comparison of the mean of RMSE (entries: $\lambda = 0.002$ / $\lambda = 0.0055$).

<table><thead><tr><th>Algorithm</th><th>GPDA-EKF</th><th>GPDA-PF</th></tr></thead><tbody><tr><td>Target 1 in X direction</td><td>0.0705/0.0728</td><td>0.0549/0.0570</td></tr><tr><td>Target 1 in Y direction</td><td>0.0712/0.0737</td><td>0.0556/0.0578</td></tr><tr><td>Target 2 in X direction</td><td>0.0744/0.0759</td><td>0.0626/0.0635</td></tr><tr><td>Target 2 in Y direction</td><td>0.0763/0.0801</td><td>0.0634/0.0640</td></tr></tbody></table>

Based on 50 Monte Carlo simulations, the comparison of the RMSE of state estimation for GPDA-EKF and GPDA-PF under $\lambda = 0.0055$ is given in Figures 2 and 3. The data in Table 1 quantitatively show the mean RMSE of state estimation for $\lambda = 0.002$ and $\lambda = 0.0055$. This comparison of RMSE shows that the filter precision of GPDA-PF is superior to that of GPDA-EKF. In addition, the data in Table 1 support the following conclusion: as the amount of clutter in the tracking environment increases, the filter precision of both algorithms declines, but GPDA-PF remains stably superior to GPDA-EKF. In general, using PF as the filter increases the computational complexity, and the simulations in this paper give the same result. However, the real-time performance of the algorithm is closely related to the number of particles and the filtering initial value. When the prior information is better, namely, when the filtering initial value is close to the real target state or the system model is more accurate, the real-time performance of GPDA-PF is effectively improved. Based on the above results, extending PF to maneuvering multi-target tracking in clutter is our next research direction.
## 5. Conclusions
A novel maneuvering multi-target tracking algorithm based on modified generalized probabilistic data association in clutter is proposed in this paper. The new algorithm effectively alleviates the decline in filtering precision caused by strong system nonlinearity and dense clutter environments. The theoretical analysis and simulation results show that GPDA-PF has the following advantages over existing methods. Firstly, it adopts the basic framework of PF, so it preserves the ability to handle nonlinear and non-Gaussian problems. Secondly, the construction of GPDA-PF avoids the derivation of the Jacobian matrix and the calculation of the state prediction and state estimation covariance matrices required when EKF is utilized, which makes the algorithm simple and easy to implement. Finally, the feasibility-based rule of GPDA accords with the actual situation of the modern battlefield environment, which enhances the adaptability of the algorithm and improves the reliability and stability of the target tracking results.
## 6. Acknowledgements

The project work is supported by the National Natural Science Foundation of China (60972119, 61170243) and the Science Technology Department Natural Science Foundation of Henan Province (112102210196). In addition, we thank Dr. Yandong Hou and Prof. Quan Pan for helpful discussions.
## 7. References
[1] O. Cappe, S. J. Godsill and E. Moulines, "An Overview of Existing Methods and Recent Advances in Sequential Monte Carlo," *Proceedings of the IEEE*, Vol. 95, No. 5, 2007, pp. 899-924. doi:10.1109/JPROC.2007.893250
[2] M. S. Arulampalam, S. Maskell, N. Gordon, et al., "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," *IEEE Transactions on Signal Processing*, Vol. 50, No. 2, 2002, pp. 174-188. doi:10.1109/78.978374
[3] H. A. P. Blom and E. A. Bloem, "Exact Bayesian and Particle Filtering of Stochastic Hybrid Systems," *IEEE Transactions on Aerospace and Electronic Systems*, Vol. 43, No. 1, 2007, pp. 55-70. doi:10.1109/TAES.2007.357154
[4] S. Puranik and J. K. Tugnait, "Tracking of Multiple Maneuvering Targets Using Multiscan JPDA and IMM Filtering," *IEEE Transactions on Aerospace and Electronic Systems*, Vol. 43, No. 1, 2007, pp. 23-35. doi:10.1109/TAES.2007.357152
[5] H. X. Liu, Y. Liang, Q. Pan, et al., "A Multi-Path Viterbi Data Association Algorithm," *Acta Electronica Sinica*, Vol. 34, No. 3, 2006, pp. 1640-1644.
[6] R. L. Popp, K. R. Pattipati and Y. Bar-Shalom, "M-Best S-D Assignment Algorithm with Application to Multi-Target Tracking," *IEEE Transactions on Aerospace and Electronic Systems*, Vol. 37, No. 1, 2001, pp. 22-39. doi:10.1109/7.913665
[7] H. L. Kennedy, "Comparison of MHT and PDA Track Initiation Performance," *International Conference on Radar*, Adelaide, 2-5 September 2008, pp. 508-512. doi:10.1109/RADAR.2008.4653977
[8] M. Ekman, "Particle Filters and Data Association for Multi-Target Tracking," *The 11th International Conference on Information Fusion*, Cologne, 30 June-3 July 2008, pp. 1-8.
[9] Z. T. Hu, Q. Pan and F. Yang, "A Novel Maneuvering Multi-Target Tracking Algorithm Based on Multiple Model Particle Filter in Clutters," *High Technology Letters*, Vol. 17, No. 1, 2011, pp. 19-24.
[10] X. N. Ye, Q. Pan and Y. M. Cheng, "A New and Better Algorithm for Multi-Target Tracking in Dense Clutter," *Journal of Northwestern Polytechnical University*, Vol. 22, No. 3, 2004, pp. 388-391.
[11] Q. Pan, X. N. Ye and H. C. Zhang, "Generalized Probability Data Association Algorithm," *Acta Electronica Sinica*, Vol. 33, No. 3, 2005, pp. 467-472.
samples_new/texts_merged/305525.md
ADDED
|
@@ -0,0 +1,295 @@
Topology Proceedings

**Web:** http://topology.auburn.edu/tp/

**Mail:** Topology Proceedings
Department of Mathematics & Statistics
Auburn University, Alabama 36849, USA

**E-mail:** topolog@auburn.edu

**ISSN:** 0146-4124

COPYRIGHT © by Topology Proceedings. All rights reserved.
# SPLITTABILITY OVER LINEAR ORDERINGS
A. J. Hanna* and T.B.M. McMaster†

## Abstract

A partial order X is splittable over a partial order Y if for every subset A of X there exists an order preserving mapping $f : X \to Y$ such that $f^{-1}f(A) = A$. We define a cardinal function $sc(X)$ (the 'splittability ceiling' for X) to be the least cardinal $\beta$ such that the disjoint sum of $\beta$ copies of X fails to split over a single copy of X. We allow $sc(X) = \infty$ to cover the case where arbitrarily many disjoint copies may be split. We investigate this cardinal function with respect to (linear) partial orders.

## 1. Introduction

A. V. Arhangel'skiǐ formulated and developed a range of definitions of splittability (or cleavability) in topology (see for example [1, 2]), of which the following are amongst the most basic.
**Definition 1.1.** For topological spaces X and Y:

— *X is splittable over Y along the subset A of X if there exists continuous f : X → Y such that:*

(i) $f(A) \cap f(X \setminus A) = \emptyset$ or, equivalently,

(ii) $f^{-1}f(A) = A$.

— *X is splittable over Y if for every subset A of X there exists continuous f : X → Y such that $f^{-1}f(A) = A$.*

\* The research of the first author was supported by a distinction award scholarship from the Department of Education for Northern Ireland.

† The authors would like to express their gratitude to Steven Watson for his helpful comments and insight, especially regarding Theorem 1.8.

*Mathematics Subject Classification:* 06A05, 06A06, 54A25, 54C99

**Key words:** splittability, partially ordered set, splittability ceiling
It quickly becomes apparent that splittability is not exclusively a topological idea. Indeed, only a routine translation into the language of the appropriate category is required for an analogous definition of splittability over other structures. (For example, splittability over semigroups is considered in [6].)
**Definition 1.2.** Let $X$ and $Y$ be partially ordered sets (posets).

– A map $f$ between partial orders is increasing (or order preserving) if $x \le y$ implies $f(x) \le f(y)$.

– *X is splittable over Y along the subset A of X if there exists increasing $f: X \to Y$ such that $f^{-1}f(A) = A$.*

– *X is splittable over Y if for every subset A of X there exists increasing $f : X \to Y$ such that $f^{-1}f(A) = A$.*
The following result was obtained by D. J. Marron [4, 5]:

**Theorem 1.3.** A poset *X* is splittable over the *n*-point chain if and only if:

(i) *X does not contain a chain of height greater than n, and*

(ii) *X does not contain two disjoint chains of height n.*

**Note 1.4.** The previous result shows that it is not possible to split the (disjoint) sum of two copies of a finite chain over a single copy of the same finite chain. However, it is possible to disjointly embed two copies of $\omega$ (the positive integers with usual ordering) into a single copy of $\omega$. Clearly, then, it is possible to split 'two disjoint copies' of $\omega$ over $\omega$.
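The disjoint embedding mentioned in Note 1.4 can be checked concretely: send the first copy of $\omega$ to the even positive integers and the second to the odds. A small Python illustration (the function names are our own):

```python
def embed_first(n):
    # First copy of omega -> even positive integers
    return 2 * n

def embed_second(n):
    # Second copy of omega -> odd positive integers
    return 2 * n - 1

first = [embed_first(n) for n in range(1, 20)]
second = [embed_second(n) for n in range(1, 20)]

# Both maps are strictly increasing (hence order preserving)...
assert all(a < b for a, b in zip(first, first[1:]))
assert all(a < b for a, b in zip(second, second[1:]))
# ...and their images are disjoint.
assert set(first).isdisjoint(second)
```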
In general, suppose that $\alpha$ copies of a poset $X$ can be disjointly embedded into a single copy of $X$. It is clear that the disjoint sum of $\alpha$ copies of $X$ will split over a single copy of $X$. Indeed, if $(X \cdot \alpha)$ can be embedded into $X$, we can split the sum of $\alpha$ copies of $X$ over $X$.

For notation and further information on linear orderings the interested reader is referred to [8].

**Definition 1.5** (the 'splittability ceiling' for $X$). Let $sc(X)$ be the least cardinal $\beta$ such that the (disjoint) sum of $\beta$ copies of $X$ fails to split over a single copy of $X$. We allow $sc(X) = \infty$ to cover the case where the sum of arbitrarily many disjoint copies may be split.

**Note 1.6.** The critical case for deciding $sc(X)$ is reached in attempting to split $2^{|X|}$ copies. If we have more than $2^{|X|}$ disjoint copies of $X$ and split along some subset of their sum, then there must be copies which we are splitting along the same subset (since $X$ has precisely $2^{|X|}$ subsets) and hence the 'same' map will do. In other words, if $sc(X) \ge 2^{|X|}$ then $sc(X) = \infty$.

**Definition 1.7.** [8] A cardinal number $\aleph_\alpha$ is said to be regular if it is not the sum of fewer than $\aleph_\alpha$ cardinal numbers smaller than $\aleph_\alpha$.

**Theorem 1.8.** For any partial order $X$, if $sc(X) \neq \infty$ then $sc(X)$ is a regular cardinal.

*Proof.* Suppose $sc(X) = \lambda < \infty$ is not a regular cardinal; then $\lambda$ can be expressed as the sum of $\alpha$ cardinals $\beta_i$, each less than $\lambda$, where $\alpha$ is less than $\lambda$. Let $Y = \bigcup_{i \in \lambda} X_i$ be the disjoint union of $\lambda$ copies of $X$. We can write $Y = \bigcup_{i \in \alpha} \bigl( \bigcup_{j \in \beta_i} X_j \bigr)$. For each $i \in \alpha$ we can split $\bigcup_{j \in \beta_i} X_j$ over a single copy $X_{\beta_i}$ of $X$, since $\beta_i < \lambda$. Likewise we can split $\bigcup_{i \in \alpha} X_{\beta_i}$ over a single copy of $X$, since $\alpha < \lambda$. Hence we can split $\lambda$ copies of $X$ (along any subset) over $X$ - a contradiction. $\square$
**Proposition 1.9.** *The splittability ceiling for the chain of positive integers $\omega$ is infinity (i.e. $sc(\omega) = \infty$).*

*Proof.* Given $X$, the (disjoint) sum of copies of $\omega$, and a subset $A$ of $X$, we define a map $f : X \to \omega$ as follows:

$$f(x) = \begin{cases} x & (\text{if } x \in A \text{ and } x \text{ is odd}) \text{ or } (x \notin A \text{ and } x \text{ is even}), \\ x+1 & \text{otherwise.} \end{cases}$$

It is clear that $f$ is increasing and that $f(A)$ is a subset of the odds while $f(X \setminus A)$ is a subset of the evens. It follows that $f$ splits $X$ along $A$ over $\omega$ as required. $\square$
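The splitting map from this proof can be verified mechanically for finitely many points of finitely many copies. A Python sketch (the tagging of copies by pairs `(copy, n)` and the sample subset `A` are our own choices):

```python
def split_map(x, in_A):
    """The map from the proof of Proposition 1.9: points of A land on
    odd integers, points outside A on even ones."""
    if (in_A and x % 2 == 1) or (not in_A and x % 2 == 0):
        return x
    return x + 1

# Two disjoint copies of omega (tagged 0 and 1), with an arbitrary subset A.
X = [(copy, n) for copy in (0, 1) for n in range(1, 50)]
A = {(0, n) for n in range(1, 50) if n % 3 == 0} | {(1, n) for n in range(1, 50, 2)}

f = {p: split_map(p[1], p in A) for p in X}

# f is increasing within each copy (the only comparabilities in a disjoint sum)...
for copy in (0, 1):
    vals = [f[(copy, n)] for n in range(1, 50)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))
# ...and f(A) consists of odds while f(X \ A) consists of evens,
# so f^{-1}f(A) = A, as the proof asserts.
assert {f[p] % 2 for p in A} == {1}
assert {f[p] % 2 for p in X if p not in A} == {0}
```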
The corresponding result holds for the negative integers $\omega^*$ and for the integers $\omega^* + \omega$.

**Proposition 1.10.** Let $\alpha$ be an ordinal (considered as a linear order). Then

$$sc(\alpha) = \begin{cases} \infty & \text{if } \alpha \text{ is a limit ordinal,} \\ 2 & \text{if } \alpha \text{ is a non-limit ordinal.} \end{cases}$$

*Proof.* Note that each element in a limit ordinal has an immediate successor. The first part of the result follows by methods similar to those employed for $\omega$. If $\alpha$ is a non-limit ordinal we specify the subset $A$ to contain the 'odd' ordinals less than $\alpha$, and similarly the subset $B$ to contain the 'even' ordinals less than $\alpha$. We can express $\alpha = \xi + n$ where $\xi$ is a limit ordinal and $n$ is finite. Now $f(x) \ge x$ for all $x \in \alpha$ and $f(x) = x$ for $x = \xi + i$ ($0 \le i < n$) whenever $f$ is a map splitting $\alpha$ along $A$ or $B$ over $\alpha$. Clearly it will not be possible to split the sum of two copies of $\alpha$ along $A$ and $B$ respectively over a single copy of $\alpha$. $\square$
**Proposition 1.11.** *The splittability ceiling for the chain of rationals $\eta$ is infinity (i.e. $sc(\eta) = \infty$).*

*Proof.* Decompose $\eta$ into two disjoint subsets $C$ and $D$, each of which is dense in $\eta$. Enumerate both $C$ and $D$ in an arbitrary fashion. Given disjoint copies of $\eta$ and a subset $A$ to split along, define a map for each copy. Begin by enumerating the copy $X_1 = \{x_1, x_2, x_3, \dots\}$. If $x_1 \in A$ (respectively $x_1 \notin A$), map $x_1$ to the first point in the enumeration of $C$ (respectively $D$). The process continues inductively (using a method similar to that devised by Cantor to show that every countable linear order can be embedded into $\eta$). $\square$
**Proposition 1.12.** *The splittability ceiling for the chain of the real numbers $\lambda$ is $c^+$ (i.e. $sc(\lambda) = c^+$).*

*Proof.* We first note that it is possible to disjointly embed continuum-many copies of $\lambda$ into $\lambda$. To prove the result we show that there are only continuum-many increasing maps from the reals into the reals. We know that there are only continuum-many maps from the rationals into the reals. Given increasing $f: \mathbb{R} \to \mathbb{R}$, consider its restriction to the rationals $f|_{\mathbb{Q}}$. For how many increasing maps $g: \mathbb{R} \to \mathbb{R}$ do we have $f|_{\mathbb{Q}} = g|_{\mathbb{Q}}$?

We can show that $f$ and $g$ can only differ at countably many points: for each irrational $x$ select both a strictly decreasing sequence $(a_n)$ and a strictly increasing sequence $(b_n)$ of rationals, each converging to $x$. Now $(f(a_n))$ converges to some limit $l$ while $(f(b_n))$ converges to some limit $l'$. If $l = l'$ then $f(x) = g(x) = l$; otherwise $f(x), g(x) \in [l', l]$ and $f(X) \cap [l', l] = \{f(x)\}$. Since there can only be countably many disjoint intervals in the reals, there can only be countably many points $x$ where $f(x) \neq g(x)$.

It follows that there can only be continuum-many maps within each equivalence class. Hence there are at most continuum-many increasing maps from the reals to the reals. Clearly if we have more than continuum-many disjoint copies of $\lambda$ and pick different subsets in them, then the union of these copies cannot be split over a single copy of $\lambda$ along the union of these sets, due to the cardinality restriction on increasing maps. $\square$
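The key step of this counting argument, that an increasing map is determined by its rational values except at jump points, can be illustrated numerically. In the sketch below the jump function and the finite rational grid are our own illustrative choices:

```python
import math

def f(x):
    # An increasing map with a single jump at 0.
    return x if x < 0 else x + 1.0

def from_rationals(x, depth=10**6):
    """Approximate the one-sided limits of f at x along the rationals,
    using the grid of fractions k/depth as a stand-in for a dense set."""
    below = math.floor(x * depth - 1) / depth   # a rational just below x
    above = math.ceil(x * depth + 1) / depth    # a rational just above x
    return f(below), f(above)

# At an irrational continuity point the two limits agree and equal f(x)...
lo, hi = from_rationals(math.sqrt(2))
assert abs(lo - f(math.sqrt(2))) < 1e-4 and abs(hi - f(math.sqrt(2))) < 1e-4
# ...while at the jump the limits bracket an interval [l', l] that the
# image of f meets only once, as in the proof.
lo, hi = from_rationals(0.0)
assert lo < 0.5 < hi
```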
Similar arguments can be used to locate an upper bound for the number of increasing maps from any linear order into itself. The most interesting case appears to be that of the countable linear orders. Moreover, unless the order is scattered it can be shown that the splittability ceiling will be infinity. This follows since any non-scattered linear order will contain a copy of $\eta$ and we already know that $sc(\eta) = \infty$.
## 2. Countable Linear Orderings
**Lemma 2.1.** Let $X$ be a partial order. If $sc(X) > 2$ then $sc(X) \geq \aleph_0$.

*Proof.* Let $X_1, X_2, X_3$ be disjoint copies of the partial order $X$, with subsets $A_1, A_2, A_3$ respectively. Let $X_4$ be a fourth copy of $X$. Since $sc(X) > 2$ we can split $X_1 \cup X_2$ along $A_1 \cup A_2$ over $X_4$ using an increasing map $f$ (i.e. $f^{-1}f(A_1 \cup A_2) = A_1 \cup A_2$). Now split $X_3 \cup X_4$ along $B \cup A_3$ (where $B = f(A_1 \cup A_2)$) over $X$ using an increasing map $g$ (i.e. $g^{-1}g(f(A_1 \cup A_2) \cup A_3) = f(A_1 \cup A_2) \cup A_3$).

Define a map $h : X_1 \cup X_2 \cup X_3 \to X$ by

$$h(x) = \begin{cases} g \circ f(x) & \text{if } x \in X_1 \cup X_2, \\ g(x) & \text{if } x \in X_3; \end{cases}$$

then $h$ splits $X_1 \cup X_2 \cup X_3$ along $A_1 \cup A_2 \cup A_3$ over $X$; for suppose $x \in X_1 \cup X_2 \cup X_3$ and

$$\begin{align*}
h(x) \in h(A_1 \cup A_2 \cup A_3) &= h(A_1 \cup A_2) \cup h(A_3) \\
&= g \circ f(A_1 \cup A_2) \cup g(A_3) \\
&= g(f(A_1 \cup A_2) \cup A_3).
\end{align*}$$

If $x \in X_3$ then $h(x) = g(x) \in g(f(A_1 \cup A_2) \cup A_3)$, so $x \in f(A_1 \cup A_2) \cup A_3$ and $x \in A_3$. If $x \in X_1 \cup X_2$ then $h(x) = g \circ f(x) \in g(f(A_1 \cup A_2) \cup A_3)$, so $f(x) \in f(A_1 \cup A_2) \cup A_3$. Hence $f(x) \in f(A_1 \cup A_2)$ and $x \in A_1 \cup A_2$.

Fig. 1. Splitting 3 copies of X over a single copy of X

Clearly this argument can be extended by induction so that $sc(X) > n$ for all $n \in \mathbb{N}$. $\square$
**Corollary 2.2.** Let X be a finite partial order; then $sc(X) = 2$ or $\infty$.

We show now that the previous result extends to countable linear partial orders. To do so, we employ the notion of an 'order shuffling' and a result due to J. L. Orr.

**Definition 2.3.** [7] Let $A$ be a countable linearly ordered set. A function $f : A \to \mathbb{N}^+$ is called an order shuffling on $A$. A linearly ordered set $B$ shuffles into $(A, f)$ if there is an increasing surjection $\sigma$ from $B$ onto $A$ such that the cardinality of $\sigma^{-1}\{a\}$ is at least $f(a)$ for all but finitely many $a \in A$. If this holds for all $a \in A$ then $B$ shuffles into $(A, f)$ exactly.
**Theorem 2.4.** [7] Let A be a countable scattered linear ordering and let f be an order shuffling on A; then A shuffles into (A, f).

**Lemma 2.5.** Let X be a countable scattered linear order; then there exist an order preserving surjection $\pi : X \to X$ and points $\{a_1, a_2, \dots, a_n\}$ such that:

(i) $|\pi^{-1}(x)| > 1$ for each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$, and

(ii) $\pi^{-1}(a_i) = \{a_i\}$ for each $i \in \{1, 2, \dots, n\}$.

*Proof.* We use Theorem 2.4 to find an order preserving surjection $\pi : X \to X$ such that $|\pi^{-1}(x)| > 1$ for all but $n$ elements $\{a_1, a_2, \dots, a_n\}$. We assume that $n$ is minimal and that $a_1 < a_2 < \dots < a_n$.

Note that if $\pi^{-1}(\{a_1, a_2, \dots, a_n\}) \subseteq \{a_1, a_2, \dots, a_n\}$ then, since $\pi$ is order preserving, $\pi^{-1}(a_i) = \{a_i\}$. Suppose that $\pi$ does not exhibit property (ii); then there exists $i$ such that the singleton pre-image of $a_i$ under $\pi$ is not contained in $\{a_1, a_2, \dots, a_n\}$. Let $\rho = \pi \circ \pi$ and consider $\rho^{-1}(x)$ for some $x \in X$. If $x \notin \{a_1, a_2, \dots, a_n\}$ then $|\pi^{-1}(x)| > 1$, hence $|\rho^{-1}(x)| > 1$.

If $x = a_j$ for $j \neq i$ then clearly $|\rho^{-1}(x)| \ge 1$, but if $x = a_i$ then we can find $y \in X \setminus \{a_1, a_2, \dots, a_n\}$ such that $\pi(y) = x$. Now $|\pi^{-1}(y)| > 1$, so $|\rho^{-1}(x)| > 1$, but $\rho$ now contradicts the minimality of $n$. $\square$
**Theorem 2.6.** Let X be a countable linear ordering; then $sc(X) = 2$ or $sc(X) = \infty$.

*Proof.* We know that if $X$ is not scattered, then $X$ contains a copy of the rationals, so $sc(X) = \infty$. We also know that if $sc(X) > 2$ then $sc(X) \ge \aleph_0$; that is, we can split the sum of any finite number of copies of $X$ over a single copy. Let $X$ be a countable scattered linear order such that $sc(X) > 2$. Using Lemma 2.5 it is possible to find an increasing surjection $\pi : X \to X$ and points $\{a_1, a_2, \dots, a_n\}$ such that:

(i) $|\pi^{-1}(x)| > 1$ for each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$, and

(ii) $\pi^{-1}(a_i) = \{a_i\}$ for each $i = 1, 2, \dots, n$.

For each $x \in X \setminus \{a_1, a_2, \dots, a_n\}$ choose $x_1, x_2 \in \pi^{-1}(x)$ with $x_1 < x_2$.

Let $Y = \bigcup_{i \in \beta} X_i$ be the disjoint union of $\beta$ copies of $X$, and let $A = \bigcup_{i \in \beta} A_i$ where $A_i \subseteq X_i$. For each subset $B$ of $\{a_1, a_2, \dots, a_n\}$ let $X_B$ be a copy of $X$. For each $i \in \beta$ let $C_i = A_i \cap \{a_1, a_2, \dots, a_n\}$ and define a map $f_i : X_i \to X_{C_i}$ as follows:

$$f_i(x) = \begin{cases} a_j & \text{if } x = a_j \text{ for some } j, \\ x_1 & \text{if } x \in A_i \setminus \{a_1, a_2, \dots, a_n\}, \\ x_2 & \text{if } x \notin A_i \cup \{a_1, a_2, \dots, a_n\}. \end{cases}$$

These maps can be used to split $Y$ along $A$ over $2^n$ copies of $X$ (using $f$ say), which can in turn be split along $f(A)$ over a single copy of $X$. Hence we can split $\beta$ copies of $X$ over $X$, so $sc(X) = \infty$. $\square$
**Note 2.7.** Given a countable scattered linear order $X$, for $x, y \in X$ we set $x \equiv y$ if and only if there are only finitely many $z \in X$ such that $x < z < y$ or $y < z < x$, and thus obtain an equivalence relation on $X$. Let us denote the equivalence class of a point $x \in X$ by $e(x)$. Now we can determine a subset $A$ of $X$ such that between each two points in $A$ we can find a point not in $A$ and vice versa. The first step is to select a point $x$ from each equivalence class. We assign a point $y \in e(x)$ to the set $A$ if there are an even number of points between $x$ and $y$ (inclusive). We say that $A$ and $X \setminus A$ alternate in $X$. Note that this only works because the order under consideration is scattered.
**Lemma 2.8.** Let $X$ be a countable scattered linear order with $sc(X) > 2$. For each $x \in X$ there exists an order preserving injection $f: X \to X$ such that $x \notin f(X)$.
*Proof.* Let $x \in X$, where $X$ is a countable scattered linear order with $sc(X) > 2$. Choose a subset $A$ of $X$ that alternates in $X$ as described in Note 2.7. Let $Y = X_1 \cup X_2$ be the disjoint union of 2 copies of $X$. Let $B = A_1 \cup A_2$ where $A = A_1 \subseteq X_1$ and $X \setminus A = A_2 \subseteq X_2$. Choose $f$ that splits $Y$ along $B$ over $X$ and set $f_i = f|_{X_i}$ for $i = 1, 2$. The choice of $A$ ensures that both $f_1$ and $f_2$ are order preserving injections. If $f_1(X_1)$ or $f_2(X_2)$ does not contain $x$ we have found a suitable map. Otherwise we can find distinct $a_1, a_2 \in X$ such that $f_1(a_1) = f_2(a_2) = x$. Now $a_1 < a_2$ say, so define a map $g: X \to X$ by

$$ g(z) = \begin{cases} f_2(z) & \text{for } z < a_2, \\ f_1(z) & \text{for } z \ge a_2. \end{cases} $$

This map is an order preserving injection and $x \notin g(X)$. $\square$
**Lemma 2.9.** Let $X$ be a countable scattered linear order such that for each $x \in X$ there exists an order preserving injection $f : X \to X$ such that $x \notin f(X)$. If $A$ is a finite subset of $X$ there exists an order preserving injection $g : X \to X$ such that $A \cap g(X) = \emptyset$.
*Proof*. Let $A = \{a_1, a_2, \dots, a_n\}$ be a finite subset of $X$. Suppose that there exists an order preserving injection $g : X \to X$ such that $g(X) \cap \{a_1, a_2, \dots, a_k\} = \emptyset$. If $k < n$, then either $a_{k+1} \notin g(X)$ or there exists $b \in X$ such that $g(b) = a_{k+1}$. In the first case, let $h = g$, and in the second case, choose an order preserving injection $f : X \to X$ such that $b \notin f(X)$ and set $h = g \circ f$. Now $h$ is an order preserving injection and $h(X) \cap \{a_1, a_2, \dots, a_{k+1}\} = \emptyset$, and we repeat the above argument. When $k=n$, we are done. $\square$
**Theorem 2.10.** Let $X$ be a countable linear order; then $\mathrm{sc}(X) = \infty$ if and only if $2 \cdot X$ order embeds into $X$.
*Proof.* We need only prove that if $X$ is a countable linear order and $\mathrm{sc}(X) = \infty$ then $2 \cdot X$ order embeds into $X$. If $X$ is not scattered then $X$ contains a subset isomorphic to the rationals. Since every countable linear order embeds into the rationals (see [8]), clearly $2 \cdot X$ order embeds into $X$. We assume now that $X$ is scattered. First find an increasing surjection $\sigma : X \to X$ such that $|\sigma^{-1}(x)| \ge 2$ for all $x \in X \setminus \{a_1, a_2, \dots, a_m\}$. It is possible (via Lemmas 2.8 and 2.9) to find an order preserving injection $f : X \to X$ such that $a_i \notin f(X)$ for all $i$.

Set $Y = \sigma^{-1}(f(X)) \subseteq X$ and define $\pi : Y \to X$ by $\pi = f^{-1} \circ \sigma$. It follows that $\pi$ is order preserving and that $|\pi^{-1}(x)| \ge 2$ for all $x \in X$.

Select, for each $x$, two points $x_0, x_1 \in \pi^{-1}(x)$ with $x_0 < x_1$. Define $\phi : \{0, 1\} \times X \to X$ by

$$ \phi(i, x) = \begin{cases} x_0 & \text{if } i = 0, \\ x_1 & \text{if } i = 1. \end{cases} $$

Clearly, $\phi$ order embeds $2 \cdot X$ into $X$. $\square$
**Lemma 2.11.** The following statements are equivalent for any linear order $X$:

(i) $2 \cdot X$ order embeds into $X$,

(ii) $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$,

(iii) $n \cdot X$ order embeds into $X$ for some $n \in \mathbb{N}$ where $n > 1$.
*Proof.* We prove first that (i) implies (ii). Let $X$ be a linear order such that $2 \cdot X$ order embeds into $X$; that is, there exists an order preserving injection $f : \{0, 1\} \times X \to X$. Suppose, inductively, that $(k-1) \cdot X$ order embeds into $X$ for some $k \in \mathbb{N}$, so that there exists an order preserving injection $g : \{0, 1, \dots, k-2\} \times X \to X$. Define a map $h : \{0, 1, \dots, k-1\} \times X \to 2 \cdot X$ as follows:

$$ h(i, x) = \begin{cases} (0, g(i, x)) & \text{if } i < k-1, \\ (1, g(k-2, x)) & \text{if } i = k-1. \end{cases} $$

Now define $\pi : \{0, 1, \dots, k-1\} \times X \to X$ as $\pi = f \circ h$. It follows that $\pi$ is an order preserving injection, so by induction we have shown that $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$. That (ii) implies (iii) is trivial. Finally (iii) implies (i) since $2 \cdot X$ will clearly order embed into $n \cdot X$ for any $n > 1$. $\square$
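The induction can be made concrete for $X = \omega$, where $2 \cdot \omega$ embeds into $\omega$ via $(i, n) \mapsto 2n + i$. Composing as in the proof (with $k = 3$ and $g = f$) yields an embedding of $3 \cdot \omega$ into $\omega$. A Python sketch (the names `f`, `h`, `pi` are ours):

```python
def f(i, n):
    """Order embedding of 2*omega into omega: the two points over n
    go to 2n and 2n+1."""
    return 2 * n + i

def h(i, n):
    # The map h from the proof, with k = 3 and g = f.
    return (0, f(i, n)) if i < 2 else (1, f(1, n))

def pi(i, n):
    # pi = f o h embeds 3*omega into omega.
    return f(*h(i, n))

# Enumerate 3*omega in its order: (i, n) < (j, m) iff n < m, or n = m and i < j.
points = [(i, n) for n in range(30) for i in range(3)]
images = [pi(i, n) for i, n in points]

# pi is strictly increasing along this enumeration, hence an order embedding.
assert all(a < b for a, b in zip(images, images[1:]))
```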
**Theorem 2.12.** Let $X$ be a countable linear order. Then $sc(X) = \infty$ if and only if $sc(n \cdot X) = \infty$ for all $n \in \mathbb{N}$.
|
| 254 |
+
|
| 255 |
+
*Proof.* If $sc(X) = \infty$ then $2 \cdot X$ (and hence $k \cdot X$ for all $k \in \mathbb{N}$) will order embed into $X$ by Lemma 2.11. It follows that $2n \cdot X$ will order embed into $n \cdot X$ and hence into $X$, a sufficient condition for $sc(n \cdot X) = \infty$.
|
| 256 |
+
|
| 257 |
+
If $sc(n \cdot X) = \infty$ then $2n \cdot X$ order embeds into $n \cdot X$ by Theorem 2.10. That is, we can find an order preserving injection $f : \{0, 1, \dots, 2n-1\} \times X \to \{0, 1, \dots, n-1\} \times X$. For any $x \in X$ we can find $x', x'' \in X$ such that:
|
| 258 |
+
|
| 259 |
+
$$f(0,x) \le (0, x') < (0, x'') \le f(2n-1,x).$$
|
| 260 |
+
|
| 261 |
+
Define a map $g : \{0, 1\} \times X \to X$ by
|
| 262 |
+
|
| 263 |
+
$$g(i, x) = \begin{cases} x' & \text{if } i = 0, \\ x'' & \text{if } i = 1. \end{cases}$$
|
| 264 |
+
|
| 265 |
+
Clearly $g$ is an order preserving injection that order embeds $2 \cdot X$ into $X$, a sufficient condition for $sc(X) = \infty$ by Theorem 2.10. $\square$
|
| 266 |
+
|
| 267 |
+
**Theorem 2.13.** Let $X$ be a countable linear order. Then $sc(X) = \infty$ if and only if $n \cdot X$ order embeds into $X$ for all $n \in \mathbb{N}$.
## References

[1] A. V. Arhangel'skii, *A general concept of cleavability of topological spaces over a class of spaces*, Abstracts Tiraspol Symposium (1985) (Stiinca, Kishinev, 1985), 8–10 (in Russian).

[2] A. V. Arhangel'skii, *A survey of cleavability*, Topology and its Applications **54** (1993), 141–163.

[3] A. J. Hanna and T. B. M. McMaster, *Some results on cleavability*, submitted.

---PAGE_BREAK---

[4] D. J. Marron, *Splittability in ordered sets and in ordered spaces*, Ph.D. thesis, Queen's University Belfast (1997).

[5] D. J. Marron and T. B. M. McMaster, *Splittability in ordered sets and spaces*, Proc. Eighth Prague Topological Symp. (1996), 280–282. [Located in Topology Atlas at http://www.unipissing.ca/topology]

[6] D. J. Marron and T. B. M. McMaster, *Cleavability in semigroups*, to appear in Semigroup Forum.

[7] J. L. Orr, *Shuffling of linear orders*, Canad. Math. Bull. **38**(2) (1995), 223–229.

[8] J. G. Rosenstein, *Linear orderings*, Pure and Applied Mathematics, Academic Press (1982).

Department of Pure Mathematics, The Queen's University of Belfast, University Road, Belfast, BT7 1NN, United Kingdom

*E-mail address: a.hanna@qub.ac.uk*

Department of Pure Mathematics, The Queen's University of Belfast, University Road, Belfast, BT7 1NN, United Kingdom
samples_new/texts_merged/3226827.md
ADDED
@@ -0,0 +1,194 @@
---PAGE_BREAK---

# EXPLAIN: A Tool for Performing Abductive Inference

Isil Dillig and Thomas Dillig

{idillig, tdillig}@cs.wm.edu

Computer Science Department, College of William & Mary

**Abstract.** This paper describes a tool called EXPLAIN for performing abductive inference. Logical abduction is the problem of finding a simple explanatory hypothesis that explains observed facts. Specifically, given a set of premises Γ and a desired conclusion φ, abductive inference finds a simple explanation ψ such that Γ ∧ ψ |= φ, and ψ is consistent with the known premises Γ. Abduction has many useful applications in verification, including inference of missing preconditions, error diagnosis, and construction of compositional proofs. This paper gives a brief tutorial introduction to EXPLAIN and describes the basic inference algorithm.
## 1 Introduction

The fundamental ingredient of automated logical reasoning is *deduction*, which allows deriving valid conclusions from a given set of premises. For example, consider the following set of facts:

(1) $\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x))$

(2) $\forall x. ((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x))$

(3) $\text{duck}(\text{donald})$

Based on these premises, logical deduction allows us to reach the conclusion:

$$ \text{waddle}(\text{donald}) \land \text{quack}(\text{donald}) $$

This form of forward deductive reasoning forms the basis of all SAT and SMT solvers as well as the first-order theorem provers and verification tools used today.

A complementary form of logical reasoning to deduction is *abduction*, as introduced by Charles Sanders Peirce [1]. Specifically, abduction is a form of backward logical reasoning, which allows inferring likely premises from a given conclusion. Going back to our earlier example, suppose we know premises (1) and (2), and assume that we have observed that the formula waddle(donald) ∧ quack(donald) is true. Here, since the given premises do not imply the desired conclusion, we would like to find an explanatory hypothesis ψ such that the following deduction is valid:

$$
\begin{array}{c}
\forall x. (\text{duck}(x) \Rightarrow \text{quack}(x)) \\
\forall x. ((\text{duck}(x) \lor \text{goose}(x)) \Rightarrow \text{waddle}(x)) \\
\psi \\
\hline
\text{waddle}(\text{donald}) \land \text{quack}(\text{donald})
\end{array}
$$

---PAGE_BREAK---

The problem of finding a logical formula $\psi$ for which the above deduction is valid is known as *abductive inference*. For our example, many solutions are possible, including the following:

$$
\begin{align*}
\psi_1 &: \text{duck}(\text{donald}) \wedge \neg\text{quack}(\text{donald}) \\
\psi_2 &: \text{waddle}(\text{donald}) \wedge \text{quack}(\text{donald}) \\
\psi_3 &: \text{goose}(\text{donald}) \wedge \text{quack}(\text{donald}) \\
\psi_4 &: \text{duck}(\text{donald})
\end{align*}
$$

While all of these solutions make the deduction valid, some are more desirable than others. For example, $\psi_1$ contradicts known facts and is therefore a useless solution. On the other hand, $\psi_2$ simply restates the desired conclusion and, despite making the deduction valid, gets us no closer to explaining the observation. Finally, $\psi_3$ and $\psi_4$ neither contradict the premises nor restate the conclusion, but, intuitively, we prefer $\psi_4$ over $\psi_3$ because it makes fewer assumptions.

At a technical level, given premises $\Gamma$ and desired conclusion $\phi$, abduction is the problem of finding an explanatory hypothesis $\psi$ such that:

(1) $\Gamma \wedge \psi \models \phi$

(2) $\Gamma \wedge \psi \nvDash \text{false}$

Here, the first condition states that $\psi$, together with the known premises $\Gamma$, entails the desired conclusion $\phi$. The second condition stipulates that $\psi$ is consistent with the known premises. As illustrated by the previous example, there are many solutions to a given abductive inference problem, but the most desirable solutions are usually those that are as simple and as general as possible.
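For propositional formulas, conditions (1) and (2) can be checked directly by truth-table enumeration, which gives a tiny, deliberately naive illustration of the space of abductive solutions. The encoding of the duck example below is ours, not part of EXPLAIN:

```python
from itertools import product

VARS = ("duck", "goose", "quack", "waddle")

def gamma(m):
    # Premises: duck(x) => quack(x) and (duck(x) or goose(x)) => waddle(x),
    # instantiated for x = donald.
    return ((not m["duck"]) or m["quack"]) and \
           ((not (m["duck"] or m["goose"])) or m["waddle"])

def phi(m):
    # Observation: waddle(donald) and quack(donald).
    return m["waddle"] and m["quack"]

def models():
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def is_solution(psi):
    # Condition (1): Gamma ∧ psi |= phi; condition (2): Gamma ∧ psi is satisfiable.
    entails = all(phi(m) for m in models() if gamma(m) and psi(m))
    consistent = any(gamma(m) and psi(m) for m in models())
    return entails and consistent

assert is_solution(lambda m: m["duck"])                         # psi_4: duck(donald)
assert is_solution(lambda m: m["goose"] and m["quack"])         # psi_3
assert not is_solution(lambda m: m["duck"] and not m["quack"])  # psi_1 contradicts Gamma
```

Both $\psi_3$ and $\psi_4$ pass the two conditions, while $\psi_1$ is rejected as inconsistent; the preference for $\psi_4$ over $\psi_3$ is the simplicity/generality criterion discussed next, which the conditions alone do not capture.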
Recently, abductive inference has found many useful applications in verification, including inference of missing function preconditions [2, 3], diagnosis of error reports produced by verification tools [4], and computation of underapproximations [5]. Furthermore, abductive inference has also been used for inferring specifications of library functions [6] and for automatically synthesizing circular compositional proofs of program correctness [7].

In this paper, we describe our tool, called **EXPLAIN**, for performing logical abduction in the combination theory of Presburger arithmetic and propositional logic. The solutions computed by EXPLAIN are both simple and general: EXPLAIN always yields a logically weakest solution containing the fewest possible variables.

## 2 A Tutorial Introduction to EXPLAIN

The EXPLAIN tool is part of the SMT solver MISTRAL, which is available at http://www.cs.wm.edu/~tdillig/mistral under a GPL license. MISTRAL is written in C++ and provides a C++ interface for EXPLAIN. In this section, we give a brief tutorial on how to solve abductive inference problems using EXPLAIN.

As an example, consider the abduction problem defined by the premises $x \le 0$ and $y > 1$ and the desired conclusion $2x - y + 3z \le 10$ in the theory of linear

---PAGE_BREAK---
1. Term* x = VariableTerm::make("x");
2. Term* y = VariableTerm::make("y");
3. Term* z = VariableTerm::make("z");
4. Constraint c1(x, ConstantTerm::make(0), ATOM_LEQ);
5. Constraint c2(y, ConstantTerm::make(1), ATOM_GT);
6. Constraint premises = c1 & c2;
7. map<Term*, long int> elems;
8. elems[x] = 2;
9. elems[y] = -1;
10. elems[z] = 3;
11. Term* t = ArithmeticTerm::make(elems);
12. Constraint conclusion(t, ConstantTerm::make(10), ATOM_LEQ);
13. Constraint explanation = conclusion.abduce(premises);
14. cout << "Explanation: " << explanation << endl;

Fig. 1: C++ code showing how to use EXPLAIN for performing abduction

integer arithmetic. In other words, we want to find a simple formula $\psi$ such that:

$$
\begin{array}{l}
x \le 0 \land y > 1 \land \psi \models 2x - y + 3z \le 10 \\
x \le 0 \land y > 1 \land \psi \not\models \text{false}
\end{array}
$$

Figure 1 shows C++ code for using EXPLAIN to solve the above abductive inference problem. Here, lines 1-12 construct the constraints used in the example, while line 13 invokes the **abduce** method of EXPLAIN for performing abduction. Lines 1-3 construct the variables *x*, *y*, *z*, and lines 4 and 5 form the constraints *x* ≤ 0 and *y* > 1 respectively. In MISTRAL, the operators &, |, ! are overloaded and are used for conjoining, disjoining, and negating constraints respectively. Therefore, line 6 constructs the premise *x* ≤ 0 ∧ *y* > 1 by conjoining c1 and c2. Lines 7-12 construct the desired conclusion 2*x* − *y* + 3*z* ≤ 10. For this purpose, we first construct the arithmetic term 2*x* − *y* + 3*z* (lines 7-11). An ArithmeticTerm consists of a map from terms to coefficients; for instance, for the term 2*x* − *y* + 3*z*, the coefficients of *x*, *y*, *z* are specified as 2, −1, 3 in the elems map respectively.
The more interesting part of Figure 1 is line 13, where we invoke the **abduce** method to compute a solution to our abductive inference problem. For this example, the solution computed by EXPLAIN (and printed out at line 14) is *z* ≤ 4. It is easy to confirm that *z* ≤ 4 ∧ *x* ≤ 0 ∧ *y* > 1 logically implies 2*x* − *y* + 3*z* ≤ 10 and that *z* ≤ 4 is consistent with our premises.

In general, the abductive solutions computed by EXPLAIN come with two theoretical guarantees. First, they contain as few variables as possible. For instance, in our example, although $z - x \leq 4$ is also a valid solution to the abduction problem, EXPLAIN always yields a solution with the fewest number of variables because such solutions are generally simpler and more concise. Second, among the class of solutions that contain the same set of variables, EXPLAIN always yields the

---PAGE_BREAK---

logically weakest explanation. For instance, in our example, while $z = 0$ is also a valid solution to the abduction problem, it is logically stronger than $z \le 4$. Intuitively, logically weak solutions to the abduction problem are preferable because they make fewer assumptions and are therefore more likely to be true.

## 3 Algorithm for Performing Abductive Inference

In this section, we describe the algorithm used in EXPLAIN for performing abductive inference. First, observe that the entailment $\Gamma \wedge \psi \models \phi$ can be rewritten as $\psi \models \Gamma \Rightarrow \phi$. Furthermore, in addition to entailing $\Gamma \Rightarrow \phi$, we want $\psi$ to obey the following three requirements:

1. The solution $\psi$ should be consistent with $\Gamma$, because an explanation that contradicts known premises is not useful.

2. To ensure the simplicity of the explanation, $\psi$ should contain as few variables as possible.

3. To capture the generality of the abductive explanation, $\psi$ should be no stronger than any other solution $\psi'$ satisfying the first two requirements.

Now, consider a minimum satisfying assignment (MSA) of $\Gamma \Rightarrow \phi$. An MSA of a formula $\varphi$ is a partial satisfying assignment of $\varphi$ that contains as few variables as possible. The formal definition of MSAs as well as an algorithm for computing them are given in [8]. Clearly, an MSA $\sigma$ of $\Gamma \Rightarrow \phi$ entails $\Gamma \Rightarrow \phi$ and satisfies condition (2). Unfortunately, an MSA of $\Gamma \Rightarrow \phi$ does not satisfy condition (3), as it is a logically strongest solution containing a given set of variables.

Given an MSA of $\Gamma \Rightarrow \phi$ containing variables $V$, we observe that a logically weakest solution containing only $V$ is equivalent to $\forall \bar{V}.\,(\Gamma \Rightarrow \phi)$, where $\bar{V} = \text{free}(\Gamma \Rightarrow \phi) \setminus V$. Hence, given an MSA of $\Gamma \Rightarrow \phi$ consistent with $\Gamma$, an abductive solution satisfying all of conditions (1)-(3) can be obtained by applying quantifier elimination to $\forall \bar{V}.\,(\Gamma \Rightarrow \phi)$.

Thus, to solve the abduction problem, what we want is a largest set of variables $X$ such that $(\forall X.(\Gamma \Rightarrow \phi)) \wedge \Gamma$ is satisfiable. We call such a set of variables $X$ a maximum universal subset (MUS) of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$. Given an MUS $X$ of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$, the desired solution to the abductive inference problem is obtained by eliminating quantifiers from $\forall X.(\Gamma \Rightarrow \phi)$ and then simplifying the resulting formula with respect to $\Gamma$ using the algorithm from [9].

Pseudo-code for our algorithm for solving an abductive inference problem defined by premises $\Gamma$ and conclusion $\phi$ is shown in Figure 2. The **abduce** function given in lines 1-5 first computes an MUS of $\Gamma \Rightarrow \phi$ with respect to $\Gamma$ using the helper **find_mus** function. Given such a maximum universal subset $X$, we obtain a quantifier-free abductive solution $\chi$ by applying quantifier elimination to the formula $\forall X.(\Gamma \Rightarrow \phi)$. Finally, at line 4, to ensure that the final abductive solution does not contain redundant subparts that are implied by the premises, we apply the simplification algorithm from [9] to $\chi$. This yields our final abductive solution $\psi$, which satisfies our criteria of minimality and generality and is not redundant with respect to the original premises.
---PAGE_BREAK---

```
abduce(φ, Γ) {
1.  φ = (Γ ⇒ φ)
2.  Set X = find_mus(φ, Γ, free(φ), 0)
3.  χ = elim(∀X.φ)
4.  ψ = simplify(χ, Γ)
5.  return ψ
}

find_mus(φ, Γ, V, L) {
6.  if V = ∅ or |V| ≤ L return ∅
7.  U = free(φ) - V
8.  if(UNSAT(Γ ∧ ∀U.φ)) return ∅
9.  Set best = ∅
10. choose x ∈ V
11. if(SAT(∀x.φ)) {
12.   Set Y = find_mus(∀x.φ, Γ, V \ {x}, L - 1);
13.   if(|Y| + 1 > L) { best = Y ∪ {x}; L = |Y| + 1 }
    }
14. Set Y = find_mus(φ, Γ, V \ {x}, L);
15. if(|Y| > L) { best = Y }
16. return best;
}
```

Fig. 2: Algorithm for performing abduction
The function `find_mus` used in `abduce` is shown in lines 6-16 of Figure 2. This algorithm directly extends the `find_mus` algorithm we presented earlier in [8] to exclude universal subsets that contradict Γ. At every recursive invocation, `find_mus` picks a variable x from the set of free variables in φ. It then recursively invokes `find_mus` to compute the sizes of the universal subsets with and without x and returns the larger universal subset. In this algorithm, L is a lower bound on the size of the MUS and is used to terminate search branches that cannot improve upon an existing solution. Therefore, the search for an MUS terminates if we either cannot improve upon an existing solution of size L, or the universal subset U at line 7 is no longer consistent with Γ. The return value of `find_mus` is therefore a largest set X of variables for which Γ ∧ ∀X.φ is satisfiable.
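For propositional formulas, the MUS definition itself can be checked by brute force: enumerate candidate sets X from largest to smallest and test whether Γ ∧ ∀X.φ is satisfiable by truth tables. The sketch below is our reference-style illustration of the definition on the duck example from Section 1, not the tool's pruned SMT-based search:

```python
from itertools import combinations, product

VARS = ("duck", "goose", "quack", "waddle")

def gamma(m):
    return ((not m["duck"]) or m["quack"]) and \
           ((not (m["duck"] or m["goose"])) or m["waddle"])

def implies_phi(m):
    # Γ => φ for the duck example, with φ = waddle(donald) ∧ quack(donald).
    return (not gamma(m)) or (m["waddle"] and m["quack"])

def assignments(vs):
    for bits in product([False, True], repeat=len(vs)):
        yield dict(zip(vs, bits))

def forall(f, X):
    # ∀X.f as a boolean function of the remaining variables.
    return lambda m: all(f({**m, **r}) for r in assignments(X))

def brute_force_mus(f, consistent_with):
    # Largest X such that Γ ∧ ∀X.f is satisfiable, found by enumeration
    # from the largest candidate sets downwards.
    for k in range(len(VARS), -1, -1):
        for X in combinations(VARS, k):
            psi = forall(f, X)
            if any(consistent_with(m) and psi(m) for m in assignments(VARS)):
                return set(X), psi
    return set(), f

X, psi = brute_force_mus(implies_phi, gamma)
assert X == {"goose", "quack", "waddle"}
# Eliminating the quantifiers from ∀X.(Γ => φ) leaves exactly duck(donald),
# i.e. the preferred solution psi_4 from the introduction.
assert all(psi(m) == m["duck"] for m in assignments(VARS))
```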
# 4 Experimental Evaluation

To explore the size of abductive solutions and the cost of computing such solutions in practice, we collected 1455 abduction problems generated by the Compass program analysis system for inferring missing preconditions of functions. In each abduction problem $(\Gamma \land \psi) \Rightarrow \phi$, $\Gamma$ represents known invariants, and

---PAGE_BREAK---

Fig. 3: Size of Formula vs. Size of Abductive Solution and Time for Abduction

$\phi$ is the weakest precondition of an assertion in some function $f$. Hence, the solution $\psi$ to the abduction problem represents a potential missing precondition of $f$ sufficient to guarantee the safety of the assertion.

The left-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$, measured as the number of leaves in the formula, versus the size of the computed abductive solution. As this graph shows, the abductive solution is generally much smaller than the original formula, demonstrating that our abduction algorithm generates small explanations in practice. The right-hand side of Figure 3 plots the size of the formula $\Gamma \Rightarrow \phi$ versus the time taken to solve the abduction problem. As expected, the time increases with formula size, but remains tractable even for the largest abduction problems in our benchmark set.

## References

1. Peirce, C.: Collected Papers of Charles Sanders Peirce. Belknap Press (1932)
2. Calcagno, C., Distefano, D., O'Hearn, P., Yang, H.: Compositional shape analysis by means of bi-abduction. In: POPL. (2009) 289–300
3. Giacobazzi, R.: Abductive analysis of modular logic programs. In: Proceedings of the 1994 International Symposium on Logic Programming. (1994) 377–391
4. Dillig, I., Dillig, T., Aiken, A.: Automated error diagnosis using abductive inference. In: PLDI. (2012)
5. Gulwani, S., McCloskey, B., Tiwari, A.: Lifting abstract interpreters to quantified logical domains. In: POPL, ACM (2008) 235–246
6. Zhu, H., Dillig, I., Dillig, T.: Abduction-based inference of library specifications for source-sink property verification. Technical Report, College of William & Mary (2012)
7. Li, B., Dillig, I., Dillig, T., McMillan, K., Sagiv, M.: Synthesis of circular compositional program proofs via abduction. In: TACAS. (2013, to appear)
8. Dillig, I., Dillig, T., McMillan, K., Aiken, A.: Minimum satisfying assignments for SMT. In: CAV. (2012)
9. Dillig, I., Dillig, T., Aiken, A.: Small formulas for large programs: On-line constraint simplification in scalable static analysis. In: SAS. (2011) 236–252
samples_new/texts_merged/3251599.md
ADDED
@@ -0,0 +1,679 @@
---PAGE_BREAK---

Research Article

On Retarded Integral Inequalities for Dynamic Systems on Time Scales

Qiao-Luan Li,¹ Xu-Yang Fu,¹ Zhi-Juan Gao,¹ and Wing-Sum Cheung²

¹College of Mathematics & Information Science, Hebei Normal University, Shijiazhuang 050024, China

²Department of Mathematics, The University of Hong Kong, Hong Kong

Correspondence should be addressed to Wing-Sum Cheung; wscheung@hku.hk

Received 13 September 2013; Accepted 16 January 2014; Published 20 February 2014

Academic Editor: Jaeyoung Chung

Copyright © 2014 Qiao-Luan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The object of this paper is to establish some nonlinear retarded inequalities on time scales which can be used as handy tools in the theory of integral equations with time delays.
**1. Introduction**

Integral inequalities play an important role in the qualitative analysis of differential and integral equations. The well-known Gronwall inequality provides explicit bounds for solutions of many differential and integral equations. On the basis of various initiatives, this inequality has been extended and applied in various contexts (see, e.g., [1-4]), including many retarded ones (see, e.g., [5-9]).

Recently, Ye and Gao [7] obtained the following.

**Theorem A.** Let $I = [t_0, T) \subset \mathbb{R}$, $a(t), b(t) \in C(I, \mathbb{R}^+)$, $\phi(t) \in C([t_0 - r, t_0], \mathbb{R}^+)$, $a(t_0) = \phi(t_0)$, and $u(t) \in C([t_0 - r, T), \mathbb{R}^+)$ with

$$
\begin{aligned}
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r)\, ds, && t \in [t_0, T), \\
& u(t) \le \phi(t), && t \in [t_0 - r, t_0),
\end{aligned}
\tag{1}
$$

where $\beta > 0$. Then, the following assertions hold.
(i) Suppose that $\beta > 1/2$. Then,

$$
\begin{aligned}
& u(t) \le e^t \left[ w_1(t) + y_1(t) \right]^{1/2}, && t \in [t_0 + r, T), \\
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r)\, ds, && t \in [t_0, t_0 + r),
\end{aligned}
\tag{2}
$$

where $K_1 = \Gamma(2\beta - 1)e^{-2r}/4^{\beta-1}$, $C_1 = \max\{2, e^{2r}\}$, $w_1(t) = C_1 e^{-2t_0} a^2(t)$, $\phi_1(t) = C_1 e^{-2t_0} \phi^2(t)$, and

$$
\begin{aligned}
y_1(t) &= \int_{t_0}^{t_0+r} K_1 b^2(s) \phi_1(s-r)\, ds \cdot \exp\left( \int_{t_0+r}^{t} K_1 b^2(\tau)\, d\tau \right) \\
&\quad + \int_{t_0+r}^{t} w_1(s-r) K_1 b^2(s) \exp\left( \int_{s}^{t} K_1 b^2(\tau)\, d\tau \right) ds.
\end{aligned}
\tag{3}
$$

If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions, then

$$
u(t) \le \sqrt{C_1}\, a(t) \exp\left( t - t_0 + \frac{K_1}{2} \int_{t_0}^{t} b^2(s)\, ds \right), \quad t \in [t_0, T).
\tag{4}
$$
(ii) Suppose that $0 < \beta \le 1/2$. Then,

$$
\begin{aligned}
& u(t) \le e^t \left[ w_2(t) + y_2(t) \right]^{1/q}, && t \in [t_0 + r, T), \\
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r)\, ds, && t \in [t_0, t_0 + r),
\end{aligned}
\tag{5}
$$

where $K_2 = \left( \Gamma(1 - p(1-\beta))/p^{1-p(1-\beta)} \right)^{1/p}$, $C_2 = \max\{2^{q-1}, e^{qr}\}$, $w_2(t) = C_2 e^{-qt_0} a^q(t)$, $\phi_2(t) = C_2 e^{-qt_0} \phi^q(t)$, $\psi(t) = 2^{q-1} K_2^q e^{-qr} b^q(t)$, and

$$
\begin{aligned}
y_2(t) &= \int_{t_0}^{t_0+r} \psi(s) \phi_2(s-r)\, ds \cdot \exp\left( \int_{t_0+r}^{t} \psi(\tau)\, d\tau \right) \\
&\quad + \int_{t_0+r}^{t} w_2(s-r) \psi(s) \exp\left( \int_{s}^{t} \psi(\tau)\, d\tau \right) ds.
\end{aligned}
\tag{6}
$$

If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing $C^1$-functions, then

$$
u(t) \le C_2^{1/q} a(t) \exp\left( t - t_0 + \frac{1}{q} \int_{t_0}^{t} \psi(s)\, ds \right), \quad t \in [t_0, T).
\tag{7}
$$
In this paper, we will further investigate functions $u$ satisfying the following more general inequalities:

$$
\begin{aligned}
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u^{n/m}(s-r)\, \Delta s, && t \in [t_0, T)_{\mathbb{T}}, \\
& u(t) \le \phi(t), && t \in [t_0 - r, t_0]_{\mathbb{T}},
\end{aligned}
\tag{8}
$$

$$
\begin{aligned}
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} \left[ b(s) u^{n/m}(s) + c(s) u^{n/m}(s-r) \right] \Delta s, && t \in [t_0, T)_{\mathbb{T}}, \\
& u(t) \le \phi(t), && t \in [t_0 - r, t_0]_{\mathbb{T}},
\end{aligned}
\tag{9}
$$

where $\mathbb{T}$ is any time scale, $u(t)$, $a(t)$, $b(t)$, $c(t)$, and $\phi(t)$ are real-valued nonnegative rd-continuous functions defined on $\mathbb{T}$, $m$ and $n$ are positive constants, $m \ge n$, $m \ge 1$, $(1/p) + (1/m) = 1$, $\beta > (p-1)/p$, and $[t_0, T)_{\mathbb{T}} := [t_0, T) \cap \mathbb{T}$.

First, we make a preliminary definition.

**Definition 1.** We say that a function $p : \mathbb{T} \to \mathbb{R}$ is regressive provided that

$$
1 + \mu(t)p(t) \neq 0, \quad \forall t \in \mathbb{T}^k \tag{10}
$$

holds, where $\mu(t)$ is the graininess function; that is, $\mu(t) := \sigma(t) - t$. The set of all regressive and rd-continuous functions $f : \mathbb{T} \to \mathbb{R}$ will be denoted by $\mathcal{R}$.
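As a concrete illustration (not from the paper), the regressivity condition of Definition 1 can be checked numerically on a discrete time scale such as $\mathbb{T} = h\mathbb{Z}$, where the graininess is constant, $\mu(t) = h$. The function name below is hypothetical.

```python
# Check regressivity of p on the discrete time scale T = h*Z,
# where mu(t) = sigma(t) - t = h for every t.
def is_regressive(p, points, h):
    """Return True if 1 + mu(t)*p(t) != 0 for all sampled t (Definition 1)."""
    return all(abs(1 + h * p(t)) > 1e-12 for t in points)

points = [k * 0.5 for k in range(10)]              # a few points of T = 0.5*Z
print(is_regressive(lambda t: -1.0, points, 0.5))  # 1 + 0.5*(-1) = 0.5 != 0 -> True
print(is_regressive(lambda t: -2.0, points, 0.5))  # 1 + 0.5*(-2) = 0       -> False
```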

**2. Main Results**

For convenience, we first cite the following lemma.

**Lemma 2** (see [10]). Let $a \ge 0$, $p \ge q \ge 0$, $p \ne 0$; then

$$
a^{q/p} \leq \frac{q}{p} K^{(q-p)/p} a + \frac{p-q}{p} K^{q/p} \tag{11}
$$

for any $K > 0$.
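As a quick numerical sanity check (not part of the paper), the inequality of Lemma 2 can be verified on a small grid of admissible parameters:

```python
# Numerical spot-check of Lemma 2:
#   a^(q/p) <= (q/p) * K^((q-p)/p) * a + ((p-q)/p) * K^(q/p)
# for a >= 0, p >= q >= 0, p != 0, K > 0 (equality holds at a = K).
def lemma2_holds(a, p, q, K):
    lhs = a ** (q / p)
    rhs = (q / p) * K ** ((q - p) / p) * a + ((p - q) / p) * K ** (q / p)
    return lhs <= rhs + 1e-12  # small tolerance for floating point

ok = all(
    lemma2_holds(a, p, q, K)
    for a in [0.0, 0.3, 1.0, 5.0]
    for (p, q) in [(2.0, 1.0), (3.0, 2.0), (1.5, 1.5)]
    for K in [0.5, 1.0, 4.0]
)
print(ok)  # True
```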

**Lemma 3.** Let $a(t) \ge 0$, $b(t) > 0$, $p(t) := nb(t)/m$, $-b \in \mathcal{R}^+ := \{f \in \mathcal{R} : 1 + \mu(t)f(t) > 0 \text{ for all } t \in \mathbb{T}\}$, $\phi(t) \ge 0$ is rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, and $r \ge 0$ and $m \ge n > 0$ are real constants. If $u(t) \ge 0$ is rd-continuous and

$$
\begin{equation}
\begin{aligned}
& u^m(t) \le a(t) + \int_{t_0}^{t} b(s) u^n(s-r) \Delta s, && t \in [t_0, T)_{\mathbb{T}}, \\
& u(t) \le \phi(t), && t \in [t_0 - r, t_0]_{\mathbb{T}},
\end{aligned}
\tag{12}
\end{equation}
$$

then

$$
\begin{equation}
\begin{split}
u^m(t) &\le a(t) + \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t)\Delta s \\
&\quad + e_{-p}(t_0+r,t) \int_{t_0}^{t_0+r} b(s)\phi^n(s-r)\Delta s \\
&\quad + \frac{m-n}{n}\left(e_{-p}(t_0+r,t)-1\right)
\end{split}
\tag{13}
\end{equation}
$$

for $t \in [t_0 + r, T)_\mathbb{T}$ and

$$
u^m(t) \leq a(t) + \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s \tag{14}
$$

for $t \in [t_0, t_0 + r)_{\mathbb{T}}$.

Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then

$$
u^m(t) \le c(t)e_{-b}(t_0, t), \quad t \in [t_0, T)_{\mathbb{T}}, \tag{15}
$$

where $c(t) := a(t) + (m-n)/n$.

*Proof.* Let $z(t) = \int_{t_0}^t b(s)u^n(s-r)\Delta s$. Then $z(t_0) = 0$, $u^m(t) \le a(t)+z(t)$, and $z(t)$ is nonnegative and nondecreasing for $t \in [t_0, T)_{\mathbb{T}}$. By Lemma 2 (with $K = 1$), we get

$$
\begin{align}
z^\Delta (t) &= b(t) u^n (t-r) \le b(t) [a(t-r) + z(t-r)]^{n/m} \notag \\
&\le b(t) \left[ \frac{n}{m} (a(t-r) + z(t-r)) + \frac{m-n}{m} \right] \notag \\
&\le \frac{n}{m} b(t) z(\sigma(t)) + \frac{n}{m} b(t) a(t-r) + \frac{m-n}{m} b(t) \notag \\
&= p(t) z(\sigma(t)) + p(t) a(t-r) + \frac{m-n}{n} p(t) \tag{16}
\end{align}
$$

for $t \in [t_0 + r, T)_{\mathbb{T}}$. Multiplying (16) by $e_{-p}(t, t_0 + r) > 0$, we get

$$
\begin{align}
(z(t)e_{-p}(t, t_0 + r))^{\Delta} &\le p(t)a(t-r)e_{-p}(t, t_0 + r) \notag \\
&\quad + \frac{m-n}{n}p(t)e_{-p}(t, t_0 + r). \tag{17}
\end{align}
$$

Integrating both sides from $t_0 + r$ to $t$, we obtain

$$
\begin{align}
z(t) &\le e_{-p}(t_0+r,t)z(t_0+r) \notag \\
&\quad + e_{-p}(t_0+r,t) \int_{t_0+r}^{t} p(s)a(s-r)e_{-p}(s,t_0+r) \Delta s \notag \\
&\quad + \frac{m-n}{n} \left(e_{-p}(t_0+r,t)-1\right). \tag{18}
\end{align}
$$

For $t \in [t_0, t_0 + r)_{\mathbb{T}}$, $z^{\Delta}(t) \le b(t)\phi^n(t-r)$, so

$$
z(t) \leq \int_{t_0}^{t} b(s) \phi^n(s-r) \Delta s. \tag{19}
$$

Using (18) and (19), we get

$$
\begin{align}
z(t) &\le e_{-p}(t_0+r,t) \int_{t_0}^{t_0+r} b(s) \phi^n(s-r) \Delta s \notag \\
&\quad + \int_{t_0+r}^{t} p(s) a(s-r) e_{-p}(s,t) \Delta s \notag \\
&\quad + \frac{m-n}{n} \left(e_{-p}(t_0+r,t)-1\right) \tag{20}
\end{align}
$$

for $t \in [t_0 + r, T)_\mathbb{T}$.

Noting that $u^m(t) \le a(t) + z(t)$, inequalities (13) and (14) follow.

Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then for $t \in [t_0, t_0 + r)_{\mathbb{T}}$, by (14), we have

$$
\begin{equation}
\begin{aligned}
u^m(t) &\le a(t) + \phi^n (t-r) \int_{t_0}^t b(s) \Delta s \\
&\le a(t) \left( 1 + \int_{t_0}^t b(s) \Delta s \right) \le c(t) e_{-b}(t_0, t).
\end{aligned}
\tag{21}
\end{equation}
$$

If $t \in [t_0 + r, T)_\mathbb{T}$, by (13),

$$
\begin{align}
u^m(t) &\le a(t) + e_{-p}(t_0+r,t)a(t) \int_{t_0}^{t_0+r} b(s) \Delta s \notag \\
&\quad + a(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \notag \\
&\quad + \frac{m-n}{n} \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \notag \\
&\le c(t) + e_{-p}(t_0+r,t)c(t) \int_{t_0}^{t_0+r} b(s) \Delta s \notag \\
&\quad + c(t) \int_{t_0+r}^{t} p(s) e_{-p}(s,t) \Delta s \tag{22} \\
&= c(t)e_{-p}(t_0+r,t) \left(1 + \int_{t_0}^{t_0+r} b(s)\Delta s\right) \notag \\
&\le c(t)e_{-b}(t_0,t). \notag
\end{align}
$$

The proof is complete. $\square$

**Theorem 4.** Assume that $u(t)$ satisfies condition (8), $a(t) \ge 0$, $K := 2^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}e^{-nr}$, $b_1(t) := (n/m)Kb^m(t)$, and $-Kb^m \in \mathcal{R}^+$; then

$$
\begin{align}
u(t) &\le e^t [w_1(t) + y_1(t)]^{1/m}, && t \in [t_0 + r, T)_\mathbb{T}, \notag \\
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi^{n/m} (s-r) \Delta s, && t \in [t_0, t_0 + r)_\mathbb{T}, \tag{23}
\end{align}
$$

where $w_1(t) := 2^{m-1}a^m(t)e^{-mt_0}$, $\phi_1(t) := e^{-t_0}e^r\phi(t)$, and $y_1(t) := \int_{t_0+r}^{t} b_1(s)w_1(s-r)e_{-b_1}(s,t)\Delta s + e_{-b_1}(t_0+r,t)\int_{t_0}^{t_0+r} K b^m(s)\phi_1^n(s-r)\Delta s + ((m-n)/n)(e_{-b_1}(t_0+r,t)-1)$.

If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 2^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then

$$
u(t) \le e^t [\alpha(t) e_{-Kb^m}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_\mathbb{T}, \tag{24}
$$

where $\alpha(t) := w_1(t) + (m-n)/n$.

*Proof.* The second inequality in (23) is obvious. Next, we prove the first inequality in (23). For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices $p$ and $m$, we obtain from (8)

$$
\begin{align}
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m} (s-r) \Delta s \notag \\
&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \notag \\
&\qquad \times \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m}. \tag{25}
\end{align}
$$

By Jensen's inequality $(\sum_{i=1}^n x_i)^{\sigma} \le n^{\sigma-1} \sum_{i=1}^n x_i^{\sigma}$ (for $\sigma \ge 1$), we get

$$
\begin{align}
u^m(t) &\le 2^{m-1} a^m(t) + 2^{m-1} \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{m/p} \notag \\
&\quad \times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. \tag{26}
\end{align}
$$
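A quick numerical illustration (not from the paper) of the two-term case of Jensen's inequality used here, $(x_1+x_2)^\sigma \le 2^{\sigma-1}(x_1^\sigma + x_2^\sigma)$ for $\sigma \ge 1$:

```python
# Spot-check the power (Jensen) inequality for two nonnegative terms:
#   (x1 + x2)^sigma <= 2^(sigma-1) * (x1^sigma + x2^sigma),  sigma >= 1.
def jensen_two_terms(x1, x2, sigma):
    return (x1 + x2) ** sigma <= 2 ** (sigma - 1) * (x1 ** sigma + x2 ** sigma) + 1e-9

ok = all(
    jensen_two_terms(x1, x2, sigma)
    for x1 in [0.0, 0.7, 2.0]
    for x2 in [0.1, 1.0, 3.5]
    for sigma in [1.0, 1.5, 2.0, 3.0]
)
print(ok)  # True
```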

For the first integral in (26), we have the estimate

$$
\begin{align}
\int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s
&= \int_{0}^{t-t_0} \tau^{p\beta-p} e^{pn(t-\tau)/m} \Delta\tau \notag \\
&\le e^{pnt/m} \int_{0}^{t} \tau^{p\beta-p} e^{-pn\tau/m} \Delta\tau \notag \\
&= e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \int_{0}^{pnt/m} \sigma^{p\beta-p} e^{-\sigma}\Delta\sigma \tag{27} \\
&< e^{pnt/m} \left(\frac{m}{pn}\right)^{p\beta-p+1} \Gamma(p\beta - p + 1).
\end{align}
$$
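In the continuous case $\mathbb{T} = \mathbb{R}$, the final bound in (27) can be checked numerically: the substituted integral $\int_0^{pnt/m} \sigma^{p\beta-p}e^{-\sigma}\,d\sigma$ is a lower incomplete gamma integral and therefore strictly below $\Gamma(p\beta-p+1)$. A sketch with illustrative parameter values (the function name is hypothetical):

```python
import math

# Midpoint-rule check (T = R) that
#   int_0^T sigma^(p*beta - p) * exp(-sigma) d sigma < Gamma(p*beta - p + 1).
def incomplete_gamma_integral(alpha, T, steps=200000):
    """Midpoint rule for int_0^T sigma^alpha * e^(-sigma) d sigma, alpha > -1."""
    h = T / steps
    return sum(((k + 0.5) * h) ** alpha * math.exp(-(k + 0.5) * h) * h
               for k in range(steps))

p, beta, n, m = 2.0, 0.8, 1.0, 2.0   # illustrative values with beta > (p-1)/p
alpha = p * beta - p                  # exponent p*beta - p = -0.4 > -1
T = p * n * 3.0 / m                   # upper limit p*n*t/m at t = 3
partial = incomplete_gamma_integral(alpha, T)
total = math.gamma(p * beta - p + 1)  # Gamma(p*beta - p + 1)
print(partial < total)                # True
```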

Hence,

$$
\begin{equation}
\begin{aligned}
u^m(t) &\le 2^{m-1} a^m(t) + 2^{m-1} e^{nt} \Gamma^{m-1}(p\beta - p + 1) \\
&\quad \times \left(\frac{m}{pn}\right)^{\beta m-1} \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s,
\end{aligned}
\tag{28}
\end{equation}
$$

and so

$$
\begin{align}
(u(t)e^{-t})^m &\le 2^{m-1} a^m(t) e^{-mt_0} + 2^{m-1} \Gamma^{m-1}(p\beta - p+1) \left(\frac{m}{pn}\right)^{\beta m-1} \notag \\
&\qquad \times \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s-r) \Delta s. \tag{29}
\end{align}
$$

Let $v(t) := e^{-t}u(t)$; then we have

$$
v^m(t) \le w_1(t) + K \int_{t_0}^{t} b^m(s) v^n(s-r) \Delta s, \quad t \in [t_0, T)_{\mathbb{T}}. \tag{30}
$$

For $t \in [t_0 - r, t_0]_{\mathbb{T}}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^r e^{-t_0}\phi(t)$; that is, $v(t) \le \phi_1(t)$. By Lemma 3, we get

$$
\begin{equation}
\begin{aligned}
v^m(t) &\le w_1(t) + \int_{t_0+r}^{t} b_1(s) w_1(s-r) e_{-b_1}(s,t) \Delta s \\
&\quad + e_{-b_1}(t_0+r,t) \int_{t_0}^{t_0+r} K b^m(s) \phi_1^n(s-r) \Delta s \\
&\quad + \frac{m-n}{n} \left(e_{-b_1}(t_0+r,t)-1\right).
\end{aligned}
\tag{31}
\end{equation}
$$

Hence, the first inequality in (23) follows.

Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 2^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, by Lemma 3, we have

$$
u(t) \le e^t [\alpha(t) e_{-Kb^m}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}. \tag{32}
$$

The proof is complete. $\square$

**Lemma 5.** Let $a(t) \ge 0$, $b(t) > 0$, $c(t) > 0$, $p(t) := nb(t)/m$, $q(t) := nc(t)/m$, $\gamma(t) := a(t) + (m-n)/n$, $-p, -(p+c) \in \mathcal{R}^+$, and let $\phi(t) \ge 0$ be rd-continuous on $[t_0 - r, t_0]_{\mathbb{T}}$, where $r \ge 0$ and $m \ge n > 0$ are real constants. If $u(t) \ge 0$ is rd-continuous and

$$
\begin{equation}
\begin{aligned}
& u^m(t) \le a(t) + \int_{t_0}^{t} [b(s) u^n(s) + c(s) u^n(s-r)] \Delta s, \quad t \in [t_0, T)_\mathbb{T}, \\
& u(t) \le \phi(t), \quad t \in [t_0 - r, t_0]_{\mathbb{T}},
\end{aligned}
\tag{33}
\end{equation}
$$

then

$$
\begin{align}
u^m(t) &\le a(t) + \int_{t_0+r}^{t} [p(s)\gamma(s)+q(s)\gamma(s-r)] e_{-(p+q)}(s,t)\Delta s \notag \\
&\quad + e_{-(p+q)}(t_0+r,t) \int_{t_0}^{t_0+r} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t_0+r)\Delta s \tag{34}
\end{align}
$$

for $t \in [t_0 + r, T)_\mathbb{T}$ and

$$
u^m(t) \leq a(t) + \int_{t_0}^{t} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s \tag{35}
$$

for $t \in [t_0, t_0 + r)_{\mathbb{T}}$.

Furthermore, if $a(t)$ and $\phi(t)$ are nondecreasing with $a(t_0) = \phi^n(t_0)$, then

$$
u^m(t) \leq \gamma(t) e_{-(p+c)}(t_0, t), \quad t \in [t_0, T)_{\mathbb{T}}. \tag{36}
$$

*Proof.* Let $z(t) = \int_{t_0}^t [b(s)u^n(s)+c(s)u^n(s-r)]\Delta s$. Then $z(t_0) = 0$, $u^m(t) \leq a(t) + z(t)$, and $z(t)$ is nonnegative and nondecreasing for $t \in [t_0, T)_{\mathbb{T}}$. Further, we have

$$
z^\Delta (t) = b (t) u^n (t) + c (t) u^n (t-r). \tag{37}
$$

For $t \in [t_0, t_0 + r)_{\mathbb{T}}$, using Lemma 2, we have

$$
\begin{align}
z^\Delta (t) &\le b (t) (a (t) + z (t))^{n/m} + c (t) \phi^n (t-r) \notag \\
&\le b (t) \left[ \frac{n}{m} (a (t) + z (t)) + \frac{m-n}{m} \right] + c (t) \phi^n (t-r) \notag \\
&\le p (t) \gamma (t) + p (t) z (\sigma (t)) + c (t) \phi^n (t-r), \notag
\end{align}
$$

so that

$$
(e_{-p}(t, t_0) z(t))^\Delta \le (p(t)\gamma(t)+c(t)\phi^n(t-r))e_{-p}(t, t_0). \tag{38}
$$

Integrating both sides from $t_0$ to $t$, we obtain

$$
z(t) \leq \int_{t_0}^{t} [p(s)\gamma(s)+c(s)\phi^n(s-r)] e_{-p}(s,t)\Delta s. \tag{39}
$$

For $t \in [t_0 + r, T)_{\mathbb{T}}$,

$$
\begin{equation}
\begin{aligned}
z^{\Delta}(t) &\le b(t)[a(t) + z(t)]^{n/m} + c(t)[a(t-r) + z(t-r)]^{n/m} \\
&\le b(t)\left(\frac{n}{m}(a(t)+z(t)) + \frac{m-n}{m}\right) + c(t)\left(\frac{n}{m}(a(t-r)+z(t-r)) + \frac{m-n}{m}\right) \\
&\le \left(\frac{n}{m}b(t) + \frac{n}{m}c(t)\right)z(\sigma(t)) + \frac{n}{m}b(t)a(t) + \frac{n}{m}c(t)a(t-r) \\
&\quad + \frac{m-n}{m}b(t) + \frac{m-n}{m}c(t) \\
&\le (p(t)+q(t))z(\sigma(t)) + p(t)\gamma(t) + q(t)\gamma(t-r).
\end{aligned}
\tag{40}
\end{equation}
$$

Hence, we get

$$
(e_{-(p+q)}(t, t_0 + r) z(t))^\Delta \le (p(t) \gamma(t) + q(t) \gamma(t-r)) e_{-(p+q)}(t, t_0 + r). \tag{41}
$$

Integrating both sides from $t_0 + r$ to $t$, we obtain

$$
\begin{align}
z(t) &\le e_{-(p+q)}(t_0+r,t)z(t_0+r) \notag \\
&\quad + e_{-(p+q)}(t_0+r,t) \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t_0+r) \Delta s \notag \\
&\le e_{-(p+q)}(t_0+r,t) \int_{t_0}^{t_0+r} [p(s)\gamma(s) + c(s)\phi^n(s-r)] e_{-p}(s,t_0+r) \Delta s \notag \\
&\quad + \int_{t_0+r}^{t} [p(s)\gamma(s) + q(s)\gamma(s-r)] e_{-(p+q)}(s,t) \Delta s. \tag{42}
\end{align}
$$

Using $u^m(t) \le a(t) + z(t)$, we get inequalities (34) and (35).

Finally, if $a(t)$ and $\phi(t)$ are nondecreasing, then, by (35),

$$
\begin{align}
u^m(t) &\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-p}(s,t) \Delta s \right) \notag \\
&\le \gamma(t) \left( 1 + \int_{t_0}^{t} (p(s) + c(s)) e_{-(p+c)}(s,t) \Delta s \right) \tag{43} \\
&\le \gamma(t) e_{-(p+c)}(t_0,t) \notag
\end{align}
$$

for $t \in [t_0, t_0 + r)_{\mathbb{T}}$. Furthermore, by (34),

$$
\begin{align}
u^m(t) &\le \gamma(t) + \gamma(t) e_{-(p+q)}(t_0 + r, t) \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-p}(s,t_0+r) \Delta s \notag \\
&\quad + \gamma(t) \int_{t_0+r}^{t} (p(s)+q(s)) e_{-(p+q)}(s,t) \Delta s \notag \\
&\le \gamma(t) e_{-(p+q)}(t_0+r, t) \left( 1 + \int_{t_0}^{t_0+r} (p(s)+c(s)) e_{-(p+c)}(s,t_0+r) \Delta s \right) \tag{44} \\
&\le \gamma(t) e_{-(p+c)}(t_0, t) \notag
\end{align}
$$

for $t \in [t_0 + r, T)_{\mathbb{T}}$. The proof is complete. $\square$

**Theorem 6.** Assume that $u(t)$ satisfies condition (9), $a(t) \ge 0$, $K := 3^{m-1}\Gamma^{m-1}(p\beta - p + 1)(m/pn)^{\beta m-1}$, $p(t) := nKb^m(t)/m$, $c_1(t) := Ke^{-nr}c^m(t)$, $q(t) := (n/m)c_1(t)$, and $-p, -(p+c_1) \in \mathcal{R}^+$.

If, in addition, $a(t)$ and $\phi(t)$ are nondecreasing, and $a^m(t_0) = 3^{1-m}e^{(m-n)t_0}e^{nr}\phi^n(t_0)$, then

$$
u(t) \le e^{t} [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}, \tag{45}
$$

where $\gamma(t) = 3^{m-1}a^m(t)e^{-mt_0} + (m-n)/n$.

*Proof.* For $t \in [t_0, T)_\mathbb{T}$, using Hölder's inequality with indices $p$ and $m$, we obtain from (9) that

$$
\begin{align}
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} b(s) e^{-ns/m} u^{n/m}(s) \Delta s \notag \\
&\quad + \int_{t_0}^{t} (t-s)^{\beta-1} e^{ns/m} c(s) e^{-ns/m} u^{n/m}(s-r) \Delta s \notag \\
&\le a(t) + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \left( \int_{t_0}^{t} b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} \notag \\
&\quad + \left( \int_{t_0}^{t} (t-s)^{p\beta-p} e^{pns/m} \Delta s \right)^{1/p} \left( \int_{t_0}^{t} c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m} \notag \\
&\le a(t) + e^{nt/m} \left(\frac{m}{pn}\right)^{\beta-1+1/p} \Gamma^{1/p}(p\beta - p + 1) \notag \\
&\quad \times \left[ \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s \right)^{1/m} + \left( \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right)^{1/m} \right]. \tag{46}
\end{align}
$$

By Jensen's inequality $(\sum_{i=1}^n x_i)^\sigma \le n^{\sigma-1} \sum_{i=1}^n x_i^\sigma$, we get

$$
\begin{align}
u^m(t) &\le 3^{m-1}a^m(t) + 3^{m-1}e^{nt}\left(\frac{m}{pn}\right)^{m\beta-1}\Gamma^{m-1}(p\beta - p + 1) \notag \\
&\quad \times \left(\int_{t_0}^t b^m(s)e^{-ns}u^n(s)\Delta s + \int_{t_0}^t c^m(s)e^{-ns}u^n(s-r)\Delta s\right). \tag{47}
\end{align}
$$

So,

$$
\begin{equation}
\begin{aligned}
(u(t)e^{-t})^m &\le 3^{m-1} a^m(t) e^{-mt_0} + 3^{m-1} \left(\frac{m}{pn}\right)^{m\beta-1} \Gamma^{m-1}(p\beta - p + 1) \\
&\quad \times \left( \int_{t_0}^t b^m(s) e^{-ns} u^n(s) \Delta s + \int_{t_0}^t c^m(s) e^{-ns} u^n(s-r) \Delta s \right).
\end{aligned}
\tag{48}
\end{equation}
$$

Let $v(t) := e^{-t}u(t)$ and $w_2(t) := 3^{m-1}a^m(t)e^{-mt_0}$; we have

$$
\begin{equation}
\begin{aligned}
v^m(t) &\le w_2(t) + \int_{t_0}^t K b^m(s) v^n(s) \Delta s \\
&\quad + \int_{t_0}^t K e^{-nr} c^m(s) v^n(s-r) \Delta s
\end{aligned}
\tag{49}
\end{equation}
$$

for $t \in [t_0, T)_\mathbb{T}$. For $t \in [t_0 - r, t_0]_\mathbb{T}$, we have $e^{-t}u(t) \le e^{-t}\phi(t) \le e^{-t_0}e^r\phi(t)$; that is, $v(t) \le \phi_1(t)$. By Lemma 5, we get

$$
u(t) \le e^t [\gamma(t) e_{-(p+c_1)}(t_0, t)]^{1/m}, \quad t \in [t_0, T)_{\mathbb{T}}. \tag{50}
$$

The proof is complete. $\square$

The following is a simple consequence of Theorem 4.

**Corollary 7.** Suppose that $m = n = 2$,

$$
\begin{align}
& u(t) \le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) u(s-r) \Delta s, \quad t \in [t_0, T)_\mathbb{T}, \tag{51} \\
& u(t) \le \phi(t), \quad t \in [t_0 - r, t_0]_\mathbb{T}; \notag
\end{align}
$$

then

$$
\begin{align}
u(t) &\le e^t \left[ w_1(t) + \int_{t_0+r}^t Kb^2(s) w_1(s-r) e_{-Kb^2}(s,t) \Delta s \right. \notag \\
&\qquad \left. + e_{-Kb^2}(t_0+r,t) \int_{t_0}^{t_0+r} Kb^2(s) \phi_1^2(s-r) \Delta s \right]^{1/2}, \quad t \in [t_0+r,T)_\mathbb{T}, \notag \\
u(t) &\le a(t) + \int_{t_0}^{t} (t-s)^{\beta-1} b(s) \phi(s-r) \Delta s, \quad t \in [t_0, t_0+r)_\mathbb{T}, \notag
\end{align}
$$

where $K := \Gamma(2\beta - 1)e^{-2r}/4^{\beta-1}$, $w_1(t) := 2a^2(t)e^{-2t_0}$, and $\phi_1(t) := e^{-t_0}e^r\phi(t)$.
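As a consistency check (not in the paper), the corollary's constant agrees with the constant $K$ of Theorem 4 specialized to $m = n = 2$, for which $p = 2$ follows from $1/p + 1/m = 1$:

```python
import math

beta, r = 0.8, 0.5   # illustrative values with beta > 1/2
m = n = 2.0
p = 2.0              # from 1/p + 1/m = 1 with m = 2

# K from Theorem 4:
#   2^(m-1) * Gamma^(m-1)(p*beta - p + 1) * (m/(p*n))^(beta*m - 1) * e^(-n*r)
K_theorem4 = (2 ** (m - 1)
              * math.gamma(p * beta - p + 1) ** (m - 1)
              * (m / (p * n)) ** (beta * m - 1)
              * math.exp(-n * r))

# K from Corollary 7: Gamma(2*beta - 1) * e^(-2r) / 4^(beta - 1)
K_corollary = math.gamma(2 * beta - 1) * math.exp(-2 * r) / 4 ** (beta - 1)

print(abs(K_theorem4 - K_corollary) < 1e-12)  # True
```

Algebraically, $2\,\Gamma(2\beta-1)\,(1/2)^{2\beta-1}e^{-2r} = \Gamma(2\beta-1)\,2^{2-2\beta}e^{-2r} = \Gamma(2\beta-1)e^{-2r}/4^{\beta-1}$, so the two expressions coincide.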

If $\mathbb{T} = \mathbb{R}$, then the conclusion reduces to that of Theorem A for $\beta > 1/2$.

**Conflict of Interests**

The authors declare that there is no conflict of interests regarding the publication of this paper.

**Acknowledgments**

The first author's research was supported by the NNSF of China (11071054) and the Natural Science Foundation of Hebei Province (A2011205012). The corresponding author's research was partially supported by an HKU URG grant.

**References**

[1] R. P. Agarwal, S. Deng, and W. Zhang, "Generalization of a retarded Gronwall-like inequality and its applications," *Applied Mathematics and Computation*, vol. 165, no. 3, pp. 599–612, 2005.

[2] B. G. Pachpatte, "Explicit bounds on certain integral inequalities," *Journal of Mathematical Analysis and Applications*, vol. 267, no. 1, pp. 48–61, 2002.

[3] W.-S. Cheung, "Some new nonlinear inequalities and applications to boundary value problems," *Nonlinear Analysis: Theory, Methods & Applications*, vol. 64, no. 9, pp. 2112–2128, 2006.

[4] C.-J. Chen, W.-S. Cheung, and D. Zhao, "Gronwall-Bellman-type integral inequalities and applications to BVPs," *Journal of Inequalities and Applications*, vol. 2009, Article ID 258569, 15 pages, 2009.

[5] Y. G. Sun, "On retarded integral inequalities and their applications," *Journal of Mathematical Analysis and Applications*, vol. 301, no. 2, pp. 265–275, 2005.

[6] H. Zhang and F. Meng, "On certain integral inequalities in two independent variables for retarded equations," *Applied Mathematics and Computation*, vol. 203, no. 2, pp. 608–616, 2008.

[7] H. Ye and J. Gao, "Henry-Gronwall type retarded integral inequalities and their applications to fractional differential equations with delay," *Applied Mathematics and Computation*, vol. 218, no. 8, pp. 4152–4160, 2011.

[8] O. Lipovan, "A retarded Gronwall-like inequality and its applications," *Journal of Mathematical Analysis and Applications*, vol. 252, no. 1, pp. 389–401, 2000.

[9] O. Lipovan, "A retarded integral inequality and its applications," *Journal of Mathematical Analysis and Applications*, vol. 285, no. 2, pp. 436–443, 2003.

[10] F. Jiang and F. Meng, "Explicit bounds on some new nonlinear integral inequalities with delay," *Journal of Computational and Applied Mathematics*, vol. 205, no. 1, pp. 479–486, 2007.

samples_new/texts_merged/3295535.md ADDED
The diff for this file is too large to render. See raw diff

samples_new/texts_merged/3438890.md ADDED

Footstep Planning Based on Univector Field Method for Humanoid Robot

Youngdae Hong and Jong-Hwan Kim

Department of Electrical Engineering and Computer Science, KAIST, Daejeon, Korea
{ydhong,johkim}@rit.kaist.ac.kr
http://rit.kaist.ac.kr

**Abstract.** This paper proposes a footstep planning algorithm for a humanoid robot, based on a univector field method optimized by evolutionary programming, that lets the robot reach a target point in a dynamic environment. The univector field method is employed to determine the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, which extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is utilized to generate every joint trajectory of the robot satisfying the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid static or moving obstacles but also to step over static obstacles. The performance of the proposed algorithm is demonstrated by computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.

**Keywords:** Footstep planning, univector field method, evolutionary programming, humanoid robot, modifiable walking pattern generator.

# 1 Introduction

Research on humanoid robots has recently made rapid progress toward dexterous motions along with hardware development. Various humanoid robots have demonstrated stable walking with control schemes [1]-[5]. Considering the future of the humanoid robot as a service robot, research on navigation in indoor environments with obstacles, such as homes and offices, is now needed.

In indoor environments, most navigation research has been carried out for differential drive mobile robots. Navigation methods for mobile robots are categorized into separated navigation and unified navigation. The separated navigation method, such as structural navigation and deliberative navigation, treats path planning and path following as two isolated tasks. In the path planning step, a path generation algorithm connects the starting point with the end point without crossing the obstacles. To find the shortest path, many search algorithms such as the A\* algorithm and dynamic programming have been applied [6]. On the other hand, in unified navigation methods such as the artificial potential field method [7], [8], the path planning step and the path following step are unified in one task.

In navigation research, differential drive mobile robots make a detour to avoid obstacles and arrive at a goal position. Humanoid robots, on the other hand, are able to traverse obstacles with their legs. When they move around in an environment with obstacles, the positions of their footprints are important. Thus, footstep planning for humanoid robots is an important research issue.

As research on footstep planning, an algorithm that obtains information on an obstacle's shape and location by sensors was presented [9]. From the obtained information, the robot determines its step length, predefined as three types of step lengths, and its motion, such as circumventing, stepping over, or stepping on obstacles. Also, an algorithm finding an alternative path employing A\* with a heuristic cost function was developed [10]. A stable region for the robot's footprints is predetermined, and then a few of their placements are selected as a discrete set. This algorithm checks collisions between the robot and obstacles by a 2D polygon intersection test. A human-like strategy for footstep planning was also presented [11].

In this paper, a footstep planning algorithm based on the univector field method for a humanoid robot is proposed. The univector field method is one of the unified navigation methods, designed to enhance the performance of fast differential drive mobile robots. Using this method, a robot can navigate rapidly to the desired position and orientation without oscillations and unwanted inefficient motions [12], [13]. The footstep planning algorithm determines the moving direction of the humanoid robot in real time and has low computing cost by employing the univector field method. Besides, it is able to modify foot placement depending on an obstacle's position. By inputting the moving direction and step length of the robot at every footstep to the modifiable walking pattern generator [14], every joint trajectory is generated. The proposed algorithm generates an evolutionarily optimized path by evolutionary programming (EP), considering the hardware limits of the robot, and makes the robot arrive at the goal with the desired direction. Computer simulations are carried out with a model of HanSaRam (HSR)-VIII, a small-sized humanoid robot developed in the Robot Intelligence Technology (RIT) Lab, KAIST.

The rest of the paper is organized as follows: Section 2 describes an overview of the univector field method and Section 3 explains the MWPG. In Section 4, a footstep planning algorithm is proposed. Computer simulation results are presented in Section 5. Finally, concluding remarks follow in Section 6.
# 2 Univector Field Method

The univector field method is a path planning method developed for differential drive mobile robots. The univector field consists of the *move-to-goal univector field*, which leads a robot to its destination, and the *avoid-obstacle univector field*, which makes the robot avoid obstacles. The robot's moving direction is decided by combining these two fields. The univector field method requires relatively low computing power because it does not generate a whole path from the start point to the destination before moving; instead, it generates a moving direction at every step in real time. In addition, it makes it easy to plan a path in a dynamic environment with moving obstacles. Thus, this path planning method is adopted and extended for a humanoid robot.
---PAGE_BREAK---

## 2.1 Move-to-Goal Univector Field

The move-to-goal univector field is defined as
$$ \mathbf{v}_{muf} = [-\cos(\theta_{muf}) \;\; -\sin(\theta_{muf})]^T, \quad (1) $$
where

$$ \theta_{muf} = \cos^{-1}\left(\frac{p_x - g_x}{d_{goal}}\right), \quad d_{goal} = \sqrt{(p_x - g_x)^2 + (p_y - g_y)^2}, $$
$\theta_{muf}$ is the angle, measured from the x-axis, of the robot's position relative to the goal, $d_{goal}$ is the distance between the center of the goal and the robot's position, and $(p_x, p_y)$ and $(g_x, g_y)$ are the robot's position and the goal position, respectively.
## 2.2 Avoid-Obstacle Univector Field

The avoid-obstacle univector field is defined as

$$ \mathbf{v}_{auf} = [\cos(\theta_{auf}) \sin(\theta_{auf})]^T, \quad (2) $$
where

$$ \theta_{auf} = \cos^{-1}\left(\frac{p_x - o_x}{d_{ob}}\right), \quad d_{ob} = \sqrt{(p_x - o_x)^2 + (p_y - o_y)^2}, $$
$\theta_{auf}$ is the angle, measured from the x-axis, of the robot's position relative to the obstacle, $d_{ob}$ is the distance between the center of the obstacle and the robot's position, and $(o_x, o_y)$ is the position of the obstacle.
The total univector field is determined by properly combining the move-to-goal univector field and the avoid-obstacle univector field. The total univector $\mathbf{v}_{tuf}$ is defined as
$$ \mathbf{v}_{tuf} = w_{muf}\mathbf{v}_{muf} + w_{auf}\mathbf{v}_{auf}, \quad (3) $$
where $w_{muf}$ and $w_{auf}$ represent the scale factors of the move-to-goal univector field and the avoid-obstacle univector field, respectively.
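As a concrete illustration, the fields of Eqs. (1)-(3) can be sketched in Python. This is a hypothetical implementation (the function names and tuple representation of points are ours, not from the paper); note that $[-\cos\theta_{muf},\,-\sin\theta_{muf}]^T$ is simply the unit vector from the robot toward the goal, and Eq. (2) the unit vector away from the obstacle.

```python
import math

def move_to_goal(p, g):
    # Eq. (1): unit vector pointing from the robot's position p toward the goal g
    d_goal = math.hypot(p[0] - g[0], p[1] - g[1])
    return (-(p[0] - g[0]) / d_goal, -(p[1] - g[1]) / d_goal)

def avoid_obstacle(p, o):
    # Eq. (2): unit vector pointing away from the obstacle center o
    d_ob = math.hypot(p[0] - o[0], p[1] - o[1])
    return ((p[0] - o[0]) / d_ob, (p[1] - o[1]) / d_ob)

def total_field(p, g, o, w_muf=1.0, w_auf=1.0):
    # Eq. (3): weighted combination of the two fields (weights are assumed values)
    vm, va = move_to_goal(p, g), avoid_obstacle(p, o)
    return (w_muf * vm[0] + w_auf * va[0], w_muf * vm[1] + w_auf * va[1])
```

In an actual planner the resulting vector would only fix the heading direction; its magnitude is irrelevant after normalization.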
# 3 Modifiable Walking Pattern Generator

The modifiable walking pattern generator (MWPG) extends the conventional 3D linear inverted pendulum model (3D-LIPM) by allowing ZMP variation during the single support phase. In the conventional 3D-LIPM without ZMP variation, only the homogeneous solutions of the 3D-LIPM dynamic equation were considered. By also considering the particular solutions, more extensive and unrestricted walking patterns can be generated through the ZMP variation. The solutions with both homogeneous and particular parts are as follows:
Sagittal motion:

$$ \begin{bmatrix} x_f \\ v_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} x_i \\ v_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{p}(t) dt \\ \int_0^T C_t \bar{p}(t) dt \end{bmatrix}, \quad (4) $$
---PAGE_BREAK---

Lateral motion:

$$ \begin{bmatrix} y_f \\ w_f T_c \end{bmatrix} = \begin{bmatrix} C_T & S_T \\ S_T & C_T \end{bmatrix} \begin{bmatrix} y_i \\ w_i T_c \end{bmatrix} - \frac{1}{T_c} \begin{bmatrix} \int_0^T S_t \bar{q}(t) dt \\ \int_0^T C_t \bar{q}(t) dt \end{bmatrix}, \quad (5) $$
where $(x_i, v_i)/(x_f, v_f)$ and $(y_i, w_i)/(y_f, w_f)$ represent the initial/final position and velocity of the CM in the sagittal and lateral planes, respectively. $C_t$ and $S_t$ are defined as $\cosh(t/T_c)$ and $\sinh(t/T_c)$, with time constant $T_c = \sqrt{Z_c/g}$. The functions $p(t)$ and $q(t)$ are the ZMP trajectories for the sagittal and lateral planes, respectively, with $\bar{p}(t) = p(T-t)$ and $\bar{q}(t) = q(T-t)$. Through the variation of the ZMP, the walking state (WS), i.e., the state of the point mass in the 3D-LIPM represented in terms of CM position and linear velocity, can be moved to the desired WS within the region of possible trajectories expanded by applying the particular solutions. By means of the MWPG, a humanoid robot can change both sagittal and lateral step lengths, the rotation angle of the ankles, and the period of the walking pattern [14].
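For intuition, when the ZMP is held at the origin ($\bar{p}(t) \equiv 0$) the integral terms of Eq. (4) vanish and only the homogeneous part remains, which can be evaluated in closed form. The sketch below uses our own variable names and is not the MWPG implementation itself:

```python
import math

def lipm_sagittal_final(x_i, v_i, T, T_c):
    # Homogeneous part of Eq. (4): constant ZMP at the origin, so the
    # integral (particular) terms vanish and the state undergoes a
    # hyperbolic rotation [x_f; v_f*T_c] = [[C_T, S_T],[S_T, C_T]] [x_i; v_i*T_c]
    C_T, S_T = math.cosh(T / T_c), math.sinh(T / T_c)
    x_f = C_T * x_i + S_T * v_i * T_c
    v_f = (S_T * x_i + C_T * v_i * T_c) / T_c
    return x_f, v_f
```

Since $\cosh^2 - \sinh^2 = 1$, this map preserves the "orbital energy" $x^2 - (vT_c)^2$, which is what restricts the reachable walking states until the particular (ZMP-variation) terms are added.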
# 4 Footstep Planning Algorithm

In this section, the footstep planning algorithm for a humanoid robot is described. It decides the moving orientation at every footstep by the univector field navigation method. Using the determined orientations, it calculates the exact foot placements. Subsequently, by inputting the moving direction and step length of the robot at every footstep, as determined by the proposed footstep planning algorithm, to the MWPG, every joint trajectory is generated to satisfy the planned footsteps.
## 4.1 Path Planning

To apply the univector field method to path generation for a humanoid robot, the following three issues are considered. To generate a natural and effective path, the obstacle's boundary and the virtual obstacle [15] are introduced into the avoid-obstacle univector field, considering the obstacle's size and movement, respectively. In addition, a hyperbolic spiral univector field is developed as the move-to-goal univector field in order to reach a destination with a desired orientation [13].
**Boundary of Avoid-Obstacle Univector Field.** The repulsive univector field of an obstacle is not generated at every position, but only within a restricted range, by applying a boundary to the avoid-obstacle univector field. In addition, the magnitude of the repulsive univector field decreases linearly as the robot's position becomes more distant from the center of the obstacle. Consequently, a robot is not influenced by the repulsive univector field in the region beyond the boundary of the obstacles. Considering this boundary effect, the avoid-obstacle univector $\mathbf{v}_{auf}$ is defined as
$$ \mathbf{v}_{auf} = k_b [\cos(\theta_{auf}) \sin(\theta_{auf})]^T \quad (6) $$

where

$$ k_b = \frac{d_{boun} - (d_{ob} - o_{size})}{d_{boun}}, $$
---PAGE_BREAK---

$o_{size}$ is the obstacle's radius, $d_{boun}$ is the size of the boundary and $k_b$ is a scale factor. By introducing the boundary into the avoid-obstacle univector field, an effective path is generated.
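The decay factor of Eq. (6) can be sketched as follows. The names are ours, and we additionally clip $k_b$ to $[0, 1]$, reflecting the statement that the repulsive field has no influence beyond the boundary (the clipping itself is our assumption, not spelled out as a formula in the text):

```python
import math

def avoid_obstacle_bounded(p, o, o_size, d_boun):
    # Eq. (6): repulsive unit vector scaled by k_b, which decays linearly
    # from 1 at the obstacle surface (d_ob = o_size) to 0 at d_ob = o_size + d_boun
    d_ob = math.hypot(p[0] - o[0], p[1] - o[1])
    k_b = (d_boun - (d_ob - o_size)) / d_boun
    k_b = max(0.0, min(1.0, k_b))  # assumed: zero influence outside the boundary
    return (k_b * (p[0] - o[0]) / d_ob, k_b * (p[1] - o[1]) / d_ob)
```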
**Virtual Obstacle.** The virtual obstacle is defined by introducing a shifting vector to the center position of a real obstacle, where the direction of the shifting vector is opposite to the robot's moving direction and its magnitude is proportional to the robot's moving velocity. The position of the center of the virtual obstacle is then obtained as
$$[o_x^{\text{virtual}}, o_y^{\text{virtual}}]^T = [o_x^{\text{real}}, o_y^{\text{real}}]^T + \mathbf{s}, \quad (7)$$

$$\mathbf{s} = -k_v \mathbf{v}_{\text{robot}},$$
where $(o_x^{\text{virtual}}, o_y^{\text{virtual}})$ is the virtual obstacle's position, $(o_x^{\text{real}}, o_y^{\text{real}})$ is the real obstacle's position, $\mathbf{s}$ is the shifting vector, $k_v$ is the scale factor of the virtual obstacle and $\mathbf{v}_{\text{robot}}$ is the robot's velocity vector. When calculating the avoid-obstacle univector, the virtual obstacle's position is used instead of the real obstacle's. By introducing the virtual obstacle, a robot can avoid obstacles more safely and smoothly along the path generated at every step.
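Eq. (7) amounts to a one-line shift of the obstacle center against the robot's motion. A sketch (hypothetical helper name; tuples as 2D vectors):

```python
def virtual_obstacle(o_real, v_robot, k_v):
    # Eq. (7): virtual center = real center + s, with s = -k_v * v_robot,
    # i.e. the obstacle is shifted toward the approaching robot
    return (o_real[0] - k_v * v_robot[0], o_real[1] - k_v * v_robot[1])
```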
**Hyperbolic Spiral Univector Field.** The move-to-goal univector field is designed with a hyperbolic spiral so that a robot reaches a target point with a desired orientation. The hyperbolic spiral univector field $\mathbf{v}_{huf}$ is defined as

$$\mathbf{v}_{huf} = [\cos(\phi_h) \sin(\phi_h)]^T, \quad (8)$$
where

$$\phi_h = \begin{cases} \theta \pm \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho+k_r}\right) & \text{if } \rho > d_e \\ \theta \pm \frac{\pi}{2} \sqrt{\frac{\rho}{d_e}} & \text{if } 0 \le \rho \le d_e, \end{cases}$$
$\theta$ is the angle, measured from the x-axis, of the robot's position relative to the goal. The notation $\pm$ represents the direction of movement: $+$ when the robot moves clockwise and $-$ when it moves counter-clockwise. $k_r$ is an adjustable parameter: as $k_r$ becomes larger, the maximal value of the curvature derivative decreases and the contour of the spiral becomes smoother. $\rho$ is the distance between the center of the destination and the robot's position, and $d_e$ is a predefined radius that decides the size of the spiral.
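The two branches of $\phi_h$ can be sketched directly from the case definition above (a hypothetical implementation with our own names). Note that at $\rho = d_e$ both branches give $\theta \pm \pi/2$, so the field is continuous across the switch:

```python
import math

def spiral_angle(theta, rho, d_e, k_r, clockwise=True):
    # Eq. (8): heading angle phi_h of the hyperbolic spiral univector field.
    # theta: angle of the robot as seen from the goal; rho: distance to the goal.
    sign = 1.0 if clockwise else -1.0
    if rho > d_e:
        # outer branch: approaches theta + sign*pi as rho -> infinity
        return theta + sign * (math.pi / 2.0) * (2.0 - (d_e + k_r) / (rho + k_r))
    # inner branch: smoothly flattens toward theta as rho -> 0
    return theta + sign * (math.pi / 2.0) * math.sqrt(rho / d_e)
```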
By designing the move-to-goal univector field with a hyperbolic spiral, a robot can arrive at a destination with any orientation angle. In this paper, in order to obtain the desired posture at a target position, two hyperbolic spiral univector fields are combined. The move-to-goal univector field is defined as
$$\phi_{\text{muf}} = \begin{cases} \theta_{\text{up}} + \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{up}}+k_r}\right) & \text{if } p_y^h > g_{\text{size}} \\ \theta_{\text{down}} - \frac{\pi}{2} \left(2 - \frac{d_e+k_r}{\rho_{\text{down}}+k_r}\right) & \text{if } p_y^h < -g_{\text{size}} \\ \theta_{\text{dir}} & \text{otherwise} \end{cases}, \quad (9)$$
with

$$\rho_{\text{up}} = \sqrt{p_x^{h2} + (p_y^h - d_e - g_{\text{size}})^2}, \quad \rho_{\text{down}} = \sqrt{p_x^{h2} + (p_y^h + d_e + g_{\text{size}})^2},$$
---PAGE_BREAK---

$$ \theta_{up} = \tan^{-1}\left(\frac{p_y^h - d_e - g_{size}}{p_x^h}\right) + \theta_{dir}, \quad \theta_{down} = \tan^{-1}\left(\frac{p_y^h + d_e + g_{size}}{p_x^h}\right) + \theta_{dir}, $$

$$ \mathbf{p}^h = \mathbf{M}_{rot} \mathbf{M}_{trans} \mathbf{p}, $$

$$ \mathbf{M}_{trans} = \begin{bmatrix} 1 & 0 & -g_x \\ 0 & 1 & -g_y \\ 0 & 0 & 1 \end{bmatrix}, \quad \mathbf{M}_{rot} = \begin{bmatrix} \cos(-\theta_{dir}) & -\sin(-\theta_{dir}) & 0 \\ \sin(-\theta_{dir}) & \cos(-\theta_{dir}) & 0 \\ 0 & 0 & 1 \end{bmatrix}, $$

$$ \mathbf{p} = [p_x \ p_y \ 1]^T, \quad \mathbf{p}^h = [p_x^h \ p_y^h \ 1]^T, $$
where $g_{size}$ is the radius of the goal region and $\theta_{dir}$ is the desired arrival angle at the target. By using a move-to-goal univector field composed of two hyperbolic spiral univector fields, a robot can arrive at a goal with any arrival angle.
## 4.2 Footstep Planning

While a humanoid robot moves towards a destination, there are situations where it has to step over an obstacle, provided the obstacle is not too high. This is the main difference from path planning for a differential drive mobile robot, which tries to find a detour route to circumvent obstacles instead of stepping over them. In this section, a footstep planning algorithm is proposed that enables a robot to traverse obstacles effectively.
It is natural and efficient for a robot to step over obstacles instead of detouring, since its moving direction is maintained. The proposed algorithm enables a robot to step over obstacles with minimal step length while maintaining its moving direction. It is assumed that the obstacles are rectangles with narrow width and long length, as shown in Fig. 1.

**Fig. 1.** Stepping over an obstacle. (a) Left leg is the supporting leg, without an additional step. (b) Left leg is the supporting leg, with an additional step. (c) Right leg is the supporting leg, without an additional step. (d) Right leg is the supporting leg, with an additional step.

---PAGE_BREAK---

**Fig. 2.** Stepping over an obstacle when an obstacle is in front of only one leg.
The forward and backward step lengths from the supporting leg of a humanoid robot are restricted by hardware limitations. If an obstacle is wider than the maximum step length of the robot, the robot is not able to step over it. Thus, a humanoid robot has to step over an obstacle with the shortest possible step length in order to handle the widest possible obstacle. The step length is determined by which leg is the supporting leg when the robot steps over the obstacle. As the proposed algorithm considers these facts, it enables a robot to step over obstacles with the shortest step length. Fig. 1 shows the footprints used to step over an obstacle with this algorithm. Fig. 1(a) and Fig. 1(d) are situations where the left foot comes close to the obstacle earlier than the right foot, and Fig. 1(b) and Fig. 1(c) are situations where the right foot approaches the obstacle more closely than the left one. In the cases of Fig. 1(a) and 1(b), the left leg is the appropriate supporting leg for the minimum step length. On the other hand, the right leg is the appropriate supporting leg in Fig. 1(c) and 1(d). Therefore, in order to make the left leg the supporting leg in Fig. 1(b) and the right leg the supporting leg in Fig. 1(d), one more step is needed before stepping over the obstacle, while such an additional step is not needed in Fig. 1(a) and 1(c).
There are situations where an obstacle is only in front of one leg, such that the other leg can be placed without considering the obstacle. The proposed algorithm deals with this situation so that the robot can step over the obstacle effectively, like a human being. Fig. 2 shows the footprints of a robot in this case.
## 4.3 Parameter Optimization by Evolutionary Programming

A humanoid robot has constraints on the change in rotation of its legs on account of hardware limitations. Hence, when planning footsteps for a biped robot with the proposed algorithm, the maximum change in rotation of the legs has to be respected. The algorithm has seven parameters to be assigned: $k_v$ in the virtual obstacle; $d_{boun}$ in the avoid-obstacle univector field; $d_e$, $k_r$ and $g_{size}$ in the move-to-goal univector field; and $w_{muf}$ and $w_{auf}$ in the composition of the move-to-goal and avoid-obstacle univector fields. By selecting appropriate values of these parameters, the robot can arrive at the goal while keeping the change in rotation of the legs within the constraints. Moreover, to generate the most effective path, EP is employed to choose the parameter values. The fitness function in EP is designed considering the following:
---PAGE_BREAK---

* A robot should arrive at a destination with a minimum position error.
* The facing direction of a robot at a destination should be the desired one.
* A robot should not collide with obstacles.
* The change in rotation of legs should not exceed the constraint value.
Consequently, the fitness function is defined as

$$f = -(k_p P_{err} + k_q | \theta_{err} | + k_{col} N_{col} + k_{const} N_{const}) \quad (10)$$
where $N_{const}$ is the number of constraint violations of the change in rotation of the legs, $N_{col}$ is the number of collisions between the robot and obstacles, $\theta_{err}$ is the difference between the desired orientation and the robot's orientation at the goal, $P_{err}$ is the position error at the goal, and $k_{const}, k_{col}, k_q, k_p$ are constants.
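Eq. (10) is a penalty-style fitness that EP maximizes (higher, i.e. closer to zero, is better). A sketch with assumed weight values (the defaults below are ours, not the paper's):

```python
def fitness(p_err, theta_err, n_col, n_const,
            k_p=1.0, k_q=1.0, k_col=10.0, k_const=10.0):
    # Eq. (10): negative weighted sum of position error, orientation error,
    # collision count and constraint-violation count; 0 is the best score
    return -(k_p * p_err + k_q * abs(theta_err) + k_col * n_col + k_const * n_const)
```

In an EP loop, candidate parameter vectors $(k_v, d_{boun}, d_e, k_r, g_{size}, w_{muf}, w_{auf})$ would each be simulated, scored with this function, and mutated/selected across generations.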
# 5 Simulation Results

HSR-VIII (Fig. 3(a)) is a small-sized humanoid robot that has been continuously redesigned and developed in the RIT Lab, KAIST since 2000. Its height and weight are 52.8 cm and 5.5 kg, respectively. It has 26 DOFs, consisting of 12 DC motors with harmonic drives as reduction gears in the lower body and 14 RC servo motors in the upper body. HSR-VIII was modeled in Webots, a 3D mobile robotics simulation software package [16]. Simulations were carried out in Webots with the HSR-VIII model, applying the proposed footstep planning algorithm.
Through the simulations, the seven parameters of the algorithm were optimized by EP. The maximum rotating angle of the robot's ankles was selected heuristically as 40°. After 100 generations, the parameters were optimized as $k_v=1.94$, $d_{boun}=20.09$, $d_e=30.04$, $k_r=0.99$, $g_{size}=0.94$, $w_{muf}=1.96$, $w_{auf}=1.46$.
Fig. 3(b) shows the sequence of the robot's footsteps in a 2D simulation, in an environment with ten obstacles of three different kinds: five static circular obstacles, two moving circular obstacles, and three static rectangular obstacles with a height of 1.0 cm. The desired angle at the destination was fixed at 90° from the x-axis. As shown in the figure, with the proposed algorithm the robot moves from the start point to the target goal in the right bottom corner, avoiding the static and moving circular obstacles and stepping over the static rectangular ones by adjusting its step length. In addition, the robot faces the desired orientation at the goal. Fig. 4 shows the 3D simulation result in Webots, where the environment is the same as in the 2D simulation. A similar result was obtained as in Fig. 3(b). In particular, in the third and sixth snapshots of Fig. 4, it can be seen that the robot turns before colliding with the moving circular obstacles, predicting their movement.

**Fig. 3.** (a) HSR-VIII. (b) Sequence of footsteps in the environment with ten obstacles of three different kinds.

---PAGE_BREAK---

**Fig. 4.** Snapshots of the 3D simulation result in Webots in the environment with ten obstacles of three different kinds. (The goal is the circle in the right bottom corner.)
# 6 Conclusion

A real-time footstep planning algorithm was proposed for a humanoid robot to travel to a destination while avoiding and stepping over obstacles. The univector field method was adopted to determine the heading direction, and from the determined orientations the exact foot placements were calculated. The proposed algorithm generated an efficient path by applying a boundary to the avoid-obstacle univector field and introducing the virtual obstacle concept. Furthermore, it enables a robot to reach a destination with a desired orientation by employing the hyperbolic spiral univector field. The proposed algorithm makes it possible for a robot to step over an obstacle with minimal step length while maintaining its heading orientation. It also considers the situation where an obstacle is in front of only one leg; in this case, the robot steps over the obstacle while placing the other leg properly as the supporting one. The effectiveness of the algorithm was demonstrated by computer simulations in a dynamic environment. As future work, experiments with the real small-sized humanoid robot HSR-VIII will be carried out using a global camera to demonstrate the applicability of the proposed algorithm.
---PAGE_BREAK---

References
1. Nishiwaki, K., Sugihara, T., Kagami, S., Kanehiro, F., Inaba, M., Inoue, H.: Design and Development of Research Platform for Perception-Action Integration in Humanoid Robot: H6. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1559–1564 (2000)

2. Kaneko, K., Kanehiro, F., Kajita, S., Hirukawa, H., Kawasaki, T., Hirata, M., Akachi, K., Isozumi, T.: Humanoid Robot HRP-2. In: Proc. IEEE Int. Conf. on Robotics and Automation, ICRA 2004 (2004)

3. Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., Fujimura, K.: The intelligent ASIMO: system overview and integration. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2478–2483 (2002)

4. Ogura, Y., Aikawa, H., Shimomura, K., Kondo, H., Morishima, A., Lim, H., Takanishi, A.: Development of a New Humanoid Robot WABIAN-2. In: Proc. IEEE Int. Conf. on Robotics and Automation, ICRA 2006 (2006)

5. Kim, Y.-D., Lee, B.-J., Ryu, J.-H., Kim, J.-H.: Landing Force Control for Humanoid Robot by Time-Domain Passivity Approach. IEEE Trans. on Robotics 23(6), 1294–1301 (2007)

6. Kanal, L., Kumar, V. (eds.): Search in Artificial Intelligence. Springer, New York (1988)

7. Borenstein, J., Koren, Y.: Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst., Man, Cybern. 19(5), 1179–1187 (1989)

8. Borenstein, J., Koren, Y.: The vector field histogram – fast obstacle avoidance for mobile robots. IEEE Trans. Robot. Autom. 7(3), 278–288 (1991)

9. Yagi, M., Lumelsky, V.: Biped Robot Locomotion in Scenes with Unknown Obstacles. In: Proc. IEEE Int. Conf. on Robotics and Automation (ICRA 1999), Detroit, MI, pp. 375–380 (May 1999)

10. Chestnutt, J., Lau, M., Cheung, G., Kuffner, J., Hodgins, J., Kanade, T.: Footstep planning for the Honda ASIMO humanoid. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 631–636 (2005)

11. Ayaz, Y., Munawar, K., Bilal Malik, M., Konno, A., Uchiyama, M.: Human-Like Approach to Footstep Planning Among Obstacles for Humanoid Robots. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 5490–5495 (2006)

12. Kim, Y.-J., Kim, J.-H., Kwon, D.-S.: Evolutionary Programming-Based Univector Field Navigation Method for Fast Mobile Robots. IEEE Trans. on Systems, Man, and Cybernetics – Part B: Cybernetics 31(3), 450–458 (2001)

13. Kim, Y.-J., Kim, J.-H., Kwon, D.-S.: Univector Field Navigation Method for Fast Mobile Robots. Ph.D. Thesis, Korea Advanced Institute of Science and Technology

14. Lee, B.-J., Stonier, D., Kim, Y.-D., Yoo, J.-K., Kim, J.-H.: Modifiable Walking Pattern of a Humanoid Robot by Using Allowable ZMP Variation. IEEE Trans. on Robotics 24(4), 917–925 (2008)

15. Lim, Y.-S., Choi, S.-H., Kim, J.-H., Kim, D.-H.: Evolutionary Univector Field-based Navigation with Collision Avoidance for Mobile Robot. In: Proc. 17th IFAC World Congress, Seoul, Korea (July 2008)

16. Michel, O.: Cyberbotics Ltd. – Webots™: Professional mobile robot simulation. Int. J. of Advanced Robotic Systems 1(1), 39–42 (2004)
samples_new/texts_merged/3450399.md
ADDED
# Note 7 Supplement: RSA Extras
Computer Science 70
University of California, Berkeley

Summer 2018

## 1 One-Time Pad
The exclusive OR (XOR) $x \oplus y$ of two bits $x$ and $y$ is defined by:

| $x$ | $y$ | $x \oplus y$ |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
In other words, $x \oplus y$ equals 1 if and only if $x$ and $y$ are different bits. Notice that $x \oplus y$ is the same as $x + y \bmod 2$. For any $x \in \{0, 1\}$, we have $x \oplus x = 0$ and $x \oplus 0 = x$. So, for any $y \in \{0, 1\}$, we have $y \oplus x \oplus x = y \oplus 0 = y$.
We can extend the XOR operation to work on bit strings $x$ and $y$ of the same length by applying the XOR operation bitwise.
**Example 1.** $01000 \oplus 11100 = 10100$.
For bit strings $x$ and $y$ of the same length, we again have $y \oplus x \oplus x = y$. This actually gives us the simplest method to encrypt our messages, known as the **one-time pad**. To send a message $m$ (a bit string), the sender and receiver both agree (in advance) on a secret key $k$, which is a bit string of the same length as the message. The sender sends $m \oplus k$ to the receiver, and the receiver decrypts the message by $m \oplus k \oplus k = m$.
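A minimal Python sketch of the scheme, representing bit strings as Python strings (the helper name `xor_bits` and the sample message/key are ours):

```python
def xor_bits(x, y):
    # bitwise XOR of two equal-length bit strings, e.g. 01000 xor 11100 = 10100
    assert len(x) == len(y)
    return "".join("1" if a != b else "0" for a, b in zip(x, y))

message = "01101"
key     = "11001"                     # shared secret key, same length as the message
cipher  = xor_bits(message, key)      # what the eavesdropper sees: m xor k
assert xor_bits(cipher, key) == message   # decryption: m xor k xor k = m
```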
If an eavesdropper intercepts the encrypted message $m \oplus k$, then without knowledge of the secret key $k$, the one-time pad is unbreakable. Indeed, since the secret key is unknown, the eavesdropper must think that any secret key is possible. Given any message $m'$, then $m' \oplus m \oplus k \oplus m' = m \oplus k$, which means that the encrypted message $m \oplus k$ could have also come from the message $m'$ with the secret key $m \oplus k \oplus m'$. We have just shown that the encrypted message could have come from *any* starting message, which means that the eavesdropper knows nothing about the original message.
The one-time pad is not very convenient, however, because in order to guarantee the safety of the scheme, the secret key should really be discarded after one use (hence the name “one-time pad”). Since the sender and receiver must agree upon the secret key beforehand, the inability to reuse the secret key significantly hinders the practicality of the scheme. Nevertheless, the one-time pad can be useful when combined with other schemes.
## 2 Application of RSA: Digital Signatures
A signature is meant to provide proof of an individual's identity. In order for the signature to be a valid proof, the signature must have the property that no other individual can produce the same signature. Unfortunately, in the real world, we know that signatures can be forged.
Inspired by this idea, we introduce the concept of a **digital signature**. As before, a digital signature is supposed to provide proof of an individual's identity. However, the property that “no other individual can produce the same signature” is replaced by the property that “no other individual can reliably produce the same signature *efficiently*”. The idea is that someone who wants to forge the signature must use some brute force method which is computationally infeasible, e.g., would require centuries or more to compute.
Suppose that you have an RSA public key $(N, e)$ with corresponding private key $d$. One way to provide a “signature” is to reveal your private key $d$. If we assume that RSA is unbreakable, then the private key cannot be computed efficiently from the public key, so this would indeed constitute a signature. Unfortunately, this has the drawback of revealing your private key.
Instead, the signature scheme proceeds as follows. A verifier provides the individual with some randomly chosen $x \in \{0, 1, \dots, N-1\}$ and asks the individual for $x^d \mod N$. The verifier can then check that $x^{ed} \equiv x \pmod N$.
If the individual knows the private key $d$, then this computation is fast. However, a forger without knowledge of the private key must labor to find the $y \in \{0, 1, \dots, N-1\}$ such that $y^e \equiv x \pmod N$. If RSA is unbreakable, then this cannot be done efficiently. Presently we believe that you cannot do meaningfully better than exhaustive search, which can easily take centuries if $N$ is large enough.
The verifier can play this game with the individual multiple times until the verifier is satisfied that the individual is not forging the signature.
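A toy run of this challenge-response protocol, using the small textbook key pair $N = 3233 = 61 \cdot 53$, $e = 17$, $d = 2753$ (real moduli are thousands of bits long; the tiny key is for illustration only):

```python
# Toy RSA signature challenge. Key: p = 61, q = 53, N = 3233,
# phi(N) = 3120, e = 17, d = 2753 (since 17 * 2753 = 1 mod 3120).
N, e, d = 3233, 17, 2753

x = 1234                    # verifier's random challenge in {0, ..., N-1}
sig = pow(x, d, N)          # individual signs: x^d mod N (fast with the private key)
assert pow(sig, e, N) == x  # verifier checks: x^(ed) = x (mod N)
```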
## 3 RSA Attacks
The RSA scheme presented in the notes is known as “textbook RSA”. When RSA is used in practice, there are extra bells and whistles that are added to the scheme to improve its security. In this section we describe a couple of known attacks against the RSA scheme.
The first attack warns against using RSA alone. Suppose that you take your credit card number $m$ and pass it to the encryption function $E$ to get your encrypted credit card number $E(m)$. The encrypted credit card number $E(m)$ is then sent to a company such as Amazon in order to complete a credit card transaction. However, an eavesdropper sees $E(m)$. The eavesdropper can then send $E(m)$ to the company again in order to make his or her own purchases, effectively stealing your credit card.
The method to prevent this attack is to take your credit card number $m$, and in each new transaction, pad your credit card number with a randomly generated string at the end to form a longer, random string $m'$. Then, send $E(m')$ to the company. This is called *RSA with padding*. The randomness ensures that even if you send the same message twice, the encrypted messages will most likely differ, so that if the company receives the same encrypted message $E(m)$ twice in a row, then it will know to be suspicious.
The second attack is about unwittingly giving away information. Say that an attacker intercepts the encrypted message $E(m)$. Since the attacker cannot decrypt the message, he or she asks the company to decrypt it in a roundabout way. First the attacker picks a random number $r$, and asks the company to please decrypt the message $E(m) \cdot r^e \bmod N$, where $(N, e)$ is the public key. After multiplying $E(m)$ by $r^e$, the result is a seemingly innocuous string, so the company complies with the request, sending back the decrypted message $mr \bmod N$. Now, since the attacker knows $r$, he or she also knows $r^{-1} \bmod N$, and using this, the attacker can recover the original message $m$.
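The attack (often called *blinding*) can be reproduced with the same toy key pair $N = 3233$, $e = 17$, $d = 2753$; the message $m = 65$ and blinding factor $r = 7$ below are arbitrary choices of ours:

```python
# Blinding attack demonstration with a toy key (p = 61, q = 53).
N, e, d = 3233, 17, 2753

m = 65                                # victim's message
c = pow(m, e, N)                      # intercepted ciphertext E(m) = m^e mod N

r = 7                                 # attacker's random r with gcd(r, N) = 1
blinded = (c * pow(r, e, N)) % N      # equals (m*r)^e mod N; looks innocuous
mr = pow(blinded, d, N)               # company "helpfully" decrypts: m*r mod N
recovered = (mr * pow(r, -1, N)) % N  # unblind with r^-1 mod N (Python 3.8+)
assert recovered == m
```

The defense is the same as before: never decrypt arbitrary strings on request, and use randomized padding so that raw algebraic relations between ciphertexts are destroyed.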
It may be surprising to learn that our cryptosystems (such as RSA) are not *provably* secure, but nevertheless they are used every day.
samples_new/texts_merged/3461249.md
ADDED
# Generalizing Robot Imitation Learning with Invariant Hidden Semi-Markov Models
Ajay Kumar Tanwani¶,§, Jonathan Lee§, Brijen Thananjeyan§, Michael Laskey§, Sanjay Krishnan§, Roy Fox§, Ken Goldberg§, Sylvain Calinon¶
**Abstract.** Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, geometric aspects of the objects such as their position, size and orientation, and the viewpoint of the observer in the demonstrations. In this paper, we propose an algorithm that learns a joint probability density function of the demonstrations with invariant formulations of hidden semi-Markov models to extract invariant segments (also termed sub-goals or options), and smoothly follows the generated sequence of states with a linear quadratic tracking controller. The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapts the segments to environmental changes in a systematic manner. We present variants of this algorithm in latent space with low-rank covariance decompositions, semi-tied covariances, and non-parametric online estimation of model parameters under small variance asymptotics, yielding considerably lower sample and model complexity for acquiring new manipulation skills. The algorithm allows a Baxter robot to learn a pick-and-place task while avoiding a movable obstacle based on only 4 kinesthetic demonstrations.
**Keywords:** hidden Markov models, imitation learning, adaptive systems
## 1 Introduction
Generative models are widely used in robot imitation learning to estimate the distribution of the data and to regenerate samples from the model [1]. Common applications include probability density function estimation, image regeneration, dimensionality reduction and so on. The parameters of the model encode the task structure, which is inferred from the demonstrations. In contrast to direct trajectory learning from demonstrations, many problems in robotic applications require a higher, contextual-level understanding of the environment. This requires learning invariant mappings in the demonstrations that can generalize across different environmental situations such as the size, position and orientation of objects, and the viewpoint of the observer. A recent trend in imitation learning is to forgo such task structure in favour of end-to-end supervised learning, which requires a large number of training demonstrations.
§University of California, Berkeley.
¶Idiap Research Institute, Switzerland.
Corresponding author: ajay.tanwani@berkeley.edu
Fig. 1: Conceptual illustration of a hidden semi-Markov model (HSMM) for imitation learning: (left) 3-dimensional Z-shaped demonstrations composed of 5 equally spaced trajectory samples; (middle) demonstrations encoded with a 3-state HSMM represented by Gaussians (shown as ellipsoids) corresponding to the blue, green and red segments respectively; the transition graph shows a duration model (Gaussian) next to each node; (right) the generative model is combined with linear quadratic tracking (LQT) to synthesize motion in robot manipulation tasks from 5 different initial conditions marked with orange squares (see also Fig. 2).
The focus of this paper is to learn the joint probability density function of the human demonstrations with a family of **Hidden Markov Models (HMMs)** in an **unsupervised** manner [20]. We combine tools from statistical machine learning and optimal control to segment the demonstrations into different components or sub-goals that are sequenced together to perform manipulation tasks in a smooth manner. We first present a simple algorithm for imitation learning that combines the decoded state sequence of a hidden semi-Markov model [20,30] with a linear quadratic tracking controller to follow the demonstrated movement [2] (see Fig. 1). We then augment the model with a task-parameterized formulation such that it can be systematically adapted to changing situations such as pose/size of the objects in the environment [4,23,27]. We present latent space formulations of our approach to exploit the task structure using: 1) mixture of factor analyzers decomposition of the covariance matrix [14], 2) semi-tied covariance matrices of the mixture model [23], and 3) Bayesian non-parametric formulation of the model with Hierarchical Dirichlet process (HDP) for online learning under small variance asymptotics [24]. The paper unifies and extends our previous work on encoding manipulation skills in a task-adaptive manner [22,23,24]. Our objective is to reduce the number of demonstrations required for learning a new task, while ensuring effective generalization in new environmental situations.
## 1.1 Related Work
Imitation learning provides a promising approach to facilitate robot learning in the most 'natural' way. The main challenges in imitation learning include [16]: 1) **what-to-learn** – acquiring meaningful data to represent the important features of the task from demonstrations, and 2) **how-to-learn** – learning a control policy from the features to reproduce the demonstrated behaviour. Imitation learning algorithms typically fall into **behaviour cloning** or **inverse reinforcement learning (IRL)** approaches. IRL aims to recover the unknown reward function that is being optimized in the demonstrations, while behaviour cloning approaches directly learn from human demonstrations in a supervised manner. Prominent approaches to imitation learning include Dynamic Movement Primitives [9], Generative Adversarial Imitation Learning [8], one-shot imitation learning [5] and so on [18].
This paper emphasizes learning manipulation skills from human demonstrations in an unsupervised manner, using a family of hidden Markov models to sequence the atomic movement segments or primitives. HMMs have typically been used for recognition and generation of movement skills in robotics [13]. Other related application contexts in imitation learning include the options framework [10], sequencing primitives [15], and neural task programs [29].
A number of variants of HMMs have been proposed to address some of their shortcomings, including: 1) how to bias learning towards models with longer self-dwelling states, 2) how to robustly estimate the parameters with high-dimensional noisy data, 3) how to adapt the model with newly observed data, and 4) how to estimate the number of states that the model should possess. For example, [11] used HMMs to incrementally group whole-body motions based on their relative distance in HMM space. [13] presented an iterative motion primitive refinement approach with HMMs. [17] used the Beta Process Autoregressive HMM for learning from unstructured demonstrations. Figueroa et al. used a transformation-invariant covariance matrix for encoding tasks with a Bayesian non-parametric HMM [6].
In this paper, we address these shortcomings with an algorithm that learns a hidden semi-Markov model [20,30] from a few human demonstrations for segmentation, recognition, and synthesis of robot manipulation tasks (see Sec. 2). The algorithm observes the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest, and adapts the model to environmental changes in a systematic manner in Sec. 3. Capturing such invariant representations allows us to encode the task variations more compactly than a standard regression formulation. We present variants of the algorithm in latent space to exploit the task structure in Sec. 4. In Sec. 5, we show the application of our approach to learning a pick-and-place task from a few demonstrations, with an outlook to our future work.
## 2 Hidden Markov Models
**Hidden Markov models (HMMs)** encapsulate the spatio-temporal information by augmenting a mixture model with latent states that sequentially evolve over time in the demonstrations [20]. An HMM is thus a doubly stochastic process: one over the sequence of hidden states and another over the sequence of observations/emissions. Spatio-temporal encoding with HMMs can handle movements with variable durations, recurring patterns, options in the movement, or partial/unaligned demonstrations. Without loss of generality, we present our formulation with semi-Markov models for the remainder of the paper. Semi-Markov models relax the Markovian structure of state transitions by relying not only upon the current state but also on the duration/elapsed time in the current state, i.e., the underlying process is defined by a *semi-Markov chain* with a variable duration time for each state. The state duration is a random integer variable that assumes values in the set $\{1, 2, \dots, s^{\max}\}$, corresponding to the number of observations produced in a given state before transitioning to the next state. **Hidden semi-Markov models (HSMMs)** associate an observable output distribution with each state in a semi-Markov chain [30], similar to how a sequence of observations is associated with a Markov chain in an HMM.
Let $\{\xi_t\}_{t=1}^T$ denote the sequence of observations with $\xi_t \in \mathbb{R}^D$ collected while demonstrating a manipulation task. Each observation may represent visual input, kinesthetic data such as the pose and velocities of the end-effector of the human arm, haptic information, or any arbitrary features defining the task variables of the environment. The observation sequence is associated with a hidden state sequence $\{z_t\}_{t=1}^T$ with $z_t \in \{1, \dots, K\}$ belonging to the discrete set of $K$ cluster indices. The cluster indices correspond to different segments of the task such as reach, grasp, move, etc. We want to learn the joint probability density of the observation sequence and the hidden state sequence. The transition from one segment $i$ to another segment $j$ is encoded by the transition matrix $a \in \mathbb{R}^{K \times K}$ with $a_{i,j} \triangleq P(z_t = j | z_{t-1} = i)$. The parameters $\{\mu_j^S, \Sigma_j^S\}$ represent the mean and the standard deviation of staying $s$ consecutive time steps in state $j$, with the duration probability $p(s)$ estimated by a Gaussian $\mathcal{N}(s|\mu_j^S, \Sigma_j^S)$. The hidden state follows a categorical distribution with $z_t \sim \text{Cat}(\pi_{z_{t-1}})$, where $\pi_{z_{t-1}} \in \mathbb{R}^K$ is the next-state transition distribution of state $z_{t-1}$ with $\Pi_i$ as the initial probability, and the observation $\xi_t$ is drawn from the output distribution of state $j$, described by a multivariate Gaussian with parameters $\{\mu_j, \Sigma_j\}$. The overall parameter set of an HSMM is thus $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i, \mu_i^S, \Sigma_i^S\}_{i=1}^K$.
## 2.1 Encoding with HSMM
For learning and inference in an HMM [20], we make use of the following intermediary variables: 1) **forward variable**, $\alpha_{t,i}^{HMM} \triangleq P(z_t = i, \xi_1, ..., \xi_t|\theta)$: probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, ..., \xi_t\}$; 2) **backward variable**, $\beta_{t,i}^{HMM} \triangleq P(\xi_{t+1}, ..., \xi_T|z_t = i, \theta)$: probability of the partial observation sequence $\{\xi_{t+1}, ..., \xi_T\}$ given that we are in the $i$-th state at time step $t$; 3) **smoothed node marginal**, $\gamma_{t,i}^{HMM} \triangleq P(z_t = i|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ given the full observation sequence $\xi$; and 4) **smoothed edge marginal**, $\zeta_{t,i,j}^{HMM} \triangleq P(z_t = i, z_{t+1} = j|\xi_1, ..., \xi_T, \theta)$: probability of $\xi_t$ to be in state $i$ at time step $t$ and in state $j$ at time step $t+1$ given the full observation sequence $\xi$. The parameters $\{\Pi_i, \{a_{i,m}\}_{m=1}^K, \mu_i, \Sigma_i\}_{i=1}^K$ are estimated using the EM algorithm for HMMs, and the duration parameters $\{\mu_i^S, \Sigma_i^S\}_{i=1}^K$ are estimated empirically from the data after training, using the most likely hidden state sequence $\{z_1, \dots, z_T\}$ (see supplementary materials for details).
## 2.2 Decoding from HSMM
Given the learned model parameters, the probability of the observed sequence $\{\xi_1, \dots, \xi_t\}$ to be in a hidden state $z_t = i$ at the end of the sequence (also known as the filtering problem) is computed with the help of the forward variable as
$$P(z_t = i \mid \xi_1, \dots, \xi_t) = h_{t,i}^{\text{HMM}} = \frac{\alpha_{t,i}^{\text{HMM}}}{\sum_{k=1}^{K} \alpha_{t,k}^{\text{HMM}}} = \frac{\pi_i \mathcal{N}(\xi_t | \mu_i, \Sigma_i)}{\sum_{k=1}^{K} \pi_k \mathcal{N}(\xi_t | \mu_k, \Sigma_k)}. \quad (1)$$
Sampling from the model to predict the sequence of states over the next time horizon $P(z_t, z_{t+1}, \dots, z_{T_p} | \xi_1, \dots, \xi_t)$ can be done in two ways. **1) Stochastic sampling:** the sequence of states is sampled in a probabilistic manner given the state duration and the state transition probabilities. Stochastic sampling can also represent motions that contain different options and do not evolve along a single path. Starting from the initial state $z_t = i$, a duration of $s$ steps is sampled from $\mathcal{N}(\mu_i^S, \Sigma_i^S)$, after which the next transition state is sampled as $z_{t+s+1} \sim \pi_{z_{t+s}}$. The procedure is repeated over the given time horizon in a receding horizon manner. **2) Deterministic sampling:** the most likely sequence of states is sampled and remains unchanged in successive sampling trials. We use the forward variable of the HSMM for deterministic sampling from the model. The forward variable $\alpha_{t,i}^{\text{HSMM}} \triangleq P(z_t = i, \xi_1, \dots, \xi_t|\theta)$ requires marginalizing over the duration steps along with all possible state sequences. The probability of a datapoint $\xi_t$ to be in state $i$ at time step $t$ given the partial observation sequence $\{\xi_1, \dots, \xi_t\}$ is now specified as [30]
$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} \mathcal{N}(s|\mu_i^S, \Sigma_i^S) \prod_{c=t-s+1}^{t} \mathcal{N}(\xi_c | \mu_i, \Sigma_i), \quad (2)$$
where the initialization is given by $\alpha_{1,i}^{\text{HSMM}} = \Pi_i \mathcal{N}(1|\mu_i^S, \Sigma_i^S)\, \mathcal{N}(\xi_1|\mu_i, \Sigma_i)$, and the output distribution in state $i$ is conditionally independent over the $s$ duration steps, given as $\prod_{c=t-s+1}^{t} \mathcal{N}(\xi_c | \mu_i, \Sigma_i)$. Note that for $t < s^{\max}$, the sum over duration steps is computed for $t-1$ steps instead of $s^{\max}$. Without the observation sequence for the next time steps, the forward variable simplifies to
$$\alpha_{t,i}^{\text{HSMM}} = \sum_{s=1}^{\min(s^{\max}, t-1)} \sum_{j=1}^{K} \alpha_{t-s,j}^{\text{HSMM}} a_{j,i} \mathcal{N}(s|\mu_i^S, \Sigma_i^S). \quad (3)$$
The forward variable is used to plan the movement sequence over the next $T_p$ steps. During prediction, we only use the transition matrix and the duration model to plan the future evolution of the initial/current state, and omit the influence of the spatial data that we cannot observe, i.e., $\mathcal{N}(\xi_t|\mu_i, \Sigma_i) = 1$ for $t > 1$. This is used to retrieve a step-wise reference trajectory $\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$ from the state sequence $z_t$ computed from the forward variable with
$$z_t = \{z_t, \dots, z_{T_p}\} = \arg\max_i \alpha_{t,i}^{\text{HSMM}}, \quad \hat{\mu}_t = \mu_{z_t}, \quad \hat{\Sigma}_t = \Sigma_{z_t}. \quad (4)$$
Fig. 2 shows a conceptual representation of the step-wise sequence of states generated by deterministically sampling from HSMM encoding of the Z-shaped data. In the next section, we show how to synthesise robot movement from this step-wise sequence of states in a smooth manner.
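The deterministic sampling above can be sketched as follows, assuming a toy 3-state left-to-right model with invented duration parameters; the recursion follows Eq. (3) and the per-step argmax follows Eq. (4):

```python
import numpy as np

def gauss_pdf(s, mu, sigma):
    """Univariate Gaussian density used as the duration model N(s | mu, sigma^2)."""
    return np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def hsmm_state_sequence(Pi, A, mu_s, sigma_s, T, s_max=20):
    """Forward variable of Eq. (3): propagate the state distribution using only
    the transition matrix and the Gaussian duration model (future observations
    are unavailable), then take the most likely state per step as in Eq. (4)."""
    K = len(Pi)
    alpha = np.zeros((T, K))
    alpha[0] = Pi * gauss_pdf(1, mu_s, sigma_s)          # initialization
    for t in range(1, T):
        for i in range(K):
            for s in range(1, min(s_max, t) + 1):        # marginalize durations
                alpha[t, i] += (alpha[t - s] @ A[:, i]) * gauss_pdf(s, mu_s[i], sigma_s[i])
    return alpha.argmax(axis=1)

# Toy left-to-right model (all numbers are illustrative assumptions).
Pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.0, 1.0, 0.0],        # a_{j,i}: state 1 -> 2 -> 3 (absorbing)
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
mu_s = np.array([5.0, 5.0, 5.0])      # mean dwell time per state
sigma_s = np.array([1.0, 1.0, 1.0])
seq = hsmm_state_sequence(Pi, A, mu_s, sigma_s, T=15)
```

With this left-to-right structure the decoded sequence progresses monotonically through the states, mirroring the step-wise reference of Fig. 2.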
Fig. 2: Sampling from the HSMM from an unseen initial state $\xi_0$ over the next time horizon, and tracking the step-wise desired sequence of states $\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$ with a linear quadratic tracking controller. Note that the trajectory converges even though $\xi_0$ was not previously encountered.
## 2.3 Motion Generation with Linear Quadratic Tracking
We formulate the motion generation problem given the step-wise desired sequence of states $\{\mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)\}_{t=1}^{T_p}$ as sequential optimization of a scalar cost function with a linear quadratic tracker (LQT) [2]. The control policy $u_t$ at each time step is obtained by minimizing the cost function over the finite time horizon $T_p$,
$$ c(\xi, u) = \sum_{t=1}^{T_p} \left( (\xi_t - \hat{\mu}_t)^{\top} Q_t (\xi_t - \hat{\mu}_t) + u_t^{\top} R_t u_t \right), \quad (5) $$
s.t. $\xi_{t+1} = A_d\xi_t + B_d u_t,$
starting from the initial state $\xi_1$ and following the discrete linear dynamical system specified by $A_d$ and $B_d$. We consider a linear time-invariant double integrator system to describe the system dynamics. Alternatively, a time-varying linearization of the system dynamics along the reference trajectory can also be used to model the system dynamics without loss of generality. Both discrete and continuous time linear quadratic regulator/tracker can be used to follow the desired trajectory. The discrete time formulation, however, gives numerically stable results for a wide range of values of $R$. The control law $u_t^*$ that minimizes the cost function in Eq. (5) under finite horizon subject to the linear dynamics in discrete time is given as,
$$ u_t^* = K_t(\hat{\mu}_t - \xi_t) + u_t^{\text{FF}}, \quad (6) $$
where $K_t = [K_t^P, K_t^V]$ are the full stiffness and damping matrices for the feedback term, and $u_t^{\text{FF}}$ is the feedforward term (see supplementary materials for computing the gains). Fig. 2 shows the results of applying discrete LQT on the desired step-wise sequence of states sampled from an HSMM encoding of the Z-shaped demonstrations. Note that the gains can be precomputed before simulating the system if the reference trajectory does not change during the reproduction of the task. The resulting trajectory $\xi_t^*$ smoothly tracks the step-wise reference trajectory $\hat{\mu}_t$, and the gains $K_t^P, K_t^V$ locally stabilize the system along $\xi_t^*$ in accordance with the precision required during the task.

Fig. 3: Task-parameterized formulation of HSMM: the four demonstrations on the left are observed from two coordinate systems that define the start and end position of each demonstration (starting at the purple position and ending at the green position). The generative model is learned in the respective coordinate systems. The model parameters in the respective coordinate systems are adapted to new unseen object positions by computing the products of linearly transformed Gaussian mixture components. The resulting HSMM is combined with LQT for smooth retrieval of manipulation tasks.
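As a concrete sketch of the LQT step, the finite-horizon gains of Eq. (6) can be obtained with a backward Riccati-style recursion. This is a minimal illustration with a discrete double integrator and invented weights ($Q_t$ would in practice come from the tracking precision $\hat{\Sigma}_t^{-1}$), not the authors' exact implementation:

```python
import numpy as np

def lqt_gains(mu_ref, A, B, Q, R):
    """Finite-horizon discrete-time LQ tracking (cf. Eq. 5): backward recursion
    for the feedback gains K_t and feedforward terms of Eq. (6).
    A sketch under assumed cost weights, not the paper's implementation."""
    T = mu_ref.shape[0]
    P = Q.copy()                       # terminal quadratic value term
    p = Q @ mu_ref[-1]                 # terminal linear value term
    Ks, ks = [], []
    for t in range(T - 2, -1, -1):
        M = np.linalg.inv(R + B.T @ P @ B)
        K = M @ B.T @ P @ A            # feedback gain
        ks.append(M @ B.T @ p)         # feedforward term
        Ks.append(K)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        p = Q @ mu_ref[t] + (A - B @ K).T @ p
    Ks.reverse(); ks.reverse()
    return Ks, ks

dt = 0.1                               # double integrator: position, velocity
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([100.0, 1.0])              # tracking precision (assumed values)
R = np.array([[0.01]])                 # control effort penalty
mu_ref = np.zeros((100, 2))
mu_ref[50:, 0] = 1.0                   # step-wise reference: move to position 1

Ks, ks = lqt_gains(mu_ref, A, B, Q, R)
x = np.zeros(2)
for t in range(99):                    # roll out the closed-loop system
    u = -Ks[t] @ x + ks[t]
    x = A @ x + B @ u                  # x settles near the reference [1, 0]
```

Since the reference does not change during reproduction, all gains are precomputed before the rollout, matching the remark in the text.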
## 3 Invariant Task-Parameterized HSMMs
A conventional approach to encoding task variations, such as changes in the pose of an object, is to augment the state of the environment with the policy parameters [19]. Such an encoding, however, does not capture the geometric structure of the problem. Our approach exploits the problem structure by introducing the task parameters in the form of coordinate systems that observe the demonstrations from multiple perspectives. Task-parameterization enables the model parameters to adapt in accordance with the external task parameters that describe the environmental situation, instead of hard coding the solution for each new situation or handling it in an *ad hoc* manner [27]. When a different situation occurs (e.g., the pose of the object changes), the changed task parameters/reference frames are used to modulate the model parameters, adapting the robot movement to the new situation.
## 3.1 Model Learning
We represent the task parameters with $F$ coordinate systems, defined by $\{A_j, b_j\}_{j=1}^F$, where $A_j$ denotes the orientation of the frame as a rotation matrix and $b_j$ represents the origin of the frame. We assume that the coordinate frames are specified by the user, based on prior knowledge about the task being carried out. Typically, coordinate frames will be attached to objects, tools or locations that could be relevant in the execution of the task. Each datapoint $\xi_t$ is observed from the viewpoint of $F$ different experts/frames, with $\xi_t^{(j)} = A_j^{-1}(\xi_t - b_j)$ denoting the datapoint observed with respect to frame $j$. The parameters of the task-parameterized HSMM are defined by
$$ \theta = \left\{ \{\mu_i^{(j)}, \Sigma_i^{(j)}\}_{j=1}^F, \{a_{i,m}\}_{m=1}^K, \mu_i^S, \Sigma_i^S \right\}_{i=1}^K, $$
where $\mu_i^{(j)}$ and $\Sigma_i^{(j)}$ define the mean and the covariance matrix of the $i$-th mixture component in frame $j$. Parameter updates of the task-parameterized HSMM remain the same as for the HSMM, except that the computation of the mean and the covariance matrix is repeated for each coordinate system separately. The emission distribution of the $i$-th state is represented by the product of the probabilities of the datapoint to belong to the $i$-th Gaussian in each of the $F$ coordinate systems. The forward variable of the HMM in the task-parameterized formulation is described as
$$ \alpha_{t,i}^{\text{TP-HMM}} = \left( \sum_{j=1}^{K} \alpha_{t-1,j}^{\text{TP-HMM}} a_{j,i} \right) \prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)}). \quad (7) $$
Similarly, the backward variable $\beta_{t,i}^{\text{TP-HMM}}$, the smoothed node marginal $\gamma_{t,i}^{\text{TP-HMM}}$, and the smoothed edge marginal $\zeta_{t,i,j}^{\text{TP-HMM}}$ can be computed by replacing the emission distribution $\mathcal{N}(\xi_t | \mu_i, \Sigma_i)$ with the product of probabilities of the datapoint in each frame $\prod_{j=1}^{F} \mathcal{N}(\xi_t^{(j)} | \mu_i^{(j)}, \Sigma_i^{(j)})$. The duration model $\mathcal{N}(s|\mu_i^S, \Sigma_i^S)$ is used as a replacement of the self-transition probabilities $a_{i,i}$. The hidden state sequence over all demonstrations is used to define the duration model parameters $\{\mu_i^S, \Sigma_i^S\}$ as the mean and the standard deviation of staying $s$ consecutive time steps in the $i$-th state.
## 3.2 Model Adaptation in New Situations
In order to combine the output of the coordinate frames of reference for an unseen situation represented by the frames $\{\tilde{\mathbf{A}}_j, \tilde{\mathbf{b}}_j\}_{j=1}^F$, we linearly transform the Gaussians back to the global coordinates, and retrieve the new model parameters $\{\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i\}$ of the $i$-th mixture component by computing the products of the linearly transformed Gaussians (see Fig. 3).
$$ \mathcal{N}(\tilde{\boldsymbol{\mu}}_i, \tilde{\boldsymbol{\Sigma}}_i) \propto \prod_{j=1}^{F} \mathcal{N}(\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j, \tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top). \quad (8) $$
Evaluating the product of Gaussians yields the observation distribution of the adapted HSMM, whose output sequence is decoded and combined with LQT for smooth motion generation as shown in the previous section. The product in Eq. (8) evaluates to
$$ \tilde{\Sigma}_i = \left( \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} \right)^{-1}, \qquad \tilde{\boldsymbol{\mu}}_i = \tilde{\Sigma}_i \sum_{j=1}^{F} (\tilde{\mathbf{A}}_j \boldsymbol{\Sigma}_i^{(j)} \tilde{\mathbf{A}}_j^\top)^{-1} (\tilde{\mathbf{A}}_j \boldsymbol{\mu}_i^{(j)} + \tilde{\mathbf{b}}_j). \quad (9) $$
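The adaptation of Eqs. (8)-(9) can be sketched as follows; the two toy 2-D frames and all numeric values are assumptions for illustration:

```python
import numpy as np

def adapt_gaussian(mus, sigmas, As, bs):
    """Product of linearly transformed Gaussians (Eqs. 8-9): map each
    frame-local component {mu^(j), Sigma^(j)} to global coordinates with the
    new task parameters {A_j, b_j}, then fuse them in information form."""
    Lam = np.zeros_like(sigmas[0])          # accumulated precision
    eta = np.zeros_like(mus[0])             # accumulated information vector
    for mu, Sig, A, b in zip(mus, sigmas, As, bs):
        Sg = A @ Sig @ A.T                  # globally transformed covariance
        Pg = np.linalg.inv(Sg)
        Lam += Pg
        eta += Pg @ (A @ mu + b)            # globally transformed mean
    Sigma_new = np.linalg.inv(Lam)          # Eq. (9), left
    mu_new = Sigma_new @ eta                # Eq. (9), right
    return mu_new, Sigma_new

# Two frames: an identity frame and a translated frame (toy values).
mus = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
sigmas = [np.eye(2), np.eye(2)]
As = [np.eye(2), np.eye(2)]
bs = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
mu_new, Sigma_new = adapt_gaussian(mus, sigmas, As, bs)
# With equal precisions, the fused mean is the average of the
# transformed means [0, 0] and [2, 1], i.e. [1, 0.5].
```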
Fig. 4: Parameter representation of diagonal, full, and mixture of factor analyzers decompositions of the covariance matrix. Filled blocks represent non-zero entries.
## 4 Latent Space Representations
Dimensionality reduction has long been recognized as a fundamental problem in unsupervised learning. Model-based generative models such as HSMMs tend to suffer from the *curse of dimensionality* when few datapoints are available. To address this problem, we use statistical subspace clustering methods that reduce the number of parameters to be robustly estimated. A simple way to reduce the number of parameters is to constrain the covariance structure to a diagonal or spherical/isotropic matrix, restricting the number of parameters at the cost of treating each dimension separately. Such decoupling, however, cannot encode the important motor control principles of coordination, synergies and action-perception couplings [28].
Consequently, we seek a latent feature space in the high-dimensional data to reduce the number of model parameters that must be robustly estimated. We consider three formulations to this end: 1) low-rank decomposition of the covariance matrix using the *Mixture of Factor Analyzers (MFA)* approach [14], 2) partial tying of the covariance matrices of the mixture model with the same set of basis vectors, albeit with different scales, using semi-tied covariance matrices [7,23], and 3) Bayesian non-parametric sequence clustering under small variance asymptotics [12,21,24]. All the decompositions can readily be combined with the invariant task-parameterized HSMM and LQT for encapsulating reactive autonomous behaviour as shown in the previous section.
## 4.1 Mixture of Factor Analyzers
The basic idea of MFA is to perform subspace clustering by assuming the covariance structure for each component of the form,
$$ \Sigma_i = \Lambda_i \Lambda_i^\top + \Psi_i, \quad (10) $$
where $\Lambda_i \in \mathbb{R}^{D \times d}$ is the factor loadings matrix with $d < D$ for parsimonious representation of the data, and $\Psi_i$ is the diagonal noise matrix (see Fig. 4 for MFA representation in comparison to a diagonal and a full covariance matrix). Note that the mixture of probabilistic principal component analysis (MPPCA) model is a special case of MFA with the distribution of the errors assumed to be isotropic with $\Psi_i = I\sigma_i^2$ [26]. The MFA model assumes that $\xi_t$ is generated using a linear transformation of $d$-dimensional vector of latent (unobserved) factors $f_t$,
$$ \xi_t = \Lambda_i f_t + \mu_i + \epsilon, \quad (11) $$
where $\mu_i \in \mathbb{R}^D$ is the mean vector of the $i$-th factor analyzer, $f_t \sim \mathcal{N}(0, I)$ is a normally distributed factor, and $\epsilon \sim \mathcal{N}(0, \Psi_i)$ is a zero-mean Gaussian noise with diagonal covariance $\Psi_i$. The diagonal assumption implies that the observed variables are independent given the factors. Note that the subspace of each cluster is not spanned by orthogonal vectors, whereas this is a necessary condition in models based on eigendecomposition such as PCA. Each covariance matrix of the mixture components has its own subspace spanned by the basis vectors of $\Sigma_i$. As the number of components increases to encode more complex skills, an increasingly large number of potentially redundant parameters is used to fit the data. Consequently, there is a need to share the basis vectors across the mixture components, achieved below by semi-tying the covariance matrices of the mixture model.
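A short sketch of the MFA decomposition (Eqs. 10-11) with hypothetical dimensions, illustrating the parameter saving over a full covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10, 2                               # hypothetical dimensions, d < D
Lambda = rng.standard_normal((D, d))       # factor loadings (assumed values)
Psi = np.diag(rng.uniform(0.01, 0.1, D))   # diagonal noise matrix
mu = np.zeros(D)

Sigma = Lambda @ Lambda.T + Psi            # Eq. (10): low-rank plus diagonal

# Generative view of Eq. (11): xi = Lambda f + mu + eps
f = rng.standard_normal(d)                 # latent factor, f ~ N(0, I)
eps = rng.multivariate_normal(np.zeros(D), Psi)
xi = Lambda @ f + mu + eps

# Parameters per component: D*d + D for MFA versus D*(D+1)/2 for a full
# covariance matrix.
n_mfa = D * d + D
n_full = D * (D + 1) // 2
```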
## 4.2 Semi-Tied Mixture Model
When the covariance matrices of the mixture model share the same set of parameters for the latent feature space, we call the model a *semi-tied* mixture model [23]. The main idea behind semi-tied mixture models is to decompose the covariance matrix $\Sigma_i$ into two terms: a common latent feature matrix $H \in \mathbb{R}^{D \times D}$ and a component-specific diagonal matrix $\Sigma_i^{(\text{diag})} \in \mathbb{R}^{D \times D}$, i.e.,
$$ \Sigma_i = H \Sigma_i^{(\text{diag})} H^\top. \quad (12) $$
The latent feature matrix encodes the locally important synergistic directions, represented by $D$ non-orthogonal basis vectors that are shared across all the mixture components, while the diagonal matrix selects the appropriate subspace of each mixture component as a convex combination of a subset of the basis vectors of $H$. Note that the eigendecomposition $\Sigma_i = U_i D_i U_i^\top$ yields $D$ basis vectors of $\Sigma_i$ in $U_i$ that are specific to each component. In comparison, the semi-tied mixture model gives $D$ globally representative basis vectors that are shared across all the mixture components. The parameters $H$ and $\Sigma_i^{(\text{diag})}$ are updated in closed form with the EM updates of the HSMM [7].
The underlying hypothesis in semi-tying the model parameters is that similar coordination patterns occur at different phases in a manipulation task. By exploiting the spatial and temporal correlation in the demonstrations, we reduce the number of parameters to be estimated while locking the most important synergies to cope with perturbations. This allows the reuse of the discovered synergies in different parts of the task having similar coordination patterns. In contrast, the MFA decomposition of each covariance matrix separately cannot exploit the temporal synergies, and has more flexibility in locally encoding the data.
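The parameter sharing of Eq. (12) can be illustrated with a quick count, using hypothetical values of $K$ and $D$:

```python
import numpy as np

K, D = 7, 6
rng = np.random.default_rng(0)
# Hypothetical shared latent feature matrix H (non-orthogonal basis vectors)
# and component-specific diagonal scalings, per Eq. (12).
H = rng.standard_normal((D, D))
diags = rng.uniform(0.1, 1.0, size=(K, D))
Sigmas = [H @ np.diag(diags[i]) @ H.T for i in range(K)]  # semi-tied covariances

# Parameter counts: K independent full covariances versus one shared H
# plus K diagonal matrices; the shared basis amortizes as K grows.
n_full = K * D * (D + 1) // 2
n_semitied = D * D + K * D
```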
## 4.3 Bayesian Non-Parametrics under Small Variance Asymptotics
|
| 158 |
+
|
| 159 |
+
Specifying the number of latent states in a mixture model is often difficult. Model selection methods such as cross-validation or Bayesian Information Criterion (BIC) are typically used to determine the number of states. Bayesian non-parametric approaches comprising of Hierarchical Dirichlet Processes (HDPs) provide a principled model selection procedure by Bayesian inference in an HMM with infinite number of states [25].
---PAGE_BREAK---

Fig. 5: Bayesian non-parametric clustering of Z-shaped streaming data under small variance asymptotics with: (left) online DP-GMM, (right) online DP-MPPCA. Note that the number of clusters and the subspace dimension of each cluster are adapted in a non-parametric manner.
These approaches provide flexibility in model selection; however, their widespread use is limited by the computational overhead of existing sampling-based and variational inference techniques. We take a **small variance asymptotics** approximation of the Bayesian non-parametric model that collapses the posterior to a simple deterministic model, while retaining the non-parametric characteristics of the algorithm.
Small variance asymptotic (SVA) analysis implies that the covariance matrix $\Sigma_i$ of all the Gaussians is set to isotropic noise $\sigma^2$, i.e., $\Sigma_i \approx \lim_{\sigma^2 \to 0} \sigma^2 I$, in the likelihood function and the prior distribution [12,3]. The analysis yields simple deterministic models, while retaining the non-parametric nature. For example, SVA analysis of the Bayesian non-parametric GMM leads to the DP-means algorithm [12]. Similarly, SVA analysis of the Bayesian non-parametric HMM under the Hierarchical Dirichlet Process (HDP) yields the segmental $k$-means problem [21].
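The DP-means algorithm referenced above [12] can be sketched in a few lines. The following is a minimal NumPy version on our own toy data, not the authors' implementation:

```python
import numpy as np

def dp_means(X, lam, n_iter=20):
    # DP-means (Kulis & Jordan, 2012): the small-variance asymptotic limit
    # of the Bayesian non-parametric GMM. `lam` acts as the cluster penalty:
    # a point farther than sqrt(lam) from every centroid spawns a new cluster.
    centroids = [X.mean(axis=0)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for n, x in enumerate(X):
            d2 = [np.sum((x - c) ** 2) for c in centroids]
            if min(d2) > lam:
                centroids.append(x.copy())
                labels[n] = len(centroids) - 1
            else:
                labels[n] = int(np.argmin(d2))
        # Update means; an empty cluster keeps its previous centroid.
        centroids = [X[labels == k].mean(axis=0) if np.any(labels == k) else c
                     for k, c in enumerate(centroids)]
    return labels, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2))])
labels, _ = dp_means(X, lam=4.0)
assert len(np.unique(labels)) == 2  # two well-separated blobs, two clusters
```

The number of clusters is thus traded off against fit through a single penalty, mirroring the $\lambda(K-1)$ term in the loss function derived later in this section.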
Restricting the covariance matrix to isotropic/spherical noise, however, fails to encode the coordination patterns in the demonstrations. Consequently, we model the covariance matrix in its intrinsic affine subspace of dimension $d_i$ with projection matrix $\Lambda_i^{d_i} \in \mathbb{R}^{D \times d_i}$, such that $d_i < D$ and $\Sigma_i = \lim_{\sigma^2 \to 0} \Lambda_i^{d_i} {\Lambda_i^{d_i}}^\top + \sigma^2 I$ (akin to the DP-MPPCA model). Under this assumption, we apply the small variance asymptotic limit on the remaining $(D - d_i)$ dimensions to encode the most important coordination patterns while being parsimonious in the number of parameters (see Fig. 5). Performing small variance asymptotics of the joint likelihood of the HDP-HSMM yields the maximum a posteriori estimates of the parameters by iteratively minimizing the loss function*
$$
\begin{aligned}
\mathcal{L}(z, d, \mu, U, a) = & \sum_{t=1}^{T} \mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d_t})^2 + \lambda(K-1) \\
& + \lambda_1 \sum_{i=1}^{K} d_i - \lambda_2 \sum_{t=1}^{T-1} \log(a_{z_t, z_{t+1}}) + \lambda_3 \sum_{i=1}^{K} (\tau_i - 1),
\end{aligned}
$$

where $\mathrm{dist}(\xi_t, \mu_{z_t}, U_{z_t}^{d})^2$ represents the distance of the datapoint $\xi_t$ to the subspace of cluster $z_t$ defined by the mean $\mu_{z_t}$ and the unit eigenvectors $U_{z_t}^{d}$ of the covariance matrix (see supplementary materials for details). The algorithm optimizes the number of clusters
*Setting $d_i = 0$ by choosing $\lambda_1 \gg 0$ gives the loss function formulation with isotropic Gaussians under small variance asymptotics [21].
---PAGE_BREAK---

Fig. 6: (left) Baxter robot picks the glass plate with a suction lever and places it on the cross after avoiding an obstacle of varying height, (centre-left) reproduction for a previously unseen object and obstacle position, (centre-right) left-right HSMM encoding of the task with the duration model shown next to each state ($s^{max} = 100$), (right) evolution of the rescaled forward variable over time.
and the subspace dimension of each cluster while minimizing the distance of the datapoints to the respective subspaces of each cluster. The $\lambda_2$ term favours transitions to states with higher transition probability (states that have been visited more often before), $\lambda_3$ penalizes transitions to unvisited states, with $\tau_i$ denoting the number of distinct transitions out of state $i$, while $\lambda$ and $\lambda_1$ are the penalty terms for increasing the number of states and the subspace dimension of each output state distribution, respectively.
The analysis is used here for scalable online sequence clustering that is non-parametric in the number of clusters and the subspace dimension of each cluster. The resulting algorithm groups the data in its low-dimensional subspace with a non-parametric mixture of probabilistic principal component analyzers based on the Dirichlet process, and captures the state transition and state duration information in an HDP-HSMM. The cluster assignment and the parameter updates at each iteration minimize the loss function, thereby increasing the model fitness while penalizing new transitions, new dimensions and/or new clusters. The interested reader can find more details of the algorithm in [24].
# 5 Experiments, Results and Discussion

We now show how our proposed work enables a Baxter robot to learn a pick-and-place task from a few human demonstrations. The objective of the task is to place the object in a desired target position by picking it up from different initial poses, while adapting the movement to avoid the obstacle. The setup of the pick-and-place task with obstacle avoidance is shown in Fig. 6. The Baxter robot is required to grasp the glass plate with a suction lever placed in an initial configuration as marked on the setup. The obstacle can be vertically displaced to one of the 8 target configurations. We describe the task with two frames: one frame $\{A_1, b_1\}$ for the initial configuration of the object, and another frame $\{A_2, b_2\}$ for the obstacle, with $A_2 = I$ and $b_2$ specifying the centre of the obstacle. We collect 8 kinesthetic demonstrations with different initial configurations of the object and the obstacle successively displaced upwards, as marked with the visual tags in the figure. Alternate demonstrations are used for the training set, while the rest are used for the test set. Each observation comprises the end-effector Cartesian position,
---PAGE_BREAK---

Fig. 7: Task-parameterized HSMM performance on the pick-and-place with obstacle avoidance task: (top) training set reproductions, (bottom) testing set reproductions.
quaternion orientation, gripper status (open/closed), linear velocity, quaternion derivative, and gripper status derivative, with $D = 16$, $P = 2$, and a total of 200 datapoints per demonstration. We represent the frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ as
$$ \mathbf{A}_1^{(n)} = \begin{bmatrix} \mathbf{R}_1^{(n)} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \varepsilon_1^{(n)} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{R}_1^{(n)} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \varepsilon_1^{(n)}\end{bmatrix}, \quad \mathbf{b}_1^{(n)} = \begin{bmatrix} \mathbf{p}_1^{(n)} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}, \qquad (13) $$
where $\mathbf{p}_1^{(n)} \in \mathbb{R}^3$, $\mathbf{R}_1^{(n)} \in \mathbb{R}^{3\times3}$ and $\varepsilon_1^{(n)} \in \mathbb{R}^{4\times4}$ denote the Cartesian position, the rotation matrix and the quaternion matrix in the $n$-th demonstration, respectively. Note that we do not consider time as an explicit variable, as the duration model in the HSMM encapsulates the timing information locally.
The settings in our experiments are as follows: $\{\pi_i, \mu_i, \Sigma_i\}_{i=1}^K$ are initialized with the k-means clustering algorithm; $R = 9I$, where $I$ is the identity matrix; learning converges when the difference in log-likelihood between successive iterations is less than $1 \times 10^{-4}$. Results of regenerating the movements with 7 mixture components are shown in Fig. 7. For a given initial configuration of the object, the model parameters are adapted by evaluating the product of Gaussians for the new frame configuration. The reference trajectory is then computed from the initial position of the robot arm using the forward variable of the HSMM and tracked using LQT. The robot arm moves from its initial configuration to align itself with the first frame $\{\mathbf{A}_1, \mathbf{b}_1\}$ to grasp the object, then moves to avoid the obstacle and subsequently aligns with the second frame $\{\mathbf{A}_2, \mathbf{b}_2\}$ before placing the object and returning to a neutral position. The model exploits the variability in the observed demonstrations to statistically encode the different phases of the task, such as reach, grasp, move, place and return. The imposed
---PAGE_BREAK---

Fig. 8: Latent space representations of the invariant task-parameterized HSMM for a randomly chosen demonstration from the test set. Black dotted lines show the human demonstration, while the grey line shows the reproduction from the model (see supplementary materials for details).
Table 1: Performance analysis of invariant hidden Markov models in terms of training MSE, testing MSE and number of parameters for the pick-and-place task. MSE (in meters) is computed between the demonstrated trajectories and the generated trajectories (lower is better). Latent space formulations give comparable task performance with far fewer parameters.
<table><thead><tr><th>Model</th><th>Training MSE</th><th>Testing MSE</th><th>Number of Parameters</th></tr></thead><tbody><tr><td colspan="4">pick-and-place via obstacle avoidance (<i>K</i> = 7, <i>F</i> = 2, <i>D</i> = 16)</td></tr><tr><td>HSMM</td><td><b>0.0026</b> ± <b>0.0009</b></td><td>0.014 ± 0.0085</td><td>2198</td></tr><tr><td>Semi-Tied HSMM</td><td>0.0033 ± 0.0016</td><td>0.0131 ± 0.0077</td><td>1030</td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 1)</td><td>0.0037 ± 0.0011</td><td><b>0.0109</b> ± <b>0.0068</b></td><td><b>742</b></td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 4)</td><td>0.0025 ± 0.0007</td><td>0.0119 ± 0.0077</td><td>1414</td></tr><tr><td>MFA HSMM (<i>d</i><sub>k</sub> = 7)</td><td>0.0023 ± 0.0009</td><td>0.0123 ± 0.0084</td><td>2086</td></tr><tr><td>SVA HDP HSMM<br>(<i>K</i> = 8, <i>d̄</i><sub>k</sub> = 3.94)</td><td>0.0073 ± 0.0024</td><td>0.0149 ± 0.0072</td><td>1352</td></tr></tbody></table>
structure with task parameters and the HSMM allows us to acquire a new task from a few human demonstrations and to generalize effectively in picking and placing the object. Table 1 evaluates the performance of the invariant task-parameterized HSMM with latent space representations. We observe a significant reduction in the number of model parameters, while achieving better generalization on unseen situations compared to the task-parameterized HSMM with full covariance matrices (see Fig. 8 for a comparison across models). The MFA decomposition gives the best performance on the test set with far fewer parameters.
# 6 Conclusions

Learning from demonstrations is a promising approach to teach manipulation skills to robots. In contrast to deep learning approaches that require extensive training data, generative mixture models are useful for learning from a few examples that are not explicitly labelled. The presented formulations are inspired by the need to make generative mixture models easy to use for robot learning in a variety of applications, while requiring considerably less learning time.
---PAGE_BREAK---

We have presented formulations for learning invariant task representations with hidden semi-Markov models for the recognition, prediction and reproduction of manipulation tasks, along with latent space representations for robust parameter estimation of mixture models with high-dimensional data. By sampling the sequence of states from the model and following them with a linear quadratic tracking controller, we are able to autonomously perform manipulation tasks in a smooth manner. This has enabled a Baxter robot to tackle a pick-and-place via obstacle avoidance problem from previously unseen configurations of the environment. A relevant direction of future work is to avoid specifying the task parameters manually, and instead infer generalized task representations from videos of the demonstrations when learning the invariant segments. Moreover, learning the task model from a small set of labelled demonstrations in a semi-supervised manner is an important aspect of extracting meaningful segments from demonstrations.
**Acknowledgements:** This work was, in large part, carried out at Idiap Research Institute and Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. This work was in part supported by the DexROV project through the EC Horizon 2020 program (Grant 635491), and the NSF National Robotics Initiative Award 1734633 on Scalable Collaborative Human-Robot Learning (SCHooL). The information, data, comments, and views detailed herein may not necessarily reflect the endorsements of the sponsors.
## References

1. Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. *Robotics and Autonomous Systems*, 57(5):469–483, May 2009.
2. Francesco Borrelli, Alberto Bemporad, and Manfred Morari. *Predictive Control for Linear and Hybrid Systems*. Cambridge University Press, 2011.
3. Tamara Broderick, Brian Kulis, and Michael I. Jordan. MAD-Bayes: MAP-based asymptotic derivations from Bayes. In *Proceedings of the 30th International Conference on Machine Learning (ICML)*, Atlanta, GA, USA, pages 226–234, 2013.
4. S. Calinon. A tutorial on task-parameterized movement learning and retrieval. *Intelligent Service Robotics*, 9(1):1–29, 2016.
5. Yan Duan, Marcin Andrychowicz, Brad C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017.
6. Nadia Figueroa and Aude Billard. Transform-invariant non-parametric clustering of covariance matrices and its application to unsupervised joint segmentation and action discovery. CoRR, abs/1710.10060, 2017.
7. Mark J. F. Gales. Semi-tied covariance matrices for hidden Markov models. *IEEE Transactions on Speech and Audio Processing*, 7(3):272–281, 1999.
8. Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. CoRR, abs/1606.03476, 2016.
9. A. Ijspeert, J. Nakanishi, P. Pastor, H. Hoffmann, and S. Schaal. Dynamical movement primitives: Learning attractor models for motor behaviors. *Neural Computation*, 25:328–373, 2013.
10. S. Krishnan, R. Fox, I. Stoica, and K. Goldberg. DDCO: Discovery of deep continuous options for robot learning from demonstrations. CoRR, 2017.
11. D. Kulic, W. Takano, and Y. Nakamura. Incremental learning, clustering and hierarchy formation of whole body motion patterns using adaptive hidden Markov chains. *International Journal of Robotics Research*, 27(7):761–784, 2008.

---PAGE_BREAK---

12. Brian Kulis and Michael I. Jordan. Revisiting k-means: New algorithms via Bayesian non-parametrics. In *Proceedings of the 29th International Conference on Machine Learning (ICML)*, pages 513–520, New York, NY, USA, 2012.
13. D. Lee and C. Ott. Incremental motion primitive learning by physical coaching using impedance control. In *Proc. IEEE/RSJ Intl Conf. on Intelligent Robots and Systems (IROS)*, pages 4133–4140, Taipei, Taiwan, October 2010.
14. G. J. McLachlan, D. Peel, and R. W. Bean. Modelling high-dimensional data by mixtures of factor analyzers. *Computational Statistics and Data Analysis*, 41(3-4):379–388, 2003.
15. Jose Medina R. and Aude Billard. Learning stable task sequences from demonstration with linear parameter varying systems and hidden Markov models. In *Conference on Robot Learning (CoRL)*, 2017.
16. Chrystopher L. Nehaniv and Kerstin Dautenhahn, editors. *Imitation and Social Learning in Robots, Humans, and Animals: Behavioural, Social and Communicative Dimensions*. Cambridge University Press, 2004.
17. Scott Niekum, Sarah Osentoski, George Konidaris, and Andrew G. Barto. Learning and generalization of complex tasks from unstructured demonstrations. In *IEEE/RSJ International Conference on Intelligent Robots and Systems*, pages 5239–5246, 2012.
18. Takayuki Osa, Joni Pajarinen, Gerhard Neumann, Andrew Bagnell, Pieter Abbeel, and Jan Peters. *An Algorithmic Perspective on Imitation Learning*. Now Publishers Inc., Hanover, MA, USA, 2018.
19. Alexandros Paraschos, Christian Daniel, Jan R. Peters, and Gerhard Neumann. Probabilistic movement primitives. In *Advances in Neural Information Processing Systems 26*, pages 2616–2624. Curran Associates, Inc., 2013.
20. L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. *Proc. IEEE*, 77:257–285, 1989.
21. Anirban Roychowdhury, Ke Jiang, and Brian Kulis. Small-variance asymptotics for hidden Markov models. In *Advances in Neural Information Processing Systems 26*, pages 2103–2111. Curran Associates, Inc., 2013.
22. A. K. Tanwani. *Generative Models for Learning Robot Manipulation Skills from Humans*. PhD thesis, Ecole Polytechnique Federale de Lausanne, Switzerland, 2018.
23. A. K. Tanwani and S. Calinon. Learning robot manipulation tasks with task-parameterized semi-tied hidden semi-Markov model. *IEEE Robotics and Automation Letters*, 1(1):235–242, 2016.
24. Ajay Kumar Tanwani and Sylvain Calinon. Small variance asymptotics for non-parametric online robot learning. CoRR, abs/1610.02468, 2016.
25. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. *Journal of the American Statistical Association*, 101(476):1566–1581, 2006.
26. M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analyzers. *Neural Computation*, 11(2):443–482, 1999.
27. A. D. Wilson and A. F. Bobick. Parametric hidden Markov models for gesture recognition. *IEEE Trans. on Pattern Analysis and Machine Intelligence*, 21(9):884–900, 1999.
28. D. M. Wolpert, J. Diedrichsen, and J. R. Flanagan. Principles of sensorimotor learning. *Nature Reviews Neuroscience*, 12:739–751, 2011.
29. Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Neural task programming: Learning to generalize across hierarchical tasks. CoRR, abs/1710.01813, 2017.
30. S.-Z. Yu. Hidden semi-Markov models. *Artificial Intelligence*, 174:215–243, 2010.
---PAGE_BREAK---
# Microscopic study of low-lying yrast spectra and deformation systematics in neutron-rich 98-106Sr isotopes

ANIL CHANDAN, SURAM SINGH, ARUN BHARTI* and S K KHOSA

Department of Physics, University of Jammu (J&K), Jammu 180 006, India

*Corresponding author. E-mail: arunbharti_2003@yahoo.co.in

MS received 15 January 2009; revised 7 May 2009; accepted 23 May 2009

**Abstract.** Variation-after-projection (VAP) calculations in conjunction with the Hartree-Bogoliubov (HB) ansatz have been carried out for A = 98-106 strontium isotopes. In this framework, the yrast spectra with $J^{\pi} \le 10^{+}$, $B(E2)$ transition probabilities, the quadrupole deformation parameter and the occupation numbers of various shell model orbits have been obtained. The results of the calculation for the yrast spectra give an indication that it is important to include the hexadecapole-hexadecapole component of the two-body interaction for obtaining various nuclear structure quantities in Sr isotopes. Besides this, it is also found that the simultaneous polarization of the $p_{3/2}$ and $f_{5/2}$ proton subshells is a significant factor in making a sizeable contribution to the deformation in neutron-rich Sr isotopes.

**Keywords.** Nuclear structure of 98-106Sr; variation-after-projection (VAP) calculations; calculated levels; $B(E2)$ transition probabilities; quadrupole $\beta_2$ deformation parameter.

PACS Nos 21.60.-n; 21.60.Jz; 27.60.+j

## 1. Introduction
The existence of a large deformation in the neutron-rich nuclei in the mass region A = 100 was established by Cheifetz et al [1]. Since then, considerable effort has gone into understanding the properties of this region. It has been observed that neutron-rich isotopes with N ≥ 60 and A ≈ 100 are characterized by strong axial deformations. A quadrupole deformation of β = 0.4 has been deduced for 98Sr and 100Sr from the lifetimes of the first excited states and from mean square radii measured by collinear laser spectroscopy [2-6]. According to these results, the ground state deformation remains constant after its sudden onset at N = 60; this trend could even continue at larger neutron numbers. Recent developments in experimental techniques, such as the γ-spectroscopy study of the decay of on-line mass-separated 98Rb to 98Sr [7], make attractive a compilation of some of the more general features of the structure of doubly even neutron-rich Sr isotopes. The 98Sr nucleus is well deformed.
---PAGE_BREAK---
The ground state band of $^{98}$Sr, in particular, exhibits excellent rotational properties with a large and rigid moment of inertia. $^{98}$Sr is predicted to have a well-deformed prolate ground state. The levels in $^{100}$Sr were first observed in a β-decay study of $^{100}$Rb by Azuma *et al* [8], who identified the $4^+ \rightarrow 2^+ \rightarrow 0^+$ cascade and performed the first lifetime measurement of the $2^+$ state, thereby establishing large deformations. Further members of the ground state band up to $I^\pi = 10^+$ were identified in prompt-fission studies [9]. Some time back, evidence for the $2^+$ level in $^{102}$Sr was obtained from the decay study of $^{102}$Rb mass-separated at the CERN-ISOLDE Facility [10]. It has recently been predicted [11] that $^{102}$Sr is a strongly deformed nucleus with properties close to the rotational limit. Presently, the $^{102}$Sr nucleus is the most deformed neutron-rich even-even isotope in the Sr region.
From the systematics of the $2^+$ states in neutron-rich Sr isotopes, one observes a large decrease in the $E_2^+$ energy as the neutron number N changes from 58 to 60. The onset of deformation in this region for Sr is the most abrupt known for even-even nuclei, as evidenced by the fact that the $2^+$ energy decreases by a factor of 5.7 as N increases from 58 to 60 [12]. Besides this, it is also observed that the energy of the $2^+$ state decreases from 0.144 MeV in $^{98}$Sr to 0.126 MeV in $^{102}$Sr, giving an indication that there is an increase in the degree of deformation as one moves from $^{98}$Sr to $^{102}$Sr. Experimental data for $^{104-106}$Sr are not available. From the observed data, it is clear that $^{102}$Sr is the most deformed nucleus in the Sr region.
A microscopic explanation for the onset of deformation at $N=60$ has been given by Federman and Pittel [13]. They argued that the strong attractive n-p interaction between the $(g_{7/2})_\nu$ and $(g_{9/2})_\pi$ spin-orbit partner (SOP) orbitals is the underlying cause of the unusual characteristics. The realization of large deformation requires that the spin-orbit partner orbitals lie near the Fermi surface, both prior to and after the onset of deformation. Another school of thought, put forth by mean-field theorists [14,15], assigns the development of large deformation in the $A=100$ region to the occupation of low-k components of the $(h_{11/2})_\nu$ orbit; their mean-field calculations indicate the appearance of the $k=1/2$ component of the $(h_{11/2})_\nu$ orbit at the Fermi surface in $^{100-102}$Sr. It was shown by the authors in [16-19] that the phenomenological pairing plus quadrupole-quadrupole (PQ) model of the two-body interaction is highly reliable in this mass region. Khosa and Sharma [19] showed that two-body effective interactions have a dominantly quadrupole-quadrupole character and that the deformation-producing tendency of the neutron-proton (n-p) and like-particle interactions depends upon the degeneracy of the underlying single-particle valence space. One of the natural choices for the two-body residual interaction would, therefore, be pairing plus quadrupole-quadrupole (PQ). It turns out from the calculated energy spectra obtained with the PQ interaction that the agreement with experiment is not satisfactory. It therefore becomes necessary to add a correction term to the PQ interaction in the form of a hexadecapole-hexadecapole interaction, which hereafter will be denoted as the PQH interaction.
The purpose of the present work is to examine whether the PQ model of the two-body interaction can be further modified to produce results in better agreement with experiment. We have, thus, examined the available yrast spectra in the deformed neutron-rich Sr isotopes with $A=98-106$ in the framework of the variation-after-projection (VAP) technique in conjunction with the HB ansatz for the trial wave
---PAGE_BREAK---
functions resulting from the PQH interaction. The deformed Hartree-Bogoliubov state of the nucleus is generated using a phenomenological PQH interaction with $^{56}$Ni as the core.
The VAP prescription selects an optimum intrinsic state for each yrast level through a minimization of the expectation value of the Hamiltonian with respect to the states characterized by a definite angular momentum. Our VAP calculations performed with the PQH model of the two-body interaction show a marked improvement in agreement with the experimentally observed yrast spectra as compared to the yrast spectra obtained with the PQ interaction. The results obtained for the B(E2) transition probabilities and the quadrupole deformation parameter ($\beta_2$) are also found to be in reasonably good agreement with experiment.
## 2. Calculational details

### 2.1 *The one- and two-body parts of Hamiltonian*

In the calculations presented here, we have employed the valence space spanned by the $3s_{1/2}$, $2p_{1/2}$, $2p_{3/2}$, $2d_{3/2}$, $2d_{5/2}$, $1f_{5/2}$, $1g_{7/2}$, $1g_{9/2}$ and $1h_{11/2}$ orbits for protons and neutrons, under the assumption of $N = Z = 28$ subshell closure. The single-particle energies (SPEs) that we have taken are (in MeV): $\epsilon(3s_{1/2}) = 9.90$, $\epsilon(2p_{1/2}) = 1.08$, $\epsilon(2p_{3/2}) = 0.0$, $\epsilon(2d_{3/2}) = 11.40$, $\epsilon(2d_{5/2}) = 8.90$, $\epsilon(1f_{5/2}) = 0.78$, $\epsilon(1g_{7/2}) = 11.90$, $\epsilon(1g_{9/2}) = 3.50$ and $\epsilon(1h_{11/2}) = 12.90$. The energies of the single-particle orbits for the $2p$-$1f$-$1g$ levels are the same as employed for the $^{56}$Ni core plus one nucleon. The energies of the higher single-particle valence orbits are the same as used by Vergados and Kuo [20] relative to the $1g_{9/2}$ valence orbit.
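For reference, the single-particle energies above can be collected programmatically; the values are copied from the text, and the orbit labels are our own notational choice:

```python
# Single-particle energies (MeV) of the valence space above a 56Ni core,
# as listed in the text.
spe = {
    "3s1/2": 9.90, "2p1/2": 1.08, "2p3/2": 0.00,
    "2d3/2": 11.40, "2d5/2": 8.90, "1f5/2": 0.78,
    "1g7/2": 11.90, "1g9/2": 3.50, "1h11/2": 12.90,
}

# The 2p3/2 orbit sits at the bottom of the valence space, consistent with
# its energy being taken as the zero of the SPE scale.
assert min(spe, key=spe.get) == "2p3/2"
```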
The two-body effective interaction that has been employed is of the PQH type. The parameters of the PQ part of the two-body interaction are also the same as used by Sharma *et al* [16]. The relative magnitudes of the parameters of the hexadecapole-hexadecapole part of the two-body interaction were calculated from a relation suggested by Bohr and Mottelson [21]. According to them, the approximate magnitude of these coupling constants for isospin $T=0$ is given by
$$ \chi_{\lambda} = \frac{4\pi m\omega_0^2}{2\lambda + 1} A \langle r^{2\lambda-2} \rangle \quad \text{for } \lambda = 1, 2, 3, 4 \qquad (1) $$
and the parameters for $T=1$ are approximately half the magnitude of their $T=0$ counterparts. This relation was used to calculate the value of $\chi_{pp4}$ relative to $\chi_{pp}$ by generating the wave function for the strontium isotopes and then calculating the values of $\langle r^{2\lambda-2} \rangle$ for $\lambda=2$ and $4$.
The values for the hexadecapole-hexadecapole part of the two-body interaction turn out to be

$$ \chi_{pp4}(\chi_{nn4}) = -0.00033 \text{ MeV b}^{-8} \quad \text{and} \quad \chi_{pn4} = -0.00066 \text{ MeV b}^{-8}. $$
---PAGE_BREAK---
## 2.2 Projection of states of good angular momentum from axially-symmetric HB intrinsic states
The procedure for obtaining the axially symmetric HB intrinsic states has been discussed in ref. [22].

## 2.3 The variation-after-angular-momentum projection (VAP) method

The VAP calculations have been carried out as follows. We first generate the self-consistent HB solutions, $\Phi(\beta)$, by carrying out the HB calculations with the Hamiltonian $(H - \beta Q_0^2)$, where $\beta$ is a variational parameter. The selection of the optimum intrinsic states, $\Phi_{\text{opt}}(\beta_J)$, is then made by finding the minimum of the projected energy
$$E_J(\beta) = \langle \Phi(\beta) | H P_{00}^J | \Phi(\beta) \rangle / \langle \Phi(\beta) | P_{00}^J | \Phi(\beta) \rangle \quad (3)$$
as a function of $\beta$. In other words, the optimum intrinsic state for each yrast $J$ satisfies the condition
$$\partial/\partial\beta[\langle\Phi(\beta)|HP_{00}^J|\Phi(\beta)\rangle/\langle\Phi(\beta)|P_{00}^J|\Phi(\beta)\rangle]|_{\beta=\beta_J} = 0. \quad (4)$$
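In practice the stationarity condition of eqs (3) and (4) is implemented as a scan: for each yrast spin $J$, the projected energy $E_J(\beta)$ is evaluated on a grid of the variational parameter and the minimising $\beta_J$ is retained. The quadratic $E_J(\beta)$ below is an illustrative stand-in for the actual projected-energy kernel, not the paper's Hamiltonian.

```python
# Sketch of the VAP selection step: pick the beta minimising E_J(beta) on a grid.

def select_optimal_beta(projected_energy, betas):
    """Return (beta_J, E_J(beta_J)) minimising the projected energy on the grid."""
    return min(((b, projected_energy(b)) for b in betas), key=lambda t: t[1])

# Toy projected energy with its minimum at beta = 0.15 (illustrative only):
e_j = lambda b: 2.0 + 40.0 * (b - 0.15) ** 2
betas = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
beta_j, e_min = select_optimal_beta(e_j, betas)
print(beta_j, e_min)  # 0.15 2.0
```

A finer grid (or a one-dimensional minimiser) refines $\beta_J$; the paper's table 4 lists the $\beta$ values actually selected for each spin range.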
# 3. Deformation systematics of Sr isotopes
From the systematics of $2^+$ states in $^{98-102}$Sr, it is observed that the energy of the $2^+$ state decreases from 0.144 MeV in $^{98}$Sr to 0.129 MeV in $^{100}$Sr, indicating an increase in the degree of deformation as we move from $^{98}$Sr to $^{100}$Sr. This is also confirmed by the increase in the ratio $E_4^+/E_2^+$: its value is 3.00 for $^{98}$Sr and 3.23 for $^{100}$Sr, while for a rotational nucleus the value is 3.33. Besides this, the $2^+$ energy changes only marginally, by 0.003 MeV, as we move from $^{100}$Sr to $^{102}$Sr. This suggests that the deformation has nearly saturated by $^{102}$Sr, with very little scope for a further increase thereafter. The small change in the ratio $E_4^+/E_2^+$, from 3.23 for $^{100}$Sr to the recently predicted value of 3.31 for $^{102}$Sr [11], points in the same direction. Phenomenologically, it is well known that a nucleus with a smaller $2^+$ energy should have a larger deformation. Since the quadrupole moment of the $2^+$ state of a nucleus is directly related to its intrinsic quadrupole moment, one should therefore expect that a smaller $2^+$ energy manifests itself as a larger value of the ratio of the intrinsic quadrupole moment to the maximum possible intrinsic quadrupole moment for that nucleus in the SU(3) limit, $\langle Q_0^2 \rangle_{\text{HB}}/\langle Q_0^2 \rangle_{\text{max}}$, denoted hereafter as RQ, and vice versa. (The SU(3) limit of the quadrupole moment for a particular nucleus in the HB framework is calculated by setting all the single-particle energies of the valence orbits equal to zero, thereby allowing the Nilsson orbits to fill up in increasing order of quadrupole moment.) In other words, the observed systematics of $E_2^+$ with A should produce a corresponding
Neutron-rich $^{98-106}$Sr isotopes
**Table 1.** The experimental values of excitation energy of the $E_2^+$ state in MeV, proton ($\langle Q_0^2 \rangle_\pi$) and neutron ($\langle Q_0^2 \rangle_\nu$) intrinsic quadrupole moments, ratio (RQ) of intrinsic quadrupole moment ($\langle Q_0^2 \rangle_{HB}$) to the maximum possible value ($\langle Q_0^2 \rangle_{max}$) and the ratio $E_4^+/E_2^+$ for $^{98-106}$Sr isotopes obtained with PQH interaction. The quadrupole moments have been computed in units of b$^2$, where $b = \sqrt{\hbar/m\omega}$ is the oscillator parameter.
<table><thead><tr><th>Sr<br>nuclei<br>(A)</th><th>E<sub>2</sub><sup>+</sup><br>(exp.)*</th><th>⟨Q<sub>0</sub><sup>2</sup>⟩<sub>π</sub></th><th>⟨Q<sub>0</sub><sup>2</sup>⟩<sub>ν</sub></th><th>⟨Q<sub>0</sub><sup>2</sup>⟩<sub>HB</sub></th><th>⟨Q<sub>0</sub><sup>2</sup>⟩<sub>max</sub></th><th>RQ</th><th>E<sub>4</sub><sup>+</sup>/E<sub>2</sub><sup>+</sup><br>(exp.)</th></tr></thead><tbody><tr><td>98</td><td>0.144</td><td>35.18</td><td>36.44</td><td>71.62</td><td>118.10</td><td>0.60</td><td>3.00*</td></tr><tr><td>100</td><td>0.129</td><td>35.55</td><td>38.17</td><td>73.72</td><td>115.17</td><td>0.64</td><td>3.23*</td></tr><tr><td>102</td><td>0.126</td><td>35.47</td><td>38.83</td><td>74.30</td><td>110.04</td><td>0.67</td><td>3.31**</td></tr><tr><td>104</td><td>-</td><td>35.19</td><td>39.26</td><td>74.45</td><td>104.19</td><td>0.71</td><td>-</td></tr><tr><td>106</td><td>-</td><td>34.85</td><td>39.86</td><td>74.71</td><td>98.17</td><td>0.76</td><td>-</td></tr></tbody></table>
*Data taken from refs [7–10,31,32].
**Data taken from ref. [11].
inverse systematics of this ratio of quadrupole moments for $^{98-106}$Sr with increasing A. Based on the above logic, the calculated values of this ratio should therefore increase as we move from $^{98}$Sr to $^{102}$Sr and show only a very small increase thereafter, which would be indicative of the asymptotic onset of deformation in the heavy Sr isotopes. In table 1, the results of the HB calculations are presented. Note that the ratio RQ increases from 0.60 to 0.76 as we move from $^{98}$Sr to $^{106}$Sr.
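The systematics argued for above can be checked numerically against the experimental energies and the RQ column of table 1: $E_2^+$ falls with A while $E_4^+/E_2^+$ climbs towards the rotor limit of 3.33, and RQ rises monotonically across the chain.

```python
# Consistency check on the deformation systematics, using the values quoted
# in the text and in table 1.

e2 = {98: 0.144, 100: 0.129, 102: 0.126}   # E(2+) in MeV
r42 = {98: 3.00, 100: 3.23, 102: 3.31}     # E(4+)/E(2+)
rq = {98: 0.60, 100: 0.64, 102: 0.67, 104: 0.71, 106: 0.76}

masses = sorted(e2)
# E(2+) decreases with A while E(4+)/E(2+) approaches the rotor limit 10/3:
assert all(e2[a] > e2[b] for a, b in zip(masses, masses[1:]))
assert all(r42[a] < r42[b] for a, b in zip(masses, masses[1:]))
assert max(r42.values()) < 10.0 / 3.0

# ...and RQ shows the corresponding inverse trend, rising monotonically with A:
rq_masses = sorted(rq)
assert all(rq[a] < rq[b] for a, b in zip(rq_masses, rq_masses[1:]))
print("systematics consistent")
```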
We next focus our attention on the factors that could be responsible for the deformation of neutron-rich Sr isotopes. In this regard, it is important to discuss and highlight some of the well-accepted factors responsible for bringing sizeable collectivity in nuclei in the same mass region. It is generally felt that the neutron-proton (np) effective interactions possess a deformation producing tendency and the neutron-neutron (nn) or proton-proton (pp) effective interactions are mostly of spherifying nature [23–28]. These ideas have played a pivotal role in the development of the stretch scheme [26] of Danos and Gillet, the rotor model [27] of Arima and Gillet and the interacting boson model of Arima et al [28]. In this regard, the role of np interaction in spin-orbit partner (SOP) orbits in the context of general development of collective features was also suggested by Federman and Pittel [23] and Casten et al [29]. Their calculations provided evidence suggesting that np interaction between the valence nucleons in the SOP orbits – the orbits $(g_{9/2})_\pi$ and $(g_{7/2})_\nu$ – may be instrumental vis-à-vis the observed onset of deformation in Sr isotopes with $A \ge 100$. It may also be pointed out that the role of np interaction between the SOP orbits in producing deformation depends critically on the relative occupation probability of $(g_{9/2})_\pi$ and $(g_{7/2})_\nu$ orbits [30].
As is evident from the results presented in table 1, the deformation in the heavy Sr isotopes ranges from 60% of the maximum possible deformation in $^{98}$Sr to 76% in $^{106}$Sr, as indicated by the RQ values changing from 0.60 to 0.76 across this chain. In order to understand how
**Table 2.** The subshell occupation numbers (protons) in the $^{98-106}$Sr nuclei with PQH interaction.
<table><thead><tr><th rowspan="2">Sr<br>nuclei<br>(A)</th><th colspan="9">Subshell occupation number</th></tr><tr><th>3s<sub>1/2</sub></th><th>2p<sub>1/2</sub></th><th>2p<sub>3/2</sub></th><th>2d<sub>3/2</sub></th><th>2d<sub>5/2</sub></th><th>1f<sub>5/2</sub></th><th>1g<sub>7/2</sub></th><th>1g<sub>9/2</sub></th><th>1h<sub>11/2</sub></th></tr></thead><tbody><tr><td>98</td><td>0.10</td><td>0.55</td><td>2.26</td><td>0.06</td><td>0.68</td><td>3.17</td><td>0.04</td><td>3.09</td><td>0.00</td></tr><tr><td>100</td><td>0.11</td><td>0.58</td><td>2.24</td><td>0.06</td><td>0.75</td><td>3.18</td><td>0.04</td><td>3.03</td><td>0.00</td></tr><tr><td>102</td><td>0.10</td><td>0.59</td><td>2.22</td><td>0.05</td><td>0.76</td><td>3.17</td><td>0.03</td><td>3.03</td><td>0.00</td></tr><tr><td>104</td><td>0.08</td><td>0.61</td><td>2.22</td><td>0.03</td><td>0.77</td><td>3.17</td><td>0.03</td><td>3.07</td><td>0.00</td></tr><tr><td>106</td><td>0.06</td><td>0.63</td><td>2.21</td><td>0.02</td><td>0.76</td><td>3.16</td><td>0.02</td><td>3.12</td><td>0.00</td></tr></tbody></table>
**Table 3.** The subshell occupation numbers (neutrons) in the $^{98-106}$Sr nuclei with PQH interaction.
<table><thead><tr><th rowspan="2">Sr<br>nuclei<br>(A)</th><th colspan="9">Subshell occupation number</th></tr><tr><th>3s<sub>1/2</sub></th><th>2p<sub>1/2</sub></th><th>2p<sub>3/2</sub></th><th>2d<sub>3/2</sub></th><th>2d<sub>5/2</sub></th><th>1f<sub>5/2</sub></th><th>1g<sub>7/2</sub></th><th>1g<sub>9/2</sub></th><th>1h<sub>11/2</sub></th></tr></thead><tbody><tr><td>98</td><td>0.73</td><td>1.99</td><td>3.98</td><td>1.32</td><td>3.04</td><td>5.97</td><td>2.21</td><td>9.80</td><td>2.92</td></tr><tr><td>100</td><td>0.82</td><td>1.99</td><td>3.98</td><td>1.43</td><td>3.05</td><td>5.97</td><td>2.76</td><td>9.81</td><td>3.70</td></tr><tr><td>102</td><td>0.94</td><td>1.99</td><td>3.98</td><td>1.55</td><td>4.01</td><td>5.97</td><td>3.27</td><td>9.84</td><td>4.40</td></tr><tr><td>104</td><td>1.06</td><td>1.99</td><td>3.98</td><td>1.68</td><td>4.49</td><td>5.99</td><td>3.78</td><td>9.88</td><td>5.14</td></tr><tr><td>106</td><td>1.14</td><td>1.99</td><td>3.99</td><td>1.79</td><td>4.84</td><td>5.99</td><td>4.24</td><td>9.92</td><td>6.10</td></tr></tbody></table>
this deformation arises, we present in tables 2 and 3 the occupation probabilities of the various proton and neutron subshells, obtained using the PQH interaction.
From table 2, it is observed that the $p_{1/2}$, $p_{3/2}$ and $f_{5/2}$ proton subshells are partially filled. The polarization of these subshells could be one of the important factors contributing to the appearance of deformation in the Sr isotopes. Secondly, it is observed from this table that the $(g_{9/2})_\pi$ occupation is sizeable, and from table 3 one notices that there are neutrons in the $g_{7/2}$ subshell. Thus, there is an opportunity for the neutron-proton (np) interaction in spin-orbit partner (SOP) orbits – the orbits $(g_{9/2})_\pi$ and $(g_{7/2})_\nu$ in this case – to operate. As pointed out by Federman and Pittel [13,23], this factor could also lead to deformation in heavy Sr isotopes. From table 3, we also notice that the low-$k$ components of the $(h_{11/2})_\nu$ orbit are occupied in $^{98}$Sr to $^{106}$Sr. Since these low-$k$ components are sharply downsloping, their occupation could also lead to large deformation in these isotopes. This has been claimed by mean-field theorists [14,15] to be the mechanism behind the large onset of deformation in the Sr isotopes. From the above discussion, it is evident that there are three factors responsible for the deformation in $^{98}$Sr to $^{106}$Sr. The first is the polarization of the $2p_{1/2}$, $2p_{3/2}$ and $1f_{5/2}$ proton subshells. Because of this polarization, the protons tend to occupy the $1g_{9/2}$ proton orbit, which makes it possible for the np interaction to operate between the SOP orbits – the $(g_{9/2})_\pi$ and $(g_{7/2})_\nu$ orbits in the present context – as
there are already neutrons in the $(g_{7/2})_\nu$ orbit. Besides this, the increasing trend in the occupation probability of $(1h_{11/2})_\nu$ reinforces the development of deformation as we move from $^{98}$Sr to $^{106}$Sr. It may be noted that the $(h_{11/2})_\nu$ orbit is nearly half-filled in $^{106}$Sr, making the maximum contribution to the quadrupole moment.
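A quick sanity check on tables 2 and 3 is that the subshell occupation numbers for $^{98}$Sr add up to the valence nucleon numbers. The check below assumes a Z = N = 28 core for the model space (the core choice is inferred from the listed orbits, not stated explicitly in this excerpt).

```python
# Occupation numbers for 98Sr from tables 2 and 3, in the order
# 3s1/2, 2p1/2, 2p3/2, 2d3/2, 2d5/2, 1f5/2, 1g7/2, 1g9/2, 1h11/2.

protons_98 = [0.10, 0.55, 2.26, 0.06, 0.68, 3.17, 0.04, 3.09, 0.00]
neutrons_98 = [0.73, 1.99, 3.98, 1.32, 3.04, 5.97, 2.21, 9.80, 2.92]

print(sum(protons_98), sum(neutrons_98))
assert abs(sum(protons_98) - 10) < 0.1    # Z = 38 = 28 (assumed core) + 10
assert abs(sum(neutrons_98) - 32) < 0.1   # N = 60 = 28 (assumed core) + 32
```

The small residuals (of order 0.05 particles) reflect rounding of the tabulated occupation numbers to two decimal places.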
**4. Energy spectra in $^{98-106}$Sr**
Now, to test the reliability and efficiency of the HB calculations performed with the PQH model of the two-body interaction, it is important to obtain satisfactory agreement for the yrast spectra. A projection calculation for the energy spectra of $^{98-106}$Sr has been carried out by employing the phenomenological PQ and PQH models of the two-body interaction in the following manner:
Starting from the Hamiltonian $(H - \beta Q_0^2)$, HB intrinsic states were obtained for a number of values of the variational parameter ($\beta$) for each Sr isotope. From these intrinsic states, even-spin, even-parity angular momentum states were projected out. The lowest energy value ($E_J^+$) for each angular momentum state ($J^+$) was then collected to obtain the yrast spectrum of each Sr nucleus.
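The yrast-assembly step described above can be sketched as follows: projected energies $E_J(\beta)$ are collected for several $\beta$ values, and for each even spin $J$ the lowest projected energy (and the $\beta$ that produced it) is retained. The numbers below are illustrative placeholders, not the paper's results.

```python
# Assemble a yrast spectrum from angular-momentum-projected energies computed
# at several values of the variational parameter beta (hypothetical values).

projected = {              # beta -> {J: E_J(beta)} in MeV
    0.00: {0: 0.0, 2: 0.15, 4: 0.48, 6: 1.02},
    0.15: {0: 0.1, 2: 0.20, 4: 0.46, 6: 0.95},
}

yrast = {}
for beta, levels in projected.items():
    for j, e in levels.items():
        if j not in yrast or e < yrast[j][0]:
            yrast[j] = (e, beta)

print(yrast)  # for each J: the lowest E_J and the beta that produced it
```

This is why table 4 associates different $\beta$ values with different spin ranges: the low-spin states are lowest at $\beta = 0$ while the high-spin states prefer a finite $\beta$.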
In figures 1a and 1b, the yrast spectra for $^{98-106}$Sr are displayed. The spectra labelled Th.1 are obtained with the PQ model of interaction, whereas those labelled Th.2 are obtained with the PQH model. The Th.2 spectra are in satisfactory agreement with experiment [31,32], and there is a marked improvement in going from Th.1 to Th.2 when compared with the observed spectra. For example, in $^{98}$Sr the observed yrast $8^+$, $10^+$ and $12^+$ states have energies of 1.43 MeV, 2.12 MeV and 2.93 MeV respectively, whereas the Th.1 spectra give the corresponding values as 3.30 MeV, 4.90 MeV and 6.80 MeV, in sharp disagreement with observation. The projection calculations presented under Th.2, however, give 1.30 MeV, 2.00 MeV and 2.80 MeV for the yrast $8^+$, $10^+$ and $12^+$ states respectively, in satisfactory agreement with the observed values. The same trend is observed for $^{100}$Sr. The study of the yrast spectra in $^{98-100}$Sr therefore indicates that the PQH model of interaction is an improvement over the PQ model for the Sr isotopes. Since experimental spectra for $^{102-106}$Sr are not available (only the $2^+$ state in $^{102}$Sr is known), the yrast states predicted by the Th.2 calculations for $^{102}$Sr, $^{104}$Sr and $^{106}$Sr should serve as a motivation for experimentalists to look for these states. It may be noted that the spectra for the entire set of $^{98-106}$Sr isotopes are calculated with a single set of input parameters.
In table 4, the values of the variational parameter ($\beta$) corresponding to which the Th.2 yrast spectra have been obtained are presented.
Figure 1. (a) Experimental and theoretical low-lying yrast spectra for $^{98-102}$Sr nuclei. (b) Theoretical low-lying yrast spectra for $^{104-106}$Sr nuclei.
**Table 4.** Values of the variational parameter ($\beta$) and spins ($I^+$) corresponding to which the yrast spectra for Th.2 has been obtained in $^{98-106}$Sr.
<table><thead><tr><th>Nucleus</th><th>Spins (I<sup>+</sup>)</th><th>Variational parameter (β)</th></tr></thead><tbody><tr><td rowspan="3"><sup>98</sup>Sr</td><td>0<sup>+</sup> → 6<sup>+</sup></td><td>0.0</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>0.15</td></tr><tr><td>12<sup>+</sup> → 16<sup>+</sup></td><td>0.20</td></tr><tr><td rowspan="3"><sup>100</sup>Sr</td><td>0<sup>+</sup> → 8<sup>+</sup></td><td>0.0</td></tr><tr><td>10<sup>+</sup> → 12<sup>+</sup></td><td>0.10</td></tr><tr><td>14<sup>+</sup> → 16<sup>+</sup></td><td>0.15</td></tr><tr><td rowspan="3"><sup>102</sup>Sr</td><td>0<sup>+</sup> → 8<sup>+</sup></td><td>0.0</td></tr><tr><td>10<sup>+</sup> → 14<sup>+</sup></td><td>0.15</td></tr><tr><td>16<sup>+</sup></td><td>0.20</td></tr><tr><td rowspan="3"><sup>104</sup>Sr</td><td>0<sup>+</sup> → 4<sup>+</sup></td><td>0.0</td></tr><tr><td>6<sup>+</sup> → 12<sup>+</sup></td><td>0.10</td></tr><tr><td>14<sup>+</sup> → 16<sup>+</sup></td><td>0.15</td></tr><tr><td rowspan="3"><sup>106</sup>Sr</td><td>0<sup>+</sup> → 8<sup>+</sup></td><td>0.0</td></tr><tr><td>10<sup>+</sup> → 12<sup>+</sup></td><td>0.10</td></tr><tr><td>14<sup>+</sup> → 16<sup>+</sup></td><td>0.15</td></tr></tbody></table>
**5. Systematics of the calculated values of E2 transition probabilities in Sr isotopes**
The reliability of the HB wave function is also examined by calculating the $B(E2)$ values. In table 5, the calculated values of the $E2$ transition probabilities between the states $E_J$ and $E_{J+2}$ are presented. The calculated values are expressed in parametric form in terms of the proton ($e_p$) and neutron ($e_n$) effective charges, such that $e_p = 1 + e_{\text{eff}}$ and $e_n = e_{\text{eff}}$, and have been obtained through a rigorous projection calculation. The $B(E2: J_i^+ \to J_f^+)$ values have been calculated in units of $e^2 b_n^2$ (where $b_n$ stands for barn, 1 barn = $10^{-28}$ m$^2$). The results indicate that with $e_{\text{eff}} = 0.25$, good agreement with the observed $B(E2: 0^+ \to 2^+)$ transition probabilities is obtained for the $^{98-100}$Sr nuclei. For example, in $^{98}$Sr the calculated value of $B(E2: 0^+ \to 2^+)$ is 1.41 units and the experimental value is 1.28(39) units. Similarly, for $^{100}$Sr the calculated and observed values of $B(E2: 0^+ \to 2^+)$ are 1.34 units and 1.42(8) units respectively. Experimental data for the higher transitions in $^{98-100}$Sr are not available, but we have calculated them up to the $8^+ \to 10^+$ transition. Similarly, experimental data for the transitions in $^{102-106}$Sr are not available, but we have calculated the values up to the $8^+ \to 10^+$ transitions in these nuclei as well, using the same effective charge as for $^{98,100}$Sr.
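The parametric entries of table 5 can be checked directly: squaring the quoted reduced matrix element with $e_p = 1.25$ and $e_n = 0.25$ reproduces the tabulated $B(E2)$ value for the $0^+ \to 2^+$ transition in $^{98}$Sr.

```python
# Evaluate B(E2) = (c_p * e_p + c_n * e_n)^2 from the matrix-element
# coefficients (c_p, c_n) quoted in table 5, with e_eff = 0.25.

def be2(coeff_p, coeff_n, e_eff=0.25):
    e_p, e_n = 1.0 + e_eff, e_eff
    return (coeff_p * e_p + coeff_n * e_n) ** 2

print(round(be2(0.75, 1.00), 2))  # 1.41, matching table 5 for 98Sr 0+ -> 2+
print(be2(0.76, 0.84))            # close to the quoted 1.34 for 100Sr
                                  # (the tabulated coefficients are rounded)
```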
From the comparison of the calculated *B(E2)* values with the experimental values [34] for the 0<sup>+</sup> → 2<sup>+</sup> transitions in <sup>98-100</sup>Sr, it is satisfactory to note that the calculated *B(E2)* values are in good agreement with the experiments. Since the
**Table 5.** The reduced transition probabilities for E2 transitions for the yrast levels in the nuclei $^{98-106}$Sr. Here $e_p(e_n)$ denotes the effective charge for protons (neutrons). The entries presented in the third column correspond to the reduced matrix elements of the quadrupole operator between yrast states [16]. The reduced matrix elements have been expressed in a form that brings out their explicit dependence on the effective charges. The entries presented in the fourth column correspond to the effective charges indicated in the first column. The $B(E2)$ values are in units of $e^2b_n^2$ (where $b_n$ stands for barn, 1 barn = $10^{-28}$ m$^2$).
<table><thead><tr><th rowspan="2">Nucleus<br>(e<sub>p</sub>, e<sub>n</sub>)<br>(1)</th><th rowspan="2">Transition<br>(J<sub>i</sub><sup>+</sup> → J<sub>f</sub><sup>+</sup>)<br>(2)</th><th rowspan="2">[B(E2: J<sub>i</sub><sup>+</sup> → J<sub>f</sub><sup>+</sup>)]<sup>1/2</sup><br>(3)</th><th colspan="2">B(E2: J<sub>i</sub><sup>+</sup> → J<sub>f</sub><sup>+</sup>)</th></tr><tr><th>Theory<br>(4)</th><th>(Exp.)*<br>(5)</th></tr></thead><tbody><tr><td rowspan="5"><sup>98</sup>Sr (1.25, 0.25)</td><td>0<sup>+</sup> → 2<sup>+</sup></td><td>0.75e<sub>p</sub>+1.00e<sub>n</sub></td><td>1.41</td><td>1.28(39)</td></tr><tr><td>2<sup>+</sup> → 4<sup>+</sup></td><td>0.90e<sub>p</sub>+1.19e<sub>n</sub></td><td>2.02</td><td>-</td></tr><tr><td>4<sup>+</sup> → 6<sup>+</sup></td><td>0.94e<sub>p</sub>+1.25e<sub>n</sub></td><td>2.21</td><td>-</td></tr><tr><td>6<sup>+</sup> → 8<sup>+</sup></td><td>0.95e<sub>p</sub>+1.28e<sub>n</sub></td><td>2.27</td><td>-</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>0.95e<sub>p</sub>+1.30e<sub>n</sub></td><td>2.28</td><td>-</td></tr><tr><td rowspan="5"><sup>100</sup>Sr (1.25, 0.25)</td><td>0<sup>+</sup> → 2<sup>+</sup></td><td>0.76e<sub>p</sub>+0.84e<sub>n</sub></td><td>1.34</td><td>1.42(8)</td></tr><tr><td>2<sup>+</sup> → 4<sup>+</sup></td><td>0.91e<sub>p</sub>+0.93e<sub>n</sub></td><td>1.87</td><td>-</td></tr><tr><td>4<sup>+</sup> → 6<sup>+</sup></td><td>0.95e<sub>p</sub>+0.98e<sub>n</sub></td><td>2.05</td><td>-</td></tr><tr><td>6<sup>+</sup> → 8<sup>+</sup></td><td>0.97e<sub>p</sub>+1.00e<sub>n</sub></td><td>2.13</td><td>-</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>0.98e<sub>p</sub>+1.01e<sub>n</sub></td><td>2.18</td><td>-</td></tr><tr><td rowspan="5"><sup>102</sup>Sr (1.25, 0.25)</td><td>0<sup>+</sup> → 2<sup>+</sup></td><td>0.78e<sub>p</sub>+0.68e<sub>n</sub></td><td>1.31</td><td>-</td></tr><tr><td>2<sup>+</sup> → 4<sup>+</sup></td><td>0.93e<sub>p</sub>+0.81e<sub>n</sub></td><td>1.86</td><td>-</td></tr><tr><td>4<sup>+</sup> → 6<sup>+</sup></td><td>0.97e<sub>p</sub>+0.86e<sub>n</sub></td><td>2.03</td><td>-</td></tr><tr><td>6<sup>+</sup> → 8<sup>+</sup></td><td>0.99e<sub>p</sub>+0.88e<sub>n</sub></td><td>2.12</td><td>-</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>0.99e<sub>p</sub>+0.90e<sub>n</sub></td><td>2.13</td><td>-</td></tr><tr><td rowspan="5"><sup>104</sup>Sr (1.25, 0.25)</td><td>0<sup>+</sup> → 2<sup>+</sup></td><td>0.78e<sub>p</sub>+0.91e<sub>n</sub></td><td>1.44</td><td>-</td></tr><tr><td>2<sup>+</sup> → 4<sup>+</sup></td><td>0.93e<sub>p</sub>+1.08e<sub>n</sub></td><td>2.05</td><td>-</td></tr><tr><td>4<sup>+</sup> → 6<sup>+</sup></td><td>0.98e<sub>p</sub>+1.13e<sub>n</sub></td><td>2.27</td><td>-</td></tr><tr><td>6<sup>+</sup> → 8<sup>+</sup></td><td>0.99e<sub>p</sub>+1.16e<sub>n</sub></td><td>2.33</td><td>-</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>0.99e<sub>p</sub>+1.17e<sub>n</sub></td><td>2.34</td><td>-</td></tr><tr><td rowspan="5"><sup>106</sup>Sr (1.25, 0.25)</td><td>0<sup>+</sup> → 2<sup>+</sup></td><td>0.78e<sub>p</sub>+0.69e<sub>n</sub></td><td>1.31</td><td>-</td></tr><tr><td>2<sup>+</sup> → 4<sup>+</sup></td><td>0.93e<sub>p</sub>+0.83e<sub>n</sub></td><td>1.87</td><td>-</td></tr><tr><td>4<sup>+</sup> → 6<sup>+</sup></td><td>0.97e<sub>p</sub>+0.87e<sub>n</sub></td><td>2.04</td><td>-</td></tr><tr><td>6<sup>+</sup> → 8<sup>+</sup></td><td>0.99e<sub>p</sub>+0.89e<sub>n</sub></td><td>2.13</td><td>-</td></tr><tr><td>8<sup>+</sup> → 10<sup>+</sup></td><td>1.00e<sub>p</sub>+0.91e<sub>n</sub></td><td>2.18</td><td>-</td></tr></tbody></table>
*Exp. data taken from ref. [34].
experimental data for the higher transitions in $^{98-100}$Sr and for any of the transitions in $^{102-106}$Sr are not available, the calculated values predicted for the various transitions in $^{98-106}$Sr should serve as a motivation for experimentalists to measure them.
**6. Quadrupole deformations ($\beta_2$) in Sr isotopes**
We have calculated the values of the deformation parameter ($\beta_2$) for $^{98-106}$Sr. The deformation parameter $\beta_2$ is related to $B(E2)\uparrow$ by the formula suggested by Raman *et al* [33] as
$$ \beta_2 = (4\pi/3ZR_0^2)[B(E2)\uparrow/e^2]^{1/2}, \quad (5) $$
where $R_0$ is usually taken to be $1.2 A^{1/3}$ fm and $B(E2)\uparrow$ is in units of $e^2 b_n^2$.
The deformation parameter $\beta_2$ has been calculated using the calculated $B(E2)\uparrow$ values, given in table 5. From the calculations, we find that $\beta_2$ values for the nuclei $^{98}$Sr, $^{100}$Sr, $^{102}$Sr, $^{104}$Sr and $^{106}$Sr are 0.42, 0.41, 0.40, 0.41 and 0.39 respectively. The experimental values [34] for $^{98}$Sr and $^{100}$Sr are 0.40(6) and 0.42(12) respectively. From the comparison of the data, we find that there is reasonable agreement for $\beta_2$ values for the nuclei $^{98-100}$Sr. The experimental data for $^{102-106}$Sr are not available.
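Equation (5) can be evaluated directly for $^{98}$Sr using the calculated $B(E2; 0^+ \to 2^+) = 1.41$ $e^2b_n^2$ from table 5; the conversion 1 b = 100 fm$^2$ puts $R_0^2$ in barn so that the units cancel.

```python
import math

# beta_2 from eq. (5): beta_2 = (4*pi / (3*Z*R0^2)) * sqrt(B(E2)up / e^2),
# with R0 = 1.2 * A^(1/3) fm and B(E2)up supplied in e^2 b^2.

def beta2(be2_up_e2b2, Z, A):
    r0_sq_barn = (1.2 * A ** (1.0 / 3.0)) ** 2 / 100.0   # fm^2 -> barn
    return (4.0 * math.pi / (3.0 * Z * r0_sq_barn)) * math.sqrt(be2_up_e2b2)

print(beta2(1.41, Z=38, A=98))  # ~0.43, consistent with the quoted 0.42
```

The small residual against the quoted 0.42 plausibly comes from rounding of the tabulated $B(E2)$ value.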
**7. Conclusions**
From the results of our calculations, the following conclusions can be drawn:
(i) The VAP calculations performed with the PQH interaction correctly reproduce the observed deformation systematics in the $^{98-102}$Sr isotopes. The deformation develops because of the simultaneous polarization of the ($p_{3/2}$) and ($f_{5/2}$) proton subshells and the operation of the np interaction between the ($g_{9/2}$)$_\pi$ and ($g_{7/2}$)$_\nu$ subshells. The polarization of the $p_{3/2}$ or $f_{5/2}$ orbits is an important prerequisite for the np interaction between the SOP orbits to operate.
(ii) The yrast spectra obtained with the inclusion of the hexadecapole interaction show better agreement with the observed spectra than those obtained with the PQ model of interaction.
(iii) The values of hexadecapole interaction parameters employed by us are the appropriate ones in this mass region as, with them, the HB wave function yields values of $B(E2)$ which are in satisfactory agreement with experiments.
**References**
[1] E Cheifetz, R C Jared, S G Thompson and J B Wilhelmy, Phys. Rev. Lett. **25**, 38 (1970)

[2] H Ohm, G Lhersonneau, K Sistemich, B Pfeiffer and K L Kratz, Z. Phys. **A327**, 483 (1987)

[3] G Lhersonneau, H Gabelmann, K L Kratz, B Pfeiffer, N Kaffrell and the ISOLDE Collaboration, Z. Phys. **A332**, 243 (1989)

[4] G Lhersonneau, H Gabelmann, N Kaffrell, K L Kratz, B Pfeiffer, K Heyde and the ISOLDE Collaboration, Z. Phys. **A337**, 143 (1990)

[5] F Buchinger, E B Ramsay, E Arnold, W Neu, R Neugart, K Wendt, R Silverans, E Lievens, L Vermeeren, D Berdichevsky, R Fleming and D W L Sprung, Phys. Rev. C **41**, 2883 (1990)

[6] P Lievens, R E Silverans, L Vermeeren, W Borchers, W Neu, R Neugart, K Wendt, F Buchinger, E Arnold and the ISOLDE Collaboration, Phys. Lett. **B256**, 141 (1991)

[7] G Lhersonneau, B Pfeiffer, R Capote, J M Quesada, H Gabelmann, K L Kratz and the ISOLDE Collaboration, Phys. Rev. C **65**, 024318 (2002)

[8] R E Azuma, G L Borchert, L C Carraz, P G Hansen, B Jonson, S Mattsson, O B Nielsen, G Nyman, I Ragnarsson and H L Ravn, Phys. Lett. **B86**, 5 (1979)

[9] J H Hamilton, A V Ramayya, S J Zhu, G M Ter-Akopian, Yu Oganessian, J D Cole, J O Rasmussen and M A Stoyer, Prog. Part. Nucl. Phys. **35**, 635 (1995)

[10] G Lhersonneau, B Pfeiffer, M Huhta, A Wohr, I Klockl, K L Kratz, J Aysto and the ISOLDE Collaboration, Z. Phys. **A351**, 357 (1995)

[11] S Verma, P Ahmad, R Devi and S K Khosa, Phys. Rev. C **77**, 024308 (2008)

[12] John C Hill, J A Winger, F K Wohn, R F Petry, J D Goulden, R L Gill, A Piotrowski and H Mach, Phys. Rev. C **33**, 5 (1985)

[13] P Federman and S Pittel, Phys. Rev. C **20**, 820 (1979)

[14] P Bonche, H Flocard, P H Heenen, S J Krieger and M S Weiss, Nucl. Phys. **A443**, 39 (1985)

[15] X Campi and M Epherre, Phys. Rev. C **22**, 2605 (1980)

[16] S K Sharma, P N Tripathi and S K Khosa, Phys. Rev. C **38**, 2935 (1988)

[17] P N Tripathi, S K Sharma and S K Khosa, Phys. Rev. C **29**, 1951 (1984)

[18] S K Khosa, P N Tripathi and S K Sharma, Phys. Lett. **B119**, 257 (1982)

[19] S K Khosa and S K Sharma, Phys. Rev. C **25**, 2715 (1981)

[20] J D Vergados and T T S Kuo, Phys. Lett. **B35**, 93 (1971)

[21] A Bohr and B R Mottelson, Nuclear structure (Benjamin, New York, 1975) Vol. II, p. 356

[22] S K Sharma, Nucl. Phys. **A260**, 226 (1976)

[23] P Federman and S Pittel, Phys. Lett. **B69**, 385 (1977)

[24] S Pittel, Nucl. Phys. **A347**, 417 (1980)

[25] S C K Nair, A Ansari and L Satpathi, Phys. Lett. **B71**, 257 (1977)

[26] M Danos and V Gillet, Phys. Rev. **161**, 1034 (1967)

[27] A Arima and V Gillet, Ann. Phys. **66**, 117 (1971)

[28] A Arima, T Ohtsuka, F Iachello and I Talmi, Phys. Lett. **B66**, 205 (1977)

[29] R F Casten et al, Phys. Rev. Lett. **47**, 1433 (1981)

[30] P K Mattu and S K Khosa, Phys. Rev. C **39**, 2018 (1989)

[31] M Sakai, At. Data Nucl. Data Tables **31**, 409 (1984)

[32] B Singh and Z Hu, Nucl. Data Sheets **98**, 335 (2003)

[33] S Raman, C W Nestor, S Kahane and K H Bhatt, At. Data Nucl. Data Tables **42**, 1 (1989)

[34] S Raman, C W Nestor and P Tikkanen, At. Data Nucl. Data Tables **78**, 40 (2001)
samples_new/texts_merged/3723390.md
ADDED
# Capacity of multiservice WCDMA Networks with variable GoS
Nidhi Hegde and Eitan Altman
INRIA 2004 route des Lucioles, B.P.93 06902 Sophia-Antipolis, France
Email: {Nidhi.Hegde, Eitan.Altman} @sophia.inria.fr
Abstract— Traditional definitions of the capacity of CDMA networks are related either to the number of calls the network can handle (pole capacity) or to the arrival rate that guarantees that the rejection (or outage) rate stays below a given fraction (Erlang capacity). We extend the latter definition to other quality-of-service (QoS) measures. We consider best-effort (BE) traffic sharing the network resources with real-time (RT) applications. BE applications can adapt their instantaneous transmission rate to the available one and thus need not be subject to admission control or outages. Their meaningful QoS measure is the average delay. The delay-aware capacity is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute both the blocking probability of the RT traffic, which has an adaptive Grade of Service (GoS), and the expected delay of the BE traffic for an uplink multicell WCDMA system. This yields the Erlang capacity for the former and the delay capacity for the latter.
|
| 12 |
+
|
| 13 |
+
## I. INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Third generation mobile networks, such as the Universal Mobile Telecommunications System (UMTS), will provide a wide variety of services to users, including multimedia and interactive real-time applications as well as best-effort applications such as file transfer, Internet browsing, and electronic mail. These services have varied quality of service (QoS) requirements; real-time (RT) applications need some guaranteed minimum transmission rate as well as delay bounds, which requires reservation of system capacity. We assume that RT traffic is subject to Call Admission Control (CAC) in order to guarantee the minimum rates for accepted RT calls. This implies that RT traffic may suffer rejections, whose rate is then an important QoS measure for such applications. In contrast, best-effort (BE) applications can adapt their transmission rate to the network's available resources and are therefore not subject to CAC. The relevant QoS measure for BE traffic is then the expected sojourn time (or delay) of a call in the system (e.g., the expected time to download a file).

We consider BE traffic sharing the network resources with RT applications. Our aim is to compute both the blocking (or rejection) probability of the RT traffic and the expected delay of the BE traffic for an uplink multicell WCDMA system. Although RT calls need a minimum guaranteed transmission rate, they are assumed to be able to adapt to network resources in a way similar to the BE traffic. For example, in the case of voice applications, UMTS will use the Adaptive Multi-Rate (AMR) codec, which offers eight transmission rates for voice that vary between 4.75 kbps and 12.2 kbps and can be changed dynamically every 20 msec. Although both RT and BE traffic have adaptive rates, we identify a key difference between the two: the *duration* of an RT call does not depend on the instantaneous rate it is assigned (only the quality may change), whereas for BE calls, the *total volume transmitted* during the call does not depend on the assigned rate; the duration of a BE call therefore does depend on the dynamic rate assignment. We propose a probabilistic model that takes these features into account and enables us to compute the performance measures of interest: the blocking probability and average throughput per RT call, the expected number of RT and BE calls in the system, and the expected delay of a BE call.

We extend the notion of capacity in order to describe the amount of traffic for which the system can offer reasonable QoS. Traditional definitions of network capacity are related either to the number of calls the network can handle (pole capacity) or to the arrival rate that guarantees that the rejection (or outage) rate is below a given fraction (Erlang capacity, see [11]). We extend the latter definition to other QoS measures. The delay-aware capacity, suitable in particular for BE traffic, is defined as the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant. We compute it as a function of the other parameters of the system (the arrival rate and characteristics of the RT traffic, and the CAC and downgrading policy applied to RT traffic).

We briefly mention related work. In [10], an uplink CDMA system with two classes is considered: the RT traffic is transmitted all the time, while the non-real-time (NRT) mobiles are time-shared. A related idea has also been analyzed in [6], where the benefits of time sharing are studied and conditions for silencing some mobiles are obtained. The capacity of voice/data CDMA systems is also analyzed in [7], where both classes are modeled as VBR traffic. Adaptive transmission rates are not considered in the above references. In [1], the author considers the influence of the value of a fixed (non-adaptive) bandwidth per BE call on the Erlang capacity of the system (which also includes RT calls), taking into account that a lower bandwidth implies longer call durations. A limiting capacity (as the fixed bandwidth vanishes) is identified and computed. Related work [2], [9] has also been done for wireline ATM networks (although without the power control aspects and without the downgrading features of wireless).

The structure of this paper is as follows. The next section introduces the model and preliminaries. Section III computes the performance of RT and BE traffic in the case of a single sector using a matrix-geometric approach. This is then extended in Section IV to the multisector multicell case using a fixed-point argument. In Section V we provide numerical examples, and we end with a concluding section.

## II. PRELIMINARIES

We consider the uplink of a multi-service WCDMA system with $K$ service classes. Let $X_j$ be the number of ongoing calls of type $j$ in some given sector, and $\mathbf{X} = (X_1, \dots, X_K)$. In CDMA systems, in order for a signal to be received, the ratio of its received power to the sum of the background noise and interference must be greater than a given constant. For a given $\mathbf{X}$, this condition is as follows [5]:

$$ \frac{P_j}{N + I_{\text{own}} + I_{\text{other}} - P_j} \triangleq \gamma_j \ge \tilde{\Delta}'_j, \quad j = 1, \dots, K, \quad (1) $$

where $N$ is the background noise, and $I_{\text{own}}$ and $I_{\text{other}}$ are the total powers received from the mobiles within the considered sector and within the other sectors or cells, respectively. $\gamma_j$ is the signal-to-interference ratio (SIR), i.e., the ratio of the received power to the total noise and interference received at the base station, and $\tilde{\Delta}'_j$ is the required SIR for a call of class $j$, given by $\tilde{\Delta}'_j = \frac{E_j R_j}{N_o W}$, where $E_j$ is the energy per transmitted bit of type $j$, $N_o$ is the thermal noise density, $W$ is the WCDMA modulation bandwidth, and $R_j$ is the transmission rate of the type $j$ call.

The interference received from mobiles in the same sector is simply $I_{\text{own}} = \sum_{j=1}^K X_j P_j$. When $X_j$ is fixed for all $j=1, \dots, K$, we also make the standard assumption [5] that the other-cell interference is proportional to the own-cell interference by some constant $f$:

$$ I_{\text{other}} = f I_{\text{own}}. \quad (2) $$

Note that the above assumes perfect power control. Due to inaccuracies in the closed-loop fast power control mechanism, mainly caused by shadow fading of the radio signal, $\gamma_j$ may not equal $\tilde{\Delta}'_j$ at all times. We therefore define $\gamma_j$ to be a random variable of the form $\gamma_j = 10^{\xi_j/10}$, where $\xi_j \sim N(\mu_\xi, \sigma_\xi)$ includes the shadow fading component and $\sigma_\xi$ is the standard deviation of shadow fading, with typical values between 0.3 and 2 dB [4], [11]. It follows that $\gamma_j$ has a lognormal distribution given by $f_{\gamma_j}(x_j) = \frac{h}{x_j \sigma_\xi \sqrt{2\pi}} \exp\left(-\frac{(h \ln(x_j) - \mu_\xi)^2}{2\sigma_\xi^2}\right)$, where $h = 10/\ln 10$.

Since $\gamma_j$ is now a random variable, we can write condition (1) in terms of $\tilde{\gamma}_j$, the average received SIR. We would now like to determine the required SIR $\tilde{\Delta}_j$ such that $\tilde{\gamma}_j = \tilde{\Delta}_j$, where $\tilde{\Delta}_j$ includes power control errors and replaces $\tilde{\Delta}'_j$ in (1). We determine $\tilde{\Delta}_j$ from the outage condition $\Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta$ [12]. The reliability $\beta$ is typically set to 99%. We have:

$$ \Pr[\gamma_j \ge \tilde{\Delta}'_j] = \beta = \int_{\tilde{\Delta}'_j}^{\infty} f_{\gamma_j}(x) dx = Q \left( \frac{h \ln \tilde{\Delta}'_j - \mu_{\xi}}{\sigma_{\xi}} \right) $$

where $Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-t^2/2} dt.$

By inverting the above Q-function, we have:

$$ \tilde{\Delta}'_j = 10^{\left(\frac{Q^{-1}(\beta)\sigma_\xi}{10} + \frac{\mu_\xi}{10}\right)} \quad (3) $$

Since $\gamma_j$ is a lognormal random variable, its expectation is given by $\tilde{\gamma}_j = \exp\left(\frac{\sigma_\xi^2}{2h^2} + \frac{\mu_\xi}{h}\right)$. Solving for $\mu_\xi$, we obtain:

$$ \mu_\xi = h \ln \tilde{\gamma}_j - \frac{\sigma_\xi^2}{2h} \quad (4) $$

We use (3) and (4) to get:

$$ \tilde{\Delta}'_j = \tilde{\gamma}_j 10^{\frac{Q^{-1}(\beta)\sigma_\xi}{10} - \frac{\sigma_\xi^2}{20h}} $$

We then have the SIR condition in (1) modified as follows:

$$ \tilde{\gamma}_j \geq \tilde{\Delta}'_j \Gamma = \frac{E_j R_j}{N_o W} \Gamma \triangleq \tilde{\Delta}_j \quad (5) $$

where

$$ \Gamma = 10^{\frac{\sigma_{\xi}^{2}}{20h} - \frac{Q^{-1}(\beta)\sigma_{\xi}}{10}}. $$

Note that $\Gamma$ is independent of the service class. The value of $\Gamma$ is a function of the standard deviation of the shadow fading, $\sigma_\xi$, whose value varies with user mobility. Differences in signal fading due only to user mobility are not considered in this paper. The above modified required SIR now includes a correction for imperfect power control.

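As an illustration, the correction factor can be evaluated numerically. The sketch below is our own (not code from the paper) and uses only Python's standard library: `NormalDist().inv_cdf` supplies the inverse Gaussian CDF, and $Q^{-1}(\beta) = \Phi^{-1}(1-\beta)$.

```python
from math import log
from statistics import NormalDist

def gamma_correction(sigma_xi_db, beta=0.99):
    """Power-control correction factor
    Gamma = 10^(sigma^2/(20h) - Q^{-1}(beta)*sigma/10), with h = 10/ln(10).
    Q^{-1}(beta) is the inverse Gaussian tail function, i.e. the z
    with P[Z >= z] = beta, so Q^{-1}(beta) = Phi^{-1}(1 - beta)."""
    h = 10.0 / log(10.0)
    q_inv = NormalDist().inv_cdf(1.0 - beta)
    return 10.0 ** (sigma_xi_db ** 2 / (20.0 * h) - q_inv * sigma_xi_db / 10.0)
```

For $\sigma_\xi = 1$ dB and $\beta = 0.99$ this gives $\Gamma \approx 1.75$, i.e., the required SIR is raised by roughly 2.4 dB to absorb power-control errors.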
Revisiting (1), we notice that in order to serve a large number of ongoing calls, that is, to allow the $X_j$ to be high, we must keep the $P_j$ as low as possible. We therefore solve for the minimum required received power $P_j$ satisfying (5), which is known to be the one that gives strict equality $\tilde{\gamma}_j = \tilde{\Delta}_j$ in (5):

$$ P_j = \frac{N\Delta_j}{1 - (1+f)\sum_{k=1}^{K} X_k \Delta_k} \quad (6) $$

where $\Delta_j = \frac{\tilde{\Delta}'_j}{1+\tilde{\Delta}'_j}$ turns out to be the signal-to-total-power ratio, STPR (see [1, eq. 4]).

Define the loading as:

$$ \theta = \sum_{j=1}^{K} X_j \Delta_j(\mathbf{X}). \quad (7) $$

This definition reflects the fact that $\Delta_j$ is a function of the number of calls of each type in the system (since it depends on the transmission rate $R_j$, and $R_j$ will be determined as a function of the system state). In this paper we consider both real-time (RT) and best-effort (BE) services that receive a variable rate. As explained in Section III, the rate received by RT calls, and thus $\Delta_{\text{RT}}$, depends on the number of RT calls. The rate received by BE calls depends on both $X_{\text{RT}}$ and $X_{\text{BE}}$. We maintain this dependence throughout the paper; however, for notational convenience we will sometimes drop the argument $(\mathbf{X})$.

Now we may define the integer capacity of the cell as the set $X^*$ of vectors $\mathbf{X}$ such that the received powers of the mobiles stay finite, i.e., the denominator of (6) does not vanish [1]. In the equation for the minimum received power shown in (6), this implies the condition $\theta(1+f) < 1$. The system prevents, through Call Admission Control (CAC), the denominator from vanishing; more generally, it is desirable to be even more conservative and to impose a bound on the capacity, $\Theta_{\epsilon} = 1 - \epsilon$ with $\epsilon > 0$. Thus the CAC will ensure that $\theta \le \Theta_{\epsilon}/(1+f)$. Later on we shall consider special policies for RT traffic that combine CAC with some rate adaptation, along with rate adaptation for NRT traffic, which will result in a further restriction on the number of RT calls that the system can handle (also called, with some abuse of notation, the integer capacity of RT traffic).

## III. SINGLE SECTOR IN ISOLATION

Let us first consider a single sector, so that we may exclude interference from other sectors and other cells in the calculations, thereby setting $f = 0$ throughout this section. We consider a base station with uplink capacity such that

$$ \theta \le \Theta_{\epsilon}. \quad (8) $$

Here we define *capacity* in terms of the sum of the $\Delta$'s (STPRs) of all users. We call the individual required STPR that corresponds to a particular rate the individual normalized bandwidth. For example, a call that requires a rate of $y$ bps requires a normalized bandwidth of $\Delta = \frac{E/N_o}{W/y+E/N_o}$, where $E/N_o$ is the requirement specified for the given service type of the call.

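The rate-to-bandwidth mapping and its inverse follow directly from the formula above. The following Python sketch is our own illustration, using the paper's chip rate as a default:

```python
def norm_bandwidth(rate_bps, eb_no_db, W=3.84e6):
    """Individual normalized bandwidth (STPR) for a call at rate_bps:
    Delta = (E/N_o) / (W/rate + E/N_o), with E/N_o given in dB."""
    eb_no = 10.0 ** (eb_no_db / 10.0)
    return eb_no / (W / rate_bps + eb_no)

def rate_from_delta(delta, eb_no_db, W=3.84e6):
    """Inverse mapping: the transmission rate supported by a given STPR."""
    eb_no = 10.0 ** (eb_no_db / 10.0)
    return delta * W / ((1.0 - delta) * eb_no)
```

For the 12.2 kbps AMR rate with $E_{\text{RT}}/N_o = 4.1$ dB this gives $\Delta \approx 0.0081$, i.e., each full-rate voice call consumes less than one percent of the loading budget.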
### A. Real Time Calls

We assume a single type of RT call capable of accepting a variable rate, with a requested transmission rate $R_{\text{RT}}^r$. From (5) and the definition of $\Delta_j$ that follows (6), we derive the required bandwidth $\Delta_{\text{RT}}^r$ that corresponds to rate $R_{\text{RT}}^r$:

$$ \Delta_{\text{RT}}^{r} = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^{r} + E_{\text{RT}}/N_o}. $$

We now introduce the parameters of the call admission control for the RT traffic. All BE calls in the sector share equally the capacity remaining after RT calls have been allocated their required normalized bandwidth. In addition, we assume that some portion of the capacity is reserved for BE calls; thus the RT calls have a maximum capacity, denoted by $L_{\text{RT}}$. Let $L_{\text{BE}}$ denote the minimum portion of the total capacity available for BE calls. We then have $L_{\text{BE}} = \Theta_{\epsilon} - L_{\text{RT}}$, and the following condition for the capacity bound on RT calls:

$$ X_{\text{RT}}\Delta_{\text{RT}} \le L_{\text{RT}} \quad (9) $$

where $\Delta_{\text{RT}}$ is the normalized bandwidth received by each RT call. Note that this value will depend on the number of RT calls, and thus may vary.

The integer capacity for RT calls, such that they all receive the requested rate $R_{\text{RT}}^r$ and bandwidth $\Delta_{\text{RT}}^r$, is then given by

$$ N_{\text{RT}} = \left\lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^r} \right\rfloor. $$

1) CAC and GoS control: Under a strict call admission control scheme for RT calls, new RT call arrivals would be blocked and cleared whenever there are $N_{\text{RT}}$ RT calls in the sector. In UMTS, however, we can control the GoS by providing RT calls with a variable transmission rate [3]. In such a case, we may admit more than $N_{\text{RT}}$ RT calls at the expense of reducing the transmission rate of all RT calls, thus keeping the total normalized bandwidth occupied by all RT calls within the limit. Let us then define a second threshold for admission of RT calls, $M_{\text{RT}} > N_{\text{RT}}$. Call admission control for RT calls is then as follows. As long as the number of RT calls is at most $N_{\text{RT}}$, all RT calls receive the requested normalized bandwidth $\Delta_{\text{RT}}^r$. When the number $j$ of RT calls is more than $N_{\text{RT}}$ but not more than $M_{\text{RT}}$, all RT calls receive the same reduced normalized bandwidth, denoted here by $\Delta_{\text{RT}}^j$, such that (9) is satisfied with equality. If there are $M_{\text{RT}}$ RT calls in the sector, new RT call arrivals are blocked and cleared. $M_{\text{RT}}$ may be chosen so that RT calls receive a minimum transmission rate of $R_{\text{RT}}^m$, with normalized bandwidth $\Delta_{\text{RT}}^m$, even in the worst case. The integer capacity for RT calls is then $M_{\text{RT}} = \lfloor \frac{L_{\text{RT}}}{\Delta_{\text{RT}}^m} \rfloor$, where $\Delta_{\text{RT}}^m = \frac{E_{\text{RT}}/N_o}{W/R_{\text{RT}}^m+E_{\text{RT}}/N_o}$, as derived from (5). The bandwidth received by each RT call at time $t$ is thus a function of $X_{\text{RT}}(t)$:

$$ \Delta_{\text{RT}}(X_{\text{RT}}(t)) = \begin{cases} \Delta_{\text{RT}}^{r} & 1 \le X_{\text{RT}}(t) \le N_{\text{RT}}; \\ L_{\text{RT}}/X_{\text{RT}}(t) & N_{\text{RT}} < X_{\text{RT}}(t) \le M_{\text{RT}}. \end{cases} \quad (10) $$

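The thresholds and the per-call bandwidth rule (10) can be sketched as follows; this is an illustrative helper of our own, where `L_RT`, `delta_r`, and `delta_m` stand for $L_{\text{RT}}$, $\Delta_{\text{RT}}^r$, and $\Delta_{\text{RT}}^m$:

```python
from math import floor

def gos_allocation(L_RT, delta_r, delta_m):
    """Thresholds and per-call bandwidth under the adaptive-GoS CAC:
    up to N_RT calls get the requested delta_r; between N_RT and M_RT
    all calls share L_RT equally; beyond M_RT arrivals are blocked."""
    N_RT = floor(L_RT / delta_r)
    M_RT = floor(L_RT / delta_m)  # floor keeps every call at or above the minimum rate
    def delta(x):
        """Bandwidth per call with x RT calls present, 1 <= x <= M_RT."""
        return delta_r if x <= N_RT else L_RT / x
    return N_RT, M_RT, delta
```

With, say, $L_{\text{RT}} = 0.5$ and the illustrative values $\Delta_{\text{RT}}^r = 0.0081$, $\Delta_{\text{RT}}^m = 0.0053$, the total RT bandwidth $x\,\Delta_{\text{RT}}(x)$ never exceeds $L_{\text{RT}}$ for any admissible $x$.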
2) RT Traffic Model: We assume that RT calls arrive according to a Poisson process with rate $\lambda_{\text{RT}}$. The duration of an RT call is assumed to be exponentially distributed with mean $1/\mu_{\text{RT}}$, and is not affected by the allocated bandwidth. Let $X_1(t)$ and $X_2(t)$ denote the number of RT and BE customers, respectively, at time $t$ in the given sector. The number of RT calls in the system is not affected by the BE calls. Therefore, $X_1(t)$ follows a birth-death process with birth rate $\lambda_{\text{RT}}$ and death rate $x\mu_{\text{RT}}$ in state $x$. The steady-state probabilities $\pi_{\text{RT}}(x)$ of the number of RT calls $x$ in the system are given by:

$$ \mathrm{Pr}[X_{\mathrm{RT}} = x] = \lim_{t \to \infty} \mathrm{Pr}[X_{\mathrm{RT}}(t) = x] = \frac{\rho_{\mathrm{RT}}^x / x!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (11) $$

where $\rho_{\mathrm{RT}} = \lambda_{\mathrm{RT}}/\mu_{\mathrm{RT}}$. For RT calls, we are interested in the call blocking probability and the average throughput. The call blocking probability is given by:

$$ P_B^{\mathrm{RT}} = \pi_{\mathrm{RT}}(M_{\mathrm{RT}}) = \frac{\rho_{\mathrm{RT}}^{M_{\mathrm{RT}}}/M_{\mathrm{RT}}!}{\sum_{i=0}^{M_{\mathrm{RT}}} \rho_{\mathrm{RT}}^i / i!} \quad (12) $$

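Equations (11) and (12) are straightforward to evaluate; a minimal Python sketch:

```python
from math import factorial

def rt_distribution(lam_rt, mu_rt, M_RT):
    """Truncated-Poisson stationary distribution (11) of the RT
    birth-death process, and the blocking probability (12) = pi_RT(M_RT)."""
    rho = lam_rt / mu_rt
    w = [rho ** i / factorial(i) for i in range(M_RT + 1)]
    z = sum(w)
    pi = [x / z for x in w]
    return pi, pi[M_RT]
```

With $M_{\text{RT}} = 1$ and $\rho_{\text{RT}} = 1$ this reduces to the familiar Erlang-B value $1/2$.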
We define $r(x)$ to be the transmission rate received by RT calls when there are $x$ RT calls in the sector:

$$ r(X_{\text{RT}}) = \frac{\Delta_{\text{RT}}(X_{\text{RT}}) W}{(1 - \Delta_{\text{RT}}(X_{\text{RT}})) E_{\text{RT}}/N_o} $$

Since the transmission rate of RT calls is affected by the number of RT calls, we would like our definition of expected throughput to include a measure of the number of RT calls in the sector. We define the expected throughput per call as the ratio of the expected global throughput to the expected number of RT calls in the sector:

$$ \mathbb{E}[r(X_{\mathrm{RT}})] = \frac{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x]\, x\, r(x)}{\sum_{x=1}^{M_{\mathrm{RT}}} \mathrm{Pr}[X_{\mathrm{RT}} = x]\, x} \quad (13) $$

### B. Best-Effort Calls

We define $C(x)$ to be the capacity available to BE calls when there are $x$ RT calls:

$$ C(x) = \begin{cases} \Theta_{\epsilon} - x \Delta_{\text{RT}}^r , & x \le N_{\text{RT}}; \\ L_{\text{BE}} , & N_{\text{RT}} < x \le M_{\text{RT}}. \end{cases} $$

All BE calls in the sector share equally the available bandwidth. We can then model BE service by a processor sharing (PS) discipline with a random service capacity. We study two performance metrics for BE calls: the average sojourn time of a BE call for given values of RT and BE load, and the maximum BE arrival rate such that the average delay is always bounded by a given constant.

Best-effort calls arrive according to a Poisson process with rate $\lambda_{\text{BE}}$. The required workloads of BE calls, i.e., the file sizes, are i.i.d. exponentially distributed with mean $1/\mu_{\text{BE}}$. The departure rate of BE calls is given by $\nu(X_{\text{RT}}) = \mu_{\text{BE}}R_{\text{BE}}(X_{\text{RT}})$, where $R_{\text{BE}}(X_{\text{RT}})$ is the total BE rate corresponding to the available BE capacity $C(X_{\text{RT}})$:

$$ R_{\text{BE}}(X_{\text{RT}}) = \frac{C(X_{\text{RT}})W}{(1 - C(X_{\text{RT}})) E_{\text{BE}}/N_o}. $$

We assume no call admission control for BE calls. The process $(X_2(t), X_1(t))$ is an irreducible Markov chain. It is ergodic if and only if the average service capacity available to BE calls is greater than the BE load (as in [2]):

$$ \mu_{\text{BE}} \mathbb{E} R_{\text{BE}}(X_{\text{RT}}) > \lambda_{\text{BE}}. \quad (14) $$

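Condition (14) can be checked numerically once the RT distribution (11) and a capacity profile $C(\cdot)$ are known. The sketch below is our own, with the paper's $E_{\text{BE}}/N_o$ and chip rate as defaults; here $\mu_{\text{BE}}$ is the reciprocal of the mean file size in bits, so that $\mu_{\text{BE}} R_{\text{BE}}$ is a departure rate in calls per second:

```python
def be_stable(lam_BE, mu_BE, pi_RT, C, eb_no_db_BE=3.1, W=3.84e6):
    """Ergodicity condition (14): lambda_BE < mu_BE * E[R_BE(X_RT)],
    where R_BE(x) = C(x) W / ((1 - C(x)) E_BE/N_o) and pi_RT is the
    RT distribution (11), given as a list indexed by x."""
    eb_no = 10.0 ** (eb_no_db_BE / 10.0)
    mean_R = sum(p * (C(x) * W / ((1.0 - C(x)) * eb_no))
                 for x, p in enumerate(pi_RT))
    return lam_BE < mu_BE * mean_R
```

As a quick check, a constant BE capacity of half the cell with 20 kByte files ($\mu_{\text{BE}} = 1/160000$ per bit) sustains roughly a dozen BE arrivals per second.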
Specifically, the process $(X_2(t), X_1(t))$ is a homogeneous quasi birth and death process (QBD) with generator $Q$. The stationary distribution of this system, $\pi$, is calculated from $\pi Q = 0$ together with the normalization condition $\pi e = 1$, where $e$ is a vector of ones of the proper dimension. $\pi$ represents the steady-state probability of the two-dimensional process lexicographically: we partition $\pi$ as $[\pi(0), \pi(1), \dots]$ with the vector $\pi(i)$ for level $i$, where the levels correspond to the number of BE calls in the system. We may further partition each level according to the number of RT calls: $\pi(i) = [\pi(i, 0), \pi(i, 1), \dots, \pi(i, M_{\text{RT}})]$, for $i \ge 0$.

The generator $Q$ has the form:

$$ Q = \begin{bmatrix} B & A_0 & 0 & 0 & \cdots \\ A_2 & A_1 & A_0 & 0 & \cdots \\ 0 & A_2 & A_1 & A_0 & \cdots \\ 0 & 0 & \ddots & \ddots & \ddots \end{bmatrix} \quad (15) $$

where the matrices $B$, $A_0$, $A_1$, and $A_2$ are square matrices of size $(M_{\text{RT}} + 1)$. $A_0$ corresponds to a BE connection arrival and is given by $A_0 = \text{diag}(\lambda_{\text{BE}})$. $A_2$ corresponds to a departure of a BE call; the departure rate for BE calls is $\nu(X_{\text{RT}})$, so $A_2 = \text{diag}(\nu(i);\ 0 \le i \le M_{\text{RT}})$. $A_1$ corresponds to the arrival and departure processes of the RT calls, and is tri-diagonal:

$$ \begin{align*} A_1[i, i+1] &= \lambda_{\text{RT}} \\ A_1[i, i-1] &= i\mu_{\text{RT}} \\ A_1[i, i] &= -\lambda_{\text{RT}} - i\mu_{\text{RT}} - \lambda_{\text{BE}} - \nu(i) \end{align*} $$

We also have $B = A_1 + A_2$.

The steady-state equations can be written as:

$$ 0 = \pi(0)B + \pi(1)A_2 \quad (16) $$

$$ 0 = \pi(i-1)A_0 + \pi(i)A_1 + \pi(i+1)A_2, \quad i \ge 1 \quad (17) $$

We follow the matrix-geometric solution to this QBD [8]. Assuming stability as in (14), the steady-state solution $\pi$ exists and is given by:

$$ \pi(i) = \pi(0)\mathbf{R}^i \quad (18) $$

where the matrix $\mathbf{R}$ is the minimal non-negative solution of the equation:

$$ A_0 + \mathbf{R} A_1 + \mathbf{R}^2 A_2 = 0 \quad (19) $$

In order to solve for $\mathbf{R}$, we find it efficient to write $A_1 = T - S$, where $S$ is a diagonal matrix and $T$ has a zero diagonal. The diagonal matrix $S$ is positive and invertible, and we may write (19) as $\mathbf{R} = (A_0 + \mathbf{R}T + \mathbf{R}^2 A_2)S^{-1}$. This equation can then be solved by successive iterations starting with $\mathbf{R} = 0$, the zero matrix.

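This successive-substitution scheme is easy to implement. The following self-contained Python sketch builds the blocks of (15) for a hypothetical small instance ($M_{\text{RT}} = 2$, with illustrative rates and $\nu(i)$ values of our choosing) and iterates (19); the diagonal of $A_1$ is adjusted at the boundary $i = M_{\text{RT}}$, where no further RT arrival is admitted:

```python
lam_RT, mu_RT, lam_BE = 0.5, 1.0, 1.0
nu = [3.0, 2.0, 1.5]        # nu(i): BE departure rate with i RT calls present
n = len(nu)                 # n = M_RT + 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Blocks of the QBD generator (15).
A0 = [[lam_BE if i == j else 0.0 for j in range(n)] for i in range(n)]
A2 = [[nu[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
A1 = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i + 1 < n:
        A1[i][i + 1] = lam_RT
    if i > 0:
        A1[i][i - 1] = i * mu_RT
    A1[i][i] = -(A1[i][i + 1] if i + 1 < n else 0.0) - i * mu_RT - lam_BE - nu[i]

# Split A_1 = T - S (S positive diagonal, T zero-diagonal) and iterate
# R <- (A_0 + R T + R^2 A_2) S^{-1}, starting from R = 0.
S_inv = [[-1.0 / A1[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
T = [[A1[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]
R = [[0.0] * n for _ in range(n)]
for _ in range(5000):
    RT_ = matmul(R, T)
    R2A2 = matmul(matmul(R, R), A2)
    R_new = matmul([[A0[i][j] + RT_[i][j] + R2A2[i][j] for j in range(n)]
                    for i in range(n)], S_inv)
    diff = max(abs(R_new[i][j] - R[i][j]) for i in range(n) for j in range(n))
    R = R_new
    if diff < 1e-13:
        break
```

Since the instance satisfies (14) ($\lambda_{\text{BE}} = 1 < \mathbb{E}[\nu] \approx 2.58$), the iteration converges monotonically from below to the minimal non-negative solution, and the residual of (19) should vanish numerically.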
Once the matrix $\mathbf{R}$ is known, we may find $\pi(0)$ using the boundary condition (16) and the normalization $\pi e = 1$, which using (18) is equivalent to $\pi(0)(I - \mathbf{R})^{-1}e = 1$. The marginal distribution of the number of RT calls can easily be obtained using (11). The marginal probability of the number of BE calls is

$$ \mathrm{Pr}[X_{\mathrm{BE}} = i] = \sum_{j=0}^{M_{\mathrm{RT}}} \pi(i,j) = \pi(i)e = \pi(0)\mathbf{R}^i e. $$

One way to compute the above is to find the $M_{\mathrm{RT}} + 1$ eigenvalues and corresponding eigenvectors of the matrix $\mathbf{R}$. All $M_{\mathrm{RT}} + 1$ eigenvalues of $\mathbf{R}$ are distinct [9], and therefore $\mathbf{R}$ is diagonalizable. Define $D$ to be a diagonal matrix containing the eigenvalues $r_i$ of $\mathbf{R}$ on the diagonal, and $V$ to be a matrix containing the corresponding eigenvectors $v_i$ as columns. We then have:

$$ \mathrm{Pr}[X_{\mathrm{BE}} = i] = \pi(0)\mathbf{R}^i e = \pi(0)V D^i V^{-1}e = \sum_{k=0}^{M_{\mathrm{RT}}} a_k r_k^i $$

where $a_k = \pi(0)v_k e'_k V^{-1}e$ and $e'_k$ is a zero vector of the proper dimension with the $k$th element equal to one. The expectation of $X_{\mathrm{BE}}$ is then:

$$ \mathbb{E}[X_{\text{BE}}] = \sum_{k=0}^{M_{\text{RT}}} a_k \frac{r_k}{(1-r_k)^2} \quad (20) $$

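Each term of (20) is the mean of a weighted geometric tail, $\sum_i i\, a_k r_k^i = a_k r_k/(1-r_k)^2$. As a sanity check of our own: in the degenerate case $M_{\text{RT}} = 0$ the QBD collapses to an M/M/1 queue with a single "eigenvalue" $r = \lambda_{\text{BE}}/\nu$ and coefficient $a = 1-r$, so (20) must return the M/M/1 mean $r/(1-r)$:

```python
def mean_be(coeffs):
    """E[X_BE] from eq. (20): sum_k a_k * r_k / (1 - r_k)^2,
    given an iterable of (a_k, r_k) pairs."""
    return sum(a * r / (1.0 - r) ** 2 for a, r in coeffs)
```

Direct truncated summation of $\sum_i i (1-r) r^i$ agrees with the closed form.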
We can now use Little's law to calculate the average sojourn time of a BE session, $T_{\text{BE}} = \mathbb{E}[X_{\text{BE}}]/\lambda_{\text{BE}}$. Having obtained the expected delay of BE traffic in terms of the system parameters, one can now obtain the delay-aware capacity of BE traffic, i.e., the arrival rate of BE calls that the system can handle such that their expected delay is bounded by a given constant.

## IV. EXTENSION TO MULTIPLE SECTORS

In this section we provide an analysis for the multi-sector multi-cell case, by including an approximation for the other-sector interference, $I_{\text{other}}$. In (2) above, we made the assumption that $I_{\text{other}}$ is proportional to $I_{\text{own}}$ by a constant $f$. Such a definition of other-sector interference, and the subsequent derivation of the minimum required received power in (6), holds for a static network with a fixed number of mobiles. However, in our dynamic model of stochastic arrivals and holding times, such a definition may not hold at all times. We therefore approximate the instantaneous interference $I_{\text{other}}$ by its average, modifying (2) to $I_{\text{other}} = f\,\mathbb{E}[I_{\text{own}}]$. The minimum required received power in (6) then becomes:

$$ P_j = \frac{N \Delta_j}{1 - \sum_{k=1}^{K} X_k \Delta_k - f \sum_{k=1}^{K} \mathbb{E}[X_k \Delta_k(\mathbf{X})]} $$

Let $G$ denote the expected other-sector (and other-cell) interference, $G = f \sum_{j=1}^{K} \mathbb{E}[X_j \Delta_j(\mathbf{X})]$. The equation for $P_j$ above then implies the condition $\theta \le 1-G$. This condition is equivalent to (8) with $\Theta_G = 1-G$ replacing $\Theta_\epsilon$.

The expected interference due to RT calls is calculated as follows:

$$ f\mathbb{E}[X_{\text{RT}}\Delta_{\text{RT}}(X_{\text{RT}})] = f \sum_{i=0}^{M_{\text{RT}}} \pi_{\text{RT}}(i)\, i\, \Delta_{\text{RT}}(i) $$

where we use (11) for $\pi_{\text{RT}}(i)$. For BE calls, we need not calculate the full steady-state distribution $\pi$. Since BE calls use all of the remaining capacity, the sum of the STPRs of the BE calls, whenever there is at least one BE call, is simply the available BE capacity, $C(X_{\text{RT}})$. The expected interference due to BE calls is given by:

$$ f\mathbb{E}[X_{\text{BE}}\Delta_{\text{BE}}(\mathbf{X})] = f(1-\pi(0)e)\sum_{i=0}^{M_{\text{RT}}} \pi_{\text{RT}}(i)C(i) $$

where $\pi(0)e$ is the probability that there are no BE calls in the sector, and can be calculated using only (16) and the normalization condition $\pi e = 1$. For each fixed value of $G$, say $g$, we can obtain the probabilities $\pi_{\text{RT}}$ and $\pi(0)$ using $\Theta_g$ instead of $\Theta_\epsilon$. We denote these values by $\pi_{\text{RT}}^g$ and $\pi^g(0)$ respectively, and the expectation operator corresponding to these probabilities by $\mathbb{E}^g$. Define $F(g) = f \sum_{j=1}^{K} \mathbb{E}^g[X_j \Delta_j(\mathbf{X})]$. $G$ is then the solution of the fixed-point equation:

$$ g = F(g) \quad (21) $$

We can now set the BE threshold as $L_{\text{BE}}^g = \Theta_g - L_{\text{RT}}$. Under such a definition, for a given $L_{\text{RT}}$, $F(g)$ can be shown to be continuous in $g$. $F$ also maps a closed interval into itself, and thus by the Brouwer fixed-point theorem a solution exists. Moreover, $F(g)$ can be shown to be nonincreasing in $g$, implying uniqueness of the solution to (21).

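Since $F$ is continuous and nonincreasing, the fixed point can be found by damped successive substitution. This is a sketch of ours: plain substitution on a decreasing map can oscillate, so we average successive iterates, and the toy `F_toy` standing in for the interference functional is purely illustrative:

```python
def solve_interference(F, g0=0.0, tol=1e-12, max_iter=100000):
    """Solve g = F(g) by damped successive substitution."""
    g = g0
    for _ in range(max_iter):
        g_new = 0.5 * (g + F(g))   # damping: average current and mapped values
        if abs(g_new - g) < tol:
            return g_new
        g = g_new
    return g

# Toy nonincreasing map standing in for F(g); its unique fixed
# point is 0.3 / 1.3.
F_toy = lambda g: 0.3 * (1.0 - g)
```

For `F_toy` the damped map is a contraction with factor 0.35, so convergence is fast and the iterate settles at $0.3/1.3 \approx 0.2308$.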
## V. NUMERICAL RESULTS

In this section we perform numerical experiments to evaluate the performance of RT and BE calls. The rate requested by the RT calls is 12.2 kbps (the maximum rate for the AMR speech service in UMTS [3]). For the results shown here we have assumed a minimum acceptable rate of 7.95 kbps, which is one of the eight possible rates for the AMR speech class. We assume that the set of rates acceptable to RT calls is continuous, and that there is no minimum rate for BE calls. The average file size of a BE call is taken to be 20 kBytes. We assume $E_{\text{RT}}/N_o = 4.1\,\text{dB}$, $E_{\text{BE}}/N_o = 3.1\,\text{dB}$ [3], a chip rate $W = 3.84\,\text{Mcps}$, and $\Theta_\epsilon = 1-10^{-5}$. We define the load in terms of the total RT rate available, $R_T$, which in turn is defined as the product of the minimum RT rate and the integer capacity for RT calls if there were no BE threshold: $R_T = \lfloor\frac{\Theta_\epsilon}{\Delta_{\text{RT}}^m}\rfloor R_{\text{RT}}^m$. The normalized load for RT calls is defined by $\tilde{\rho}_{\text{RT}} = \frac{\lambda_{\text{RT}} R_{\text{RT}}^r}{\mu_{\text{RT}} R_T}$, and the normalized BE load by $\tilde{\rho}_{\text{BE}} = \frac{\lambda_{\text{BE}}}{\mu_{\text{BE}} R_T}$.

We consider the heavy-traffic regime, where $\tilde{\rho}_{\text{RT}} = 0.5$ and $\tilde{\rho}_{\text{BE}} = 0.55$. We keep the normalized loads constant and vary the holding time of the RT calls. We evaluate the performance metrics of interest as a function of the BE reserved capacity, $L_{\text{BE}}$.

Figure 1 shows the change in the RT call blocking probability, computed using (12), as the BE threshold $L_{\text{BE}}$ is varied from 0 to $\Theta_\epsilon$. As expected, as $L_{\text{BE}}$ is increased, less capacity is available for RT calls, and their blocking probability increases. The tradeoff between the service qualities of BE and RT calls can be observed in Figures 2 and 3, which show the expected RT throughput and the expected BE sojourn time, respectively. In Figure 2 we see that the expected RT throughput, computed using (13), is close to the requested rate of 12.2 kbps up to a BE threshold of approximately $L_{\text{BE}} = 0.35$. As $L_{\text{BE}}$ is increased further, the expected RT throughput gradually drops, always remaining above the minimum rate of 7.95 kbps.

Fig. 1. RT Call Blocking for heavy traffic
|
| 289 |
+
|
| 290 |
+
The sensitivity of BE service quality is seen in Figures 3 and 4 with respect to not only the BE threshold, but also the RT call duration. In Figure 3 the expected BE sojourn time, computed using (20) and Little's Law, decreases as $L_{\text{BE}}$ is increased.
|
| 291 |
+
---PAGE_BREAK---
|
| 292 |
+
|
| 293 |
+
Fig. 2. Expected RT Throughput
|
| 294 |
+
|
| 295 |
+
For small values of $L_{BE}$ we see that the expected BE sojourn time varies greatly with increasing $L_{BE}$ when the duration of RT calls is large (smaller values of $\mu_{RT}$). The duration of the RT calls determines the time scale of the evolution of the number of RT calls in the system, and thus the available capacity for the BE calls. When the mean duration of RT calls is small, the number of RT calls evolves much faster relative to the BE calls, and thus we would expect the BE calls to obtain a capacity that is fairly constant. When the mean duration of RT calls is large, the changes in capacity received by BE calls might cause the BE queue to build up for long periods during which there are many ongoing RT calls, thus resulting in higher average sojourn times. For related results for non-variable RT GoS, see [2] and [9]. We observe from the figure that this effect can be diminished by increasing the BE threshold: an increase in $L_{BE}$ means that the capacity reserved for BE calls is substantial compared to the capacity remaining after RT calls are served, an effect similar to having a constant capacity.

Fig. 3. Expected BE Sojourn Time
The delay aware capacity of BE calls for a fixed RT load is shown in Figure 4. Here, we find the maximum BE arrival rate such that $T_{BE} \le c$, where $c$ is a constant, set to 0.25 in this figure. As expected, the maximum BE arrival rate increases as $L_{BE}$ increases, allowing a larger portion of the total capacity for BE calls. We note again the sensitivity to the mean RT call duration at smaller values of $L_{BE}$, where the delay aware capacity approximately doubles when $\mu_{RT}$ is changed from 10 to 0.001.

Fig. 4. BE Delay Aware Capacity
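Since the expected BE sojourn time increases with the BE arrival rate, the delay aware capacity described above can be computed numerically by a monotone (bisection) search for the largest rate satisfying $T_{BE} \le c$. A minimal sketch; the function `t_be` below is a hypothetical M/M/1-style stand-in for the paper's actual sojourn-time expression (equation (20) with Little's Law), used only to illustrate the search:

```python
def t_be(lam, capacity=10.0):
    """Hypothetical expected BE sojourn time; a placeholder, not equation (20)."""
    if lam >= capacity:
        return float("inf")  # unstable queue: sojourn time diverges
    return 1.0 / (capacity - lam)

def delay_aware_capacity(c, capacity=10.0, tol=1e-9):
    """Largest arrival rate lam with t_be(lam) <= c, found by bisection.

    Relies only on t_be being increasing in lam below saturation, which holds
    for any sensible queueing model.
    """
    lo, hi = 0.0, capacity
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if t_be(mid, capacity) <= c:
            lo = mid  # mid still meets the delay bound; search higher rates
        else:
            hi = mid
    return lo

# With the placeholder model, t_be(lam) <= 0.25 iff lam <= capacity - 4.
print(delay_aware_capacity(0.25))
```

The same search applies unchanged once `t_be` is replaced by the model's actual sojourn-time formula evaluated at a given RT load and BE threshold.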
## VI. CONCLUSION

We have modelled the resource sharing of BE applications with RT applications in WCDMA networks. Both types of traffic have the flexibility to adapt to the available bandwidth, but unlike BE traffic, RT traffic requires strict minimum bounds on throughput. We studied the performance of both BE and RT traffic and examined the impact of reserving some portion of the bandwidth for the BE applications. We introduced a novel capacity definition related to the delay of BE traffic and showed how to compute it.
## REFERENCES

[1] Eitan Altman. Capacity of multi-service CDMA cellular networks with best-effort applications. In *Proceedings of ACM MOBICOM*, September 2002.

[2] Eitan Altman, Damien Artiges, and Karim Traore. On the integration of best-effort and guaranteed performance services. *European Transactions on Telecommunications, Special Issue on Architectures, Protocols and Quality of Service for the Internet of the Future*, 2, February-March 1999.

[3] Harri Holma and Antti Toskala, editors. *WCDMA for UMTS: Radio Access for Third Generation Mobile Communications*. John Wiley & Sons, Ltd., 2001.

[4] Insoo Koo, JeeHwan Ahn, Jeong-A Lee, and Kiseon Kim. Analysis of Erlang capacity for the multimedia DS-CDMA systems. *IEICE Transactions on Fundamentals*, E82-A(5):849–55, May 1999.

[5] Jaana Laiho and Achim Wacker. Radio network planning process and methods for WCDMA. *Annales des Télécommunications*, 56(5-6):317–31, 2001.

[6] R. Leelahakriengkrai and R. Agrawal. Scheduling in multimedia CDMA wireless networks. Technical Report ECE-99-3, ECE Dept., University of Wisconsin-Madison, July 1999.

[7] N. Mandayam, J. Holtzman, and S. Barberis. Performance and capacity of a voice/data CDMA system with variable bit rate sources. In *Special Issue on Insights into Mobile Multimedia Communications*. Academic Press Inc., January 1997.

[8] M. F. Neuts. *Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach*. The Johns Hopkins University Press, 1981.

[9] R. Núñez Queija and O. J. Boxma. Analysis of a multi-server queueing model of ABR. *J. Appl. Math. Stoch. Anal.*, 11(3), 1998.

[10] S. Ramakrishna and Jack M. Holtzman. A scheme for throughput maximization in a dual-class CDMA system. *IEEE Journal on Selected Areas in Communications*, 16:830–44, 1998.

[11] Audrey M. Viterbi and Andrew J. Viterbi. Erlang capacity of a power controlled CDMA system. *IEEE Journal on Selected Areas in Communications*, 11(6):892–900, August 1993.

[12] Qiang Wu, Wei-Ling Wu, and Jiong-Pan Zhou. Effects of slow fading SIR errors on CDMA capacity. In *Proceedings of IEEE VTC*, pages 2215–17, 1997.
samples_new/texts_merged/4174805.md
ADDED
NON-CUT, SHORE AND NON-BLOCK POINTS IN CONTINUA

JOZEF BOBOK, PAVEL PYRIH AND BENJAMIN VEJNAR

Czech Technical University in Prague and Charles University in Prague, Czech Republic

**ABSTRACT.** In a nondegenerate continuum we study the set of non-cut points. We show that it can be stratified by inclusion into six natural subsets (containing also non-block and shore points). Among other results we show that every nondegenerate continuum contains at least two non-block points. Our investigation is further focused on both the classes of arc-like and circle-like continua.

# 1. INTRODUCTION
In Continuum theory it is often useful to know more about special kinds of points in a continuum. A well known example is the classical result of Moore (see [Bo67, p. 177]) stating that every nondegenerate continuum must have at least two non-cut points (a non-cut point in a connected space is a point whose complement is connected); the result has recently been generalized to shore points by Leonel ([Le13]) - precise definitions will be given later. We refer the reader to the book of Nadler ([Na92]) as a general reference for many notions used throughout the paper.

Several authors have investigated various properties of special sets in continua: Grace in [Gr81] provides a survey of results relating the notions of aposyndesis and weak cut point; Illanes in [Il01] shows that, in a dendroid, a finite union of pairwise disjoint shore subdendroids is a shore set; among other results, a simple example of a planar dendroid in which the union of two disjoint closed shore sets is not a shore set is presented in [BMPV14]; in [Na07] Nall explores the relationship between center points and shore points in a dendroid; Illanes and Krupski study blockers and non-blockers for several kinds of continua ([IKr11]); and, using the results of [IKr11], Escobedo, López and Villanueva ([ELV12]) characterize some classes of locally connected continua - for further information on the subject see also [PV12, Le13].

2010 Mathematics Subject Classification. 54F15, 54D10.

Key words and phrases. Continuum, shore point, non-cut point, arc-like continuum.

The work was supported by the grant GAČR 14-06989P. The third author is a junior researcher in the University Center for Mathematical Modeling, Applied Analysis and Computational Mathematics (Math MAC).
Our aim is to study blocking properties of points in a general continuum. We laminate the set of non-cut points into six natural subsets (containing non-block and shore points, among others) ordered by inclusion and consider various questions related to them. Our interest is mainly focused on the classes of arc-like and circle-like continua.

It is interesting to compare our lamination of non-cut points with several kinds of end points. The points of order one are points of colocal connectedness. In dendroids, end points in the classical sense are exactly the points which are not weak cut points. In chainable continua the notion of end point is usually used in another sense, and we show that in chainable continua the end points are closely related to non-block points.
The structure of our paper is as follows. In the next section we give the definitions of various kinds of non-cut points, followed by illustrating examples. We recall several related results known from the literature and discuss the Borel hierarchy with respect to the notions in question. We also show, generalizing the result from [Le13], that the set of non-block points spans every nondegenerate continuum. Section 3 is devoted to the class of chainable (arc-like) continua. Among other results we show that any chainable continuum consisting of non-block points is indecomposable (Corollary 3.6). In Section 4 we deal with circle-like continua. The main result of this part states that every point in a circle-like continuum is a non-block point (Theorem 4.5).

## 2. LAMINATION OF NON-CUT POINTS
We start with one illuminating example showing that the notion of a non-cut point is relatively weak. Let $X$ be the continuum defined as the union of two $\sin(1/x)$-continua with the common vertical segment $S$. One can easily see that the set of non-cut points consists of all points in $S$ and the two end points $e_1$ and $e_2$ of the sinusoidal branches. Choose $y \in S$ arbitrarily. The non-cut points $e_1, e_2$ do not have the same relationship to $X$ as the point $y$. There are arbitrarily small open neighborhoods of $e_1$, $e_2$ whose complements are connected, which is not true for $y$. The composant of $y$ is the whole $X$, whereas the composants of $e_1$, $e_2$ are proper subsets of $X$. The end points $e_1, e_2$ span $X$, i.e. no proper subcontinuum of $X$ contains both of them; at the same time the points in $S$ do not influence the spanning of $X$ at all.

So it seems meaningful to distinguish various non-cut points in a continuum. Let us consider the six kinds of non-cut points listed in Table 1.
<table><thead><tr><td>notation</td><td>notion</td><td>definition</td></tr></thead><tbody><tr><td>P1</td><td>x is a point of colocal connectedness</td><td>there are arbitrarily small open neighborhoods of x complements of which are connected</td></tr><tr><td>P2</td><td>x is not a weak cut point</td><td>any pair of points distinct from x is contained in a subcontinuum avoiding x</td></tr><tr><td>P3</td><td>x is a non-block point</td><td>there exist subcontinua A<sub>1</sub> ⊂ A<sub>2</sub> ⊂ … ⊂ X such that ⋃<sub>n</sub> A<sub>n</sub> is dense in X \ {x}</td></tr><tr><td>P4</td><td>x is a shore point</td><td>for each ε > 0 there is an ε-dense subcontinuum avoiding x</td></tr><tr><td>P5</td><td>x is not a strong center</td><td>every pair of nonempty open sets is intersected by a subcontinuum avoiding x</td></tr><tr><td>P6</td><td>x is a non-cut point</td><td>the complement of {x} is connected</td></tr></tbody></table>

TABLE 1.
It is easy to see that, in general, any property with a smaller number implies the one with a greater number. On the other hand, as we show later, no property in Table 1 with a greater number implies one with a smaller number.
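Restating the ordering from Table 1, the six properties form a single implication chain (and, by the examples below, none of the arrows reverses); a sketch in LaTeX:

```latex
\[
\underbrace{\mathrm{P1}}_{\text{colocal conn.}}
\;\Longrightarrow\;
\underbrace{\mathrm{P2}}_{\text{not weak cut}}
\;\Longrightarrow\;
\underbrace{\mathrm{P3}}_{\text{non-block}}
\;\Longrightarrow\;
\underbrace{\mathrm{P4}}_{\text{shore}}
\;\Longrightarrow\;
\underbrace{\mathrm{P5}}_{\text{not strong center}}
\;\Longrightarrow\;
\underbrace{\mathrm{P6}}_{\text{non-cut}}
\]
```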
Whyburn ([Wh39]) defined a continuum $X$ to be *semi-locally connected* at a point $x$ provided that if $U$ is an open subset of $X$ containing $x$, there is an open subset $V$ of $X$ lying in $U$ and containing $x$ such that $X \setminus V$ has a finite number of components. A continuum is semi-locally connected if it is semi-locally connected at every point.

A continuum $X$ is *aposyndetic* at a point $x$ provided that whenever $y$ is a point of $X$ distinct from $x$, there exists a subcontinuum $Y$ of $X$ and an open subset $U$ of $X$ such that $x \in U \subset Y \subset X \setminus \{y\}$. A continuum is aposyndetic if it is aposyndetic at every point.

**REMARK 2.1.** A continuum is semi-locally connected if and only if it is aposyndetic ([Ma05, Theorem 1.7.17]).

Using the results from [Wh39] we deduce that in semi-locally connected continua all the notions from Table 1 are equivalent.
**PROPOSITION 2.2.** Let $X$ be a semi-locally connected continuum. Then all properties P1-P6 are equivalent. In particular, this is true when $X$ is locally connected.

**PROOF.** It is sufficient to show that P6 implies P1. Let $x \in X$ be a non-cut point. Choose an arbitrary open neighborhood $U$ of $x$. We assume that $X$ is semi-locally connected at $x$, so by definition there is an open neighborhood $V$ of $x$ such that $x \in V \subset U$ and some components $C_1, \dots, C_n$ of $X \setminus V$ cover $X \setminus U$. By [Wh39, (6.2)] there exist subcontinua $X_{ij} \subset X \setminus \{x\}$, $1 \le i, j \le n$, such that $X_{ij}$ connects $C_i$ and $C_j$ for $i \neq j$. Obviously the set $W = U \setminus (\bigcup_i C_i \cup \bigcup_{i \neq j} X_{ij})$ is an open neighborhood of $x$ satisfying $W \subset U$ and for which $X \setminus W$ is connected. Since every locally connected continuum is semi-locally connected ([Wh39, Example 2(i)]), the last part of our proposition follows. $\square$
Another natural notion for a point $x$ in a continuum $X$ which fits our table could be:

P2': there exist subcontinua $A_1 \subset A_2 \subset \dots \subset X$ such that $X \setminus \{x\} = \bigcup_n A_n$.

Clearly P1 implies P2', which implies P2. However, it turns out that P2' only provides an alternative way to characterize the points with the property P2.
**PROPOSITION 2.3.** Let $X$ be a continuum containing a point $x$. The following two properties are equivalent.

(i) *x* has the property P2 (*x* is not a weak cut point).

(ii) *x* has the property P2'.

**PROOF.** Clearly (ii) implies (i). In order to show the opposite implication, let $x \in X$ not be a weak cut point. Let $B(x, 1/n)$ denote the open ball with center $x$ and radius $1/n$. Choose a point $p \in X \setminus \{x\}$ arbitrarily. Let $A_n$ be the connected component of $X \setminus B(x, 1/n)$ containing $p$. Then for each sufficiently large $n$ the set $A_n$ is a continuum. We may assume that $A_1 \neq \emptyset$. Then $\emptyset \neq A_1 \subset A_2 \subset \dots$ and, since $x$ is not a weak cut point, $\bigcup_n A_n = X \setminus \{x\}$. $\square$
In order to complete the definitions from Table 1 we list several examples in which P(n+1) holds while Pn fails. For simplicity of notation, we write P(n+1)\Pn.
EXAMPLE (P2\P1). Let $X$ be a dendroid constructed as follows: if $P = (0,0)$, $Q = (1,0)$, $A_n = (1+1/n, 1/n)$, $B_n = (1+1/n, -1/n)$ and $C_n = (0, -1/n)$ are points in $\mathbb{R}^2$, then the union of segments forms the desired dendroid

$$
X = PQ \cup \bigcup_n (PA_n \cup A_nB_n \cup B_nC_n).
$$
The point $Q$ is neither a weak cut point nor a point of colocal connectedness.

EXAMPLE (P3\P2). Any point of the vertical segment in the $\sin(\frac{1}{x})$-continuum is a non-block point and a weak cut point.
EXAMPLE (P4\P3). Let us denote by $C$ the Cantor middle third set, and let $Y \subset \mathbb{R}^2$ be the union of all segments $[p, c]$ connecting the point $p = (0, 1)$ with a point $c \in C \times \{0\}$. The continuum $Y$ is a special dendroid called the Cantor fan. Let $D_n = \{d_1^n, \dots, d_{m(n)}^n\} \subset C \times \{0\}$, $n = 1, 2, \dots$, be a finite $1/n$-net in $C \times \{0\}$ such that $D_i \cap D_j = \emptyset$ for $i \neq j$. We define a decomposition $\sigma$ of $Y$ whose nondegenerate elements consist of the finite sets

$$ \ell_\alpha \cap \bigcup_{i=1}^{m(n)} [p, d_i^n], \quad n \in \mathbb{N}, \ \alpha \in [1 - 1/n, 1), $$

where $\ell_\alpha$ denotes the horizontal line of points with second coordinate $\alpha \in \mathbb{R}$. The quotient space $X = Y/\sigma$ is a continuum, since $\sigma$ is an upper semi-continuous decomposition. The continuum $X$ is a dendroid as well. The point $p$ is a shore point but not a non-block point.
EXAMPLE (P5\P4). With the above notation, let

$$ \{(a_1^n, a_2^n): a_1^n \neq a_2^n \text{ for } n \in \mathbb{N} \text{ and } \{a_1^m, a_2^m\} \cap \{a_1^n, a_2^n\} = \emptyset \text{ for } m \neq n\} $$

be dense in $C \times C$. We define a decomposition $\tau$ of the Cantor fan $Y$ whose nondegenerate elements consist of pairs of points

$$ \ell_\alpha \cap \bigcup_{i=1}^{2} [p, a_i^n], \quad n \in \mathbb{N}, \ \alpha \in [1 - 1/n, 1). $$

The quotient space $X = Y/\tau$ is again a dendroid. The point $p$ is neither a strong center nor a shore point.

EXAMPLE (P6\P5). Any point of the common vertical segment of two $\sin(\frac{1}{x})$-continua is a non-cut point and a strong center.
There are easy examples of continua without P2-points. For example, the closure of the graph of the function

$$ \sin\left(\frac{1}{1-|x|}\right), \quad x \in (-1, 1) $$

has this property. In indecomposable continua there are no points of colocal connectedness (P1); in fact, there are no points with property P2 at all. On the other hand, every point of an indecomposable continuum is a non-block point (P3).
Let us briefly mention some results known from the literature related to the notions listed in Table 1. In arcwise connected continua there are points of colocal connectedness (P1) ([KM79, Corollary 3.8]); the same is true for continua with exactly two arc components ([KM79, Corollary 3.11]). Every nondegenerate hereditarily decomposable continuum $X$ contains at least one subcontinuum $K$ with empty interior at which the continuum $X$ is colocally connected ([KM79, Corollary 3.5]). Hence any point of $K$ is a non-block point of $X$ (P3). In particular, every nondegenerate hereditarily decomposable continuum contains a non-block point. We show that every nondegenerate continuum contains at least two such points (Corollary 2.8). Recently, using the results of Bing ([Bi48]), it has been proved that every nondegenerate continuum contains at least two shore points ([Le13]).

In what follows we are concerned with the Borel types of the sets of points listed in the table. We summarize our knowledge in the following proposition.
**PROPOSITION 2.4.** Let $X$ be a continuum. The following is true.

(i) The set of P1-points is of type $G_\delta$.

(ii) The set of P4-points is of type $G_\delta$.

(iii) The set of P5-points is of type $G_\delta$.

(iv) The set of P6-points is of type $F_{\sigma\delta}$.
**PROOF.** (i) Let $C$ be the set of all points of colocal connectedness. For every $n \in \mathbb{N}$ there is a cover $\mathcal{B}_n$ of $C$ by open sets of diameter less than $1/n$ whose complements in $X$ are connected. It holds that $C = \bigcap_n \bigcup \mathcal{B}_n$.

(ii) For $n \in \mathbb{N}$ let $G_n$ be the set of all points $p$ in $X$ for which there exists a $(1/n)$-dense continuum in $X \setminus \{p\}$. Then each $G_n$ is open and $\bigcap_{n=1}^\infty G_n$ is the set of all shore points in $X$.

(iii) Let $\mathcal{B}$ be a countable base of $X$. The set of all non-strong centers can be expressed as

$$ \bigcap_{A,B \in \mathcal{B}} \bigcup \{X \setminus K : K \cap A \neq \emptyset \neq K \cap B, \ K \text{ is a continuum}\}. $$

(iv) See [Wh42, Theorem 5.2]. $\square$
We complete Proposition 2.4 by three examples.

**EXAMPLE 2.5.** (i) The set of P2-points need not be Borel. In a dendroid $X$, a point $x$ is an end point (in the classical sense) if whenever $x \in \gamma$ for some arc $\gamma \subset X$, then $x$ is an end point of $\gamma$. Obviously, the set of P2-points in $X$ coincides with the set of all end points. The assertion follows from [NT90, Example 5], where the authors have found an example of a dendroid in which the set of all end points is co-analytic and not Borel.

(ii) The set of Pn-points, $n \in \{1, 4, 5, 6\}$, need not be of type $F_\sigma$. Let us denote by $X$ the Ważewski universal dendrite ([Wa23]), and by $E$ the set of all end points in $X$. As stated in the explanation of (i), the set $E$ coincides with the set of all P2-points. Since $X$ is locally connected, it follows from Proposition 2.2 that the sets of points with the properties P1-P6 coincide and hence are all equal to $E$. In particular, by Proposition 2.4(i) the set $E$ is of type $G_\delta$. It is known that $E$ is dense with empty interior in $X$, hence by the Baire category theorem $E$ cannot be of type $F_\sigma$.
(iii) The set of all non-cut points is in general not of type $G_\delta$. We only sketch our argument. Let $Q = \{q_n: n \ge 0\}$ be the set of all rational numbers from the interval $(-1, 1)$. Define the continuum $X$ as the closure of the graph of the function

$$ \sum_{n=0}^{\infty} \frac{1}{2^n} \sin \left( \frac{1}{x - q_n} \right), \quad x \in (-1, 1) \setminus Q. $$

Let us denote by $N$ the set of all non-cut points in $X$. Obviously $(x, y) \in N$ if and only if $x \in Q \cup \{-1, 1\}$; hence the set $N$ is dense and of the first category in $X$. By the Baire category theorem $N$ is not of type $G_\delta$.
It is of interest that the Borel complexity of the set of shore points is in general lower than that of the set of non-cut points. From this point of view the notion of a shore point is simpler than that of a non-cut point. Note that we still do not know the descriptive character of the set of non-block points, so we pose the following.

**QUESTION 2.1.** Is the set of non-block points Borel?
Our proof of Theorem 2.7 is based on the following result of Bing [Bi48, Theorem 5].

**THEOREM 2.6.** For each proper subset $R$ of the continuum $X$ there is a point $x$ of $X \setminus R$ such that the union of all continua that lie in $X \setminus \{x\}$ and intersect $R$ is dense in $X$.

We say that a subset $S$ of a continuum $X$ *spans* $X$ if no proper subcontinuum of $X$ contains $S$. The next theorem and its corollary generalize the fact that every nondegenerate continuum contains at least two non-cut points [Bo67, Le13].

**THEOREM 2.7.** Let $X$ be a continuum. Then the set of all non-block points spans $X$.

**PROOF.** To the contrary, let $A$ be a proper subcontinuum of $X$ containing all non-block points. By Theorem 2.6 there exists a point $x \in X \setminus A$ such that the union of all continua that lie in $X \setminus \{x\}$ and intersect $A$ is dense in $X$. For some decreasing sequence $(\varepsilon_n)_{n=1}^\infty$ of positive reals converging to zero let us denote by $B_n$ the open ball with center $x$ and radius $\varepsilon_n$; we can assume that $A \cap B_1 = \emptyset$. For each $n$, let $A_n$ be the component of $X \setminus B_n$ containing $A$. Since $B_n$ is open, the set $A_n$ is a continuum. Moreover, $A_n$ is a subset of $A_{n+1}$ for $n=1, 2, \dots$ and any continuum $C \subset X \setminus \{x\}$ intersecting $A$ is a subset of $A_n$ for each sufficiently large $n$. Hence by Theorem 2.6 the union $\bigcup_{n=1}^\infty A_n$ is dense in $X \setminus \{x\}$, i.e. the point $x$ is a non-block point. Moreover, $x \notin A$, which is a contradiction. $\square$

**COROLLARY 2.8.** Let $X$ be a nondegenerate continuum. Then $X$ contains at least two non-block points.
## 3. CHAINABLE CONTINUA

In this section our attention is focused on the class of chainable continua. We state and prove several results describing the various roles of the distinct kinds of non-cut points from Table 1. In our arguments we will repeatedly use the fact that chainable continua are hereditarily unicoherent ([Na92, Theorem 12.2]).

We start with two lemmas concerning the decomposability of a chainable continuum.
**LEMMA 3.1.** Let $X$ be a chainable continuum such that $X = K \cup L$ for two proper subcontinua $K$ and $L$ of $X$. Then every point of $K \cap L$ is a strong center.

**PROOF.** Let $p \in K \cap L$ and suppose that $p$ is not a strong center. Consider the nonempty open sets $X \setminus K$ and $X \setminus L$. Since $p$ is not a strong center, there is a continuum $M$ intersecting both of these sets but omitting $p$. It follows that

$$M \cup (K \cap L) = (M \cap K) \cup (M \cap L) \cup (K \cap L)$$

forms a weak triod. This contradicts the fact that chainable continua do not contain weak triods ([Na92, Corollary 12.7]). $\square$

Notice that the intersection $K \cap L$ from Lemma 3.1 can consist of non-cut points only; see our Example P6\P5 in Section 2.
A *shore set* in a continuum $X$ is a subset $A$ of $X$ such that, for each $\varepsilon > 0$, there exists a subcontinuum $Y$ of $X$ such that the Hausdorff distance from $Y$ to $X$ is less than $\varepsilon$ and $A \cap Y = \emptyset$. In [Il01, Na07, BMPV14] the authors have studied when, in dendroids (or $\lambda$-dendroids), the union of shore points (continua) is a shore set. In the case of chainable continua we deduce the following general result.

**PROPOSITION 3.2.** *The set of all shore points of a decomposable chainable continuum is a shore set.*
**PROOF.** Let $X$ be a decomposable chainable continuum and let $X = K \cup L$ for two proper subcontinua $K$ and $L$ of $X$. By Theorem 2.7 there are non-block points, hence also shore points, $p \in K \setminus L$ and $q \in L \setminus K$. Related to $p$, $q$ there are sequences of continua $A_n$ and $B_n$ which converge to $X$ and such that $p \notin A_n$ and $q \notin B_n$. We may suppose that all $A_n$ and $B_n$ contain $K \cap L$. We define $M_n = (A_n \cap K) \cup (B_n \cap L)$. The sequence $M_n$ converges to $X$. We prove that the complement of $\bigcup M_n$ consists of all shore points. Clearly any point of $X \setminus \bigcup M_n$ is a shore point. On the other hand, suppose for contradiction that there is a shore point $r \in M_n$ for some $n \in \mathbb{N}$. Without loss of generality we may suppose that $r \in K$; notice that by Lemma 3.1 and Table 1 even $r \in K \setminus L$. Since $r$ is a shore point and $K \cap M_n$ is a proper closed subset of $K$, it follows that there is a subcontinuum $C$ of $X$ such that $C \cap (K \setminus M_n)$ is nonempty, $C \cap L = L \cap B_n$ and $r \notin C$. One can easily verify that $C \cup M_n \cup L$ is a weak triod in $X$, which is a contradiction. Thus the set of all shore points of $X$ is a shore set. $\square$

In a nondegenerate continuum $X$, a point $p \in X$ is a *point of irreducibility* provided that for some point $q \in X \setminus \{p\}$ no proper subcontinuum of $X$ contains $\{p, q\}$. Clearly $p$ is a point of irreducibility if and only if the composant of $p$ is a proper subset of $X$ (compare [Na92, Theorem 11.2]).
The next two lemmas hold true in the general context. They will be useful when proving Proposition 3.5 and Corollary 3.6. The first is from [Na92, Corollary 11.19]. The second generalizes [Le13, Theorem 3].
|
| 244 |
+
|
| 245 |
+
**LEMMA 3.3.** Let $X$ be a nondegenerate continuum. The following two properties are equivalent.
|
| 246 |
+
|
| 247 |
+
(i) Every point $p \in X$ is a point of irreducibility of $X$.
|
| 248 |
+
|
| 249 |
+
(ii) $X$ is indecomposable.
|
| 250 |
+
|
| 251 |
+
**LEMMA 3.4.** Every point of irreducibility of a nondegenerate continuum $X$ is a non-block point.
PROOF. If $p$ is a point of irreducibility, then for some point $q \in X \setminus \{p\}$ no proper subcontinuum of $X$ contains $\{p,q\}$. This means that the composant $\kappa(q)$ of $q$ does not contain the point $p$. Since the composant $\kappa(q)$ is dense in $X$ and can be expressed as a union of countably many proper subcontinua, each of which contains $q$ ([Na92, Proposition 11.14]), the point $p$ is a non-block point. $\square$
The main statement of this section follows.
PROPOSITION 3.5. Let $X$ be a chainable continuum and let $p \in X$. The following properties of $p$ are equivalent.
(i) $p$ is a point of irreducibility.
(ii) $p$ is a non-block point.
(iii) $p$ is a shore point.
(iv) $p$ is not a strong center.
PROOF. By Lemma 3.4 (i) implies (ii). Moreover, (ii) implies (iii) and (iii) implies (iv) in general.
Let us prove (iv) implies (ii). If $X$ is an indecomposable continuum, every point is a point of irreducibility ([Na92, Theorem 11.18]), so (iv) implies (i), and (i) implies (ii) by Lemma 3.4. Thus we can assume that $X$ is decomposable. Let $X = K \cup L$, where $K$ and $L$ are two proper subcontinua of $X$, and let $p$ not be a strong center of $X$. From Lemma 3.1 it follows that $p \notin K \cap L$. Without loss of generality we may suppose that $p \in K \setminus L$. Let $\{B_n : n \in \mathbb{N}\}$ be a countable base of nonempty open subsets of $X \setminus \{p\}$. Since $p$ is not a strong center, for every $n \in \mathbb{N}$ there is a continuum $M_n$ intersecting $B_n$ and $X \setminus K$ such that $p \notin M_n$. It is enough to let $P_n = L \cup M_1 \cup \dots \cup M_n$. We deduce that $P_n$ is a continuum not containing $p$, $P_1 \subseteq P_2 \subseteq \dots$ and $P_n$ converges to $X$ in the Hausdorff metric. We have shown that $p$ is a non-block point, i.e. (iv) implies (ii).
It remains to prove that (ii) implies (i). In much the same way as above, let $p \in K \setminus L$. By Theorem 2.7 there is a non-block point $q \in L \setminus K$. We show that $X$ is irreducible between $p$ and $q$.
Suppose for contradiction that there is a proper subcontinuum $A$ of $X$ which contains both $p$ and $q$, and let $x$ be a strong center given by Lemma 3.1. At least one of the sets $K \setminus A$, $L \setminus A$ is nonempty. Assume the former possibility. Since $p$ is a non-block point, there is a sufficiently dense subcontinuum $B$ such that $p \notin B$, $x \in B$ and $(K \setminus A) \cap B$ is nonempty. It follows that $(L \cap A) \cup (K \cap A) \cup (K \cap B)$ forms a weak triod, which is a contradiction.
Thus $X$ is irreducible between the points $p$ and $q$ and hence $p$ is a point of irreducibility. $\square$
Combining Proposition 3.5 and Lemma 3.3 we deduce the following.
COROLLARY 3.6. Let X be a chainable continuum. The following properties are equivalent.
(i) Each point in X is a non-block point.
(ii) X is indecomposable.
REMARK 3.7. By Proposition 3.5 the property P3 in Corollary 3.6 can be replaced by P4 or P5. Let $X$ be an arc of pseudoarcs ([Le85]). Then each point of $X$ is a non-cut point, while at the same time $X$ is decomposable; hence Corollary 3.6 does not hold for P6.
Let $X$ be a chainable continuum. A point $x \in X$ is called an *end point* of $X$ provided that for every $\varepsilon > 0$ there is an $\varepsilon$-chain $B_1, \dots, B_n$ covering $X$ such that $x \in B_1$. An end point in a chainable continuum need not fulfill the (classical) definition presented in Section 2 before Proposition 2.4. In [Do94] it has been shown that the cardinality of the set of end points of a chainable continuum can be any cardinal number from $\{0, 1, \dots, \aleph_0, c\}$. In particular, it is known that the Buckethandle continuum is chainable and has exactly one end point ([Do08]). Gluing two Buckethandle continua together at their end points we obtain a chainable continuum with no end point.
There is a classical characterization of end points in chainable continua ([Do08, p. 32]). We recall two descriptions in the following statement.
PROPOSITION 3.8. For a point $x$ of a nondegenerate chainable continuum $X$ the following conditions are equivalent.
(i) $x$ is an end point of $X$.
(ii) Each nondegenerate subcontinuum of X containing x is irreducible between x and some other point.
(iii) If there are two subcontinua of X containing x, one of them contains the other.
From the above characterization of an end point in a chainable continuum and our Proposition 3.5 we conclude the following.
PROPOSITION 3.9. Let $X$ be a chainable continuum and let $p \in X$. Then the following are equivalent.
(i) *p* is an end point.
(ii) *p* is a non-block point of every subcontinuum of X which contains p.
**PROOF.** This follows from the equivalence of (i) and (ii) in Proposition 3.8 combined with the equivalence of (i) and (ii) in Proposition 3.5. $\square$
A point $x$ in a chainable continuum $X$ is called an *absolute end point*, provided that whenever $X$ is irreducible between $p$ and $q$, then either $x = p$ or $x = q$. By the definition there are at most two absolute end points in a chainable continuum. The notion of an absolute end point in chainable continua was introduced in [Ro88], where a number of equivalent characterizations were proved. We choose only the following one. A point $x$ is an absolute end point if and only if $x$ is a point of irreducibility and $X$ is locally connected at $x$ ([Ro88, Theorem 1.0]). We note that being locally connected at a point $x$ of a chainable continuum is the same as being connected im kleinen at $x$ ([Ro88, Theorem 1.7]).
It is easy to show that a point of order one in a chainable continuum is an absolute end point. The converse need not be true. For example the two end points of the arcless arc ([BPV13]) are absolute end points, but these are not of order one. This suggests the following notion. A continuum $X$ is said to be *rim-connected* at a point $x$ if there are arbitrarily small neighborhoods of $x$ whose boundaries are connected. From the Boundary bumping theorem we easily deduce that if a continuum $X$ is rim-connected at $x$, then $X$ is locally as well as colocally connected at $x$. We give two other characterizations of an absolute end point. One of them is based on Table 1 from Section 2, the other uses the notion of rim-connectedness. These results rely on the following lemma.
LEMMA 3.10. Let $x$ be a point of irreducibility of a continuum $X$. Then the following are equivalent.
(i) $x$ is a point of local connectedness.
(ii) $x$ is a point of colocal connectedness.
**PROOF.** (i) $\implies$ (ii). Let $X$ be irreducible between $x$ and $y$. Let $U$ be any neighborhood of $x$. There is an open connected neighborhood $V$ of $x$ whose closure is a subset of $U$ and which avoids $y$. Let $K$ be the component of $X \setminus V$ which contains $y$. Clearly $K$ is a continuum intersecting the boundary of $V$ and hence $K \cup \text{cl}(V)$ is a continuum. Since it contains $x$ and $y$, we conclude that it is equal to $X$. Hence $X \setminus K$ is a neighborhood of $x$ whose complement is connected.
(ii) $\implies$ (i). Let $X$ be irreducible between $x$ and $y$. Let $U$ be any neighborhood of $x$. In $U$ there is an open neighborhood $V$ of $x$ avoiding $y$, whose complement is connected. Let us denote by $C$ the component of the point $x$ in $V$ and let us denote by $K$ the closure of $C$. Clearly $K$ is a continuum intersecting the boundary of $X \setminus V$. We get that $K \cup (X \setminus V)$ is a continuum containing $x$ and $y$ and thus it equals $X$. It follows that $K$ contains $V$ and thus $C = V$ is an open connected neighborhood of $x$ contained in $U$. $\square$
**PROPOSITION 3.11.** The following are equivalent for a point $x$ in a chainable continuum $X$.
(i) *x* is an absolute end point.
(ii) *x* is a point of rim-connectedness.
(iii) *x* is a point of colocal connectedness.
**PROOF.** (i) $\implies$ (ii). Let $x$ be an absolute end point. By [Ro88, Theorem 1.0] $x$ is a point of irreducibility at which $X$ is locally connected. Thus there is $y \in X$ such that $X$ is irreducible between $x$ and $y$. Let $U$ be any neighborhood of $x$ whose closure does not contain $y$. There is a connected open neighborhood $V \subseteq U$ of the point $x$. Let $K$ be the closure of $V$. Define $S$ to be the union of all subcontinua of $X \setminus K$ containing the point $y$. Let $L$ be the closure of $S$. First, we claim that $K \cap L \neq \emptyset$. Suppose to the contrary that this is not the case. Then we can find an open set $W$ such that $L \subseteq W \subseteq \text{cl}(W) \subseteq X \setminus K$. By the Boundary Bumping Theorem ([Na92, Theorem 5.4]) the component $C$ of the set $\text{cl}(W)$ containing the point $y$ intersects the boundary of $W$. Since $C$ is disjoint from $K$ we get that $C \subseteq S$ and hence $C \subseteq L$. Thus $L$ intersects the boundary of $W$, which contradicts the fact that $L \subseteq W$ and $W$ is open.
It follows that $K \cup L$ is a continuum containing both $x$ and $y$ and thus $X = K \cup L$. Let $B$ be the boundary of $K$; we want to show that $B = K \cap L$. Clearly

$$
B = K \cap \text{cl}(X \setminus K) \subseteq K \cap \text{cl}(L) = K \cap L
$$

because $K \cup L = X$. For the opposite inclusion suppose that $z \in K \cap L$ is arbitrary. By the definition of $L$ there is a sequence of points $z_n \in S$ converging to $z$. Since $S$ is disjoint from $K$ it follows that $z \in B$.
By the unicoherence of $X$, the set $B = K \cap L$ is connected. Since the neighborhood $U$ was arbitrary we get that $x$ is a point of rim-connectedness.
(ii) $\implies$ (iii). This implication holds in general.
(iii) $\implies$ (i). Let $x$ be a point of colocal connectedness. In order to show that $x$ is an absolute end point it is enough to show that it is a point of irreducibility at which $X$ is locally connected ([Ro88, Theorem 1.0]). We have shown in Section 2 that $x$ is a non-block point (P1 implies P3) and thus it is a point of irreducibility by Proposition 3.5. Then from Lemma 3.10 we conclude that $x$ is a point of local connectedness of $X$ and hence $x$ is an absolute end point of $X$. $\square$
### 4. CIRCLE-LIKE CONTINUA
In this section we investigate the class of circle-like continua. The main tool of our approach is the use of inverse limits. Our main result is formulated in Theorem 4.5.
Let $\mathcal{P}$ be a collection of compact metric spaces. We say that a continuum $X$ is $\mathcal{P}$-like provided that for each $\varepsilon > 0$ there is a continuous map $f$ from $X$ onto some member of $\mathcal{P}$ such that $\operatorname{diam} f^{-1}(f(x)) < \varepsilon$ for each $x \in X$. In particular, if $\mathcal{P}$ consists of an arc (resp. a simple closed curve), then $X$ is called arc-like (resp. circle-like).
The next general result can be found for example in [Na92, Theorem 2.13].
PROPOSITION 4.1. A continuum $X$ is $\mathcal{P}$-like if and only if $X$ is an inverse limit $\varprojlim\{X_i, f_i\}$, where all the coordinate spaces $X_i$ are chosen from $\mathcal{P}$ and each bonding map $f_i: X_{i+1} \to X_i$ is continuous and onto.
It is known that the classes of arc-like and chainable continua coincide ([Na92, Theorem 12.11]). Some continua are both arc-like and circle-like, see for example the Buckethandle continuum ([Na92, 12.48]). In this section we deal with circle-like continua from the point of view of our Table 1.
For a continuum $X$ and a continuous map $f: X \to X$ we say that $f$ is *weakly confluent* if for any subcontinuum $K \subset X$ there exists a component $L$ of $f^{-1}(K)$ such that $f(L) = K$.
Let $\mathbb{S}^1 = \{z \in \mathbb{C} : |z| = 1\}$. Consider a continuous map $f: \mathbb{S}^1 \to \mathbb{S}^1$ of degree $\deg(f) \in \mathbb{Z}$. Let $F: \mathbb{R} \to \mathbb{R}$ be a lifting of $f$, i.e. the continuous map for which
$$ (4.1) \qquad \varphi \circ F = f \circ \varphi \text{ on } \mathbb{R}, $$
where $\varphi: \mathbb{R} \to \mathbb{S}^1$ is defined as $\varphi(x) = e^{2\pi ix}$. Then
$$ (4.2) \qquad F(x+1) = F(x) + \deg(f) \text{ for each } x \in \mathbb{R}. $$
In particular, if the degree $\deg(f)$ is nonzero, the map $F$ is onto. Note that any map $F+m$, $m \in \mathbb{Z}$, is also a lifting of $f$.
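To illustrate the lifting relations (our example, not part of the original text): for the power map $f(z) = z^k$ with $k \in \mathbb{Z}$, the map $F(x) = kx$ is a lifting, since

$$ \varphi(F(x)) = e^{2\pi i k x} = \left(e^{2\pi i x}\right)^k = f(\varphi(x)), $$

and (4.2) holds with $\deg(f) = k$, because $F(x+1) = kx + k = F(x) + k$.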
We start with a lemma providing an important ingredient of our construction.
LEMMA 4.2. (i) Any nonzero degree continuous self-map of the unit circle is weakly confluent.
(ii) Any continuous onto map $f: I \to J$, where $I, J$ are intervals, is weakly confluent.
PROOF. (i) Let $F: \mathbb{R} \to \mathbb{R}$ be a lifting of $f$. By our assumption on the degree, the map $F$ is onto.
Let $K$ be an arc in $\mathbb{S}^1$. Then $\varphi^{-1}(K) = [a,b] + \mathbb{Z}$ for some interval $[a,b] \subset \mathbb{R}$, $0 < b - a < 1$. Since $F$ is continuous and onto, there exist points $x, y \in \mathbb{R}$ such that $F(\{x,y\}) = \{a,b\}$ and each point $t$ between $x$ and $y$ is mapped by $F$ into $(a,b)$. Moreover, from (4.2) we conclude $|x-y| < 1$. Let $J \subset \mathbb{R}$ be the interval with end points $x$ and $y$. Then $L' = \varphi(J)$ is an arc in $\mathbb{S}^1$ and using (4.1) we deduce $f(L') = \varphi(F(J)) = \varphi([a,b]) = K$. So, if $L \subset \mathbb{S}^1$ is the component of $f^{-1}(K)$ containing $L'$, then also $f(L) = K$ and $f$ is weakly confluent.

(ii) We leave the proof to the reader. $\square$
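For completeness we record one standard argument for part (ii); the sketch below is ours and is not part of the original proof. Let $K = [c,d] \subseteq J$ and choose $a \in f^{-1}(c)$ and $b \in f^{-1}(d)$, say $a < b$. Put

$$ a' = \max\{t \in [a,b] : f(t) = c\}, \qquad b' = \min\{t \in [a',b] : f(t) = d\}. $$

By the intermediate value theorem and the extremal choice of $a'$ and $b'$ we get $f([a',b']) = [c,d]$, so the component of $f^{-1}(K)$ containing $[a',b']$ is mapped onto $K$.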
Let $X$ be a circle-like continuum. By Proposition 4.1 the continuum $X$ can be expressed as an inverse limit
$$ (4.3) \quad X = \varprojlim\{\mathbb{S}^1, f_i\} = \{(x_i)_{i=1}^\infty : f_i(x_{i+1}) = x_i \text{ for each } i \in \mathbb{N}\}. $$
The space $X$ will be equipped with the metric
$$ d(x, y) = \sum_{i=1}^{\infty} \frac{\rho(x_i, y_i)}{2^i}, $$
where $\rho(x_i, y_i)$ denotes the Euclidean distance of $x_i, y_i \in \mathbb{S}^1$. For $n \in \mathbb{N}$ let $X_n = \{(x_i)_{i=1}^n : (x_i)_{i=1}^\infty \in X\}$ be a metric space endowed with the metric $d_n(x,y) = \sum_{i=1}^n \frac{\rho(x_i, y_i)}{2^i}$. Let $\mathcal{H}_d$, resp. $\mathcal{H}_{d_n}$, be the Hausdorff metric induced by $d$ on $X$, resp. by $d_n$ on $X_n$.
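To have a concrete instance of this setting in mind (our illustration, not part of the original text): taking every bonding map in (4.3) to be the squaring map $f_i(z) = z^2$ yields the dyadic solenoid, a circle-like continuum all of whose bonding maps have degree $2$, so the nonzero-degree hypothesis of the results below is satisfied for it.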
LEMMA 4.3. Suppose that each bonding map in (4.3) has nonzero degree. Fix a point $x = (x_i)_{i=1}^\infty \in X$. Then there is a countable set
$$ \{K_i^j : i \in \mathbb{N}, j \in \{1, \dots, i\}\} $$
of arcs in $\mathbb{S}^1$ satisfying (let $f_{j(i-1)} = f_j \circ \dots \circ f_{i-1}$ for each $i > 1$ and $1 \le j < i-1$, $f_{(i-1)(i-1)} = f_{i-1}$, $f_{i(i-1)} = \mathrm{id}$)
(i) $K_i^i \subset \mathbb{S}^1 \setminus \{x_i\}$ for $i \in \mathbb{N}$,
(ii) $f_i(K_{i+1}^j) = K_i^j$ for $i \in \mathbb{N}$ and $j \in \{1, \dots, i\}$,
(iii) $K_i^i \supset K_i^{i-1} \supset \dots \supset K_i^1$ for $i \in \mathbb{N}$,
(iv) $f_{j(i-1)}(K_i^i) \supset K_j^j$ for each $i > 1$ and $1 \le j < i$,
(v) $\mathcal{H}_{d_1}(K_1^1, X_1) < 1$,
(vi) For each $i > 1$,
$$ \mathcal{H}_{d_i}(X_i \cap \prod_{j=1}^{i} f_{j(i-1)}(K_i^i), X_i) < 1/2^{i-1}. $$
PROOF. In the construction of $K_i^j$ we repeatedly use the fact that the bonding maps $f_i$ are continuous, onto and of nonzero degree, and apply Lemma 4.2. We proceed by induction. In the $i$th step, we choose arcs $K_i^{i-1}, K_i^{i-2}, \dots, K_i^1, K_i^i$ (in the written order):
STEP 1. We choose an arc $K_1^1 \subset \mathbb{S}^1 \setminus \{x_1\}$ fulfilling the property (v).
STEP 2. With the help of Lemma 4.2(i) we choose an arc $K_2^1 \subset \mathbb{S}^1$ such that $f_1(K_2^1) = K_1^1$. Since $f_1(x_2) = x_1 \notin K_1^1$, the arc $K_2^1$ does not contain the point $x_2$, and hence there exists an arc $K_2^2 \subset \mathbb{S}^1 \setminus \{x_2\}$ such that $K_2^2 \supset K_2^1$ (iii), $f_1(K_2^2) \supset K_1^1$ (iv) and (vi) is fulfilled for $i=2$.
STEP $i+1$. Let us assume that the arcs $K_i^i, K_i^{i-1}, \dots, K_i^1$ fulfilling (i)-(vi) have already been defined. Using Lemma 4.2 we can choose arcs $K_{i+1}^j$, $j \in \{1, \dots, i\}$ satisfying (ii) and (iii). Since the arc $K_{i+1}^i$ does not contain the point $x_{i+1}$ ($f_i(K_{i+1}^i) = K_i^i$ and $x_i \notin K_i^i$ by (i)), there exists an arc $K_{i+1}^{i+1}$ (the length of which is sufficiently close to $2\pi$) such that all the properties (i),(iii),(iv) and (vi) are satisfied.
This finishes our construction of the arcs $K_i^j$ satisfying (i)-(vi). □
For each $n \in \mathbb{N}$, let
$$L_i^n = f_{i(n-1)}(K_n^n) \text{ if } 1 \le i < n, L_i^n = K_i^n \text{ for } i \ge n.$$
The key proposition follows.
PROPOSITION 4.4. Let $X$ be a circle-like continuum such that each bonding map in (4.3) has nonzero degree.
(i) For each $n \in \mathbb{N}$, the set
$$A_n = \varprojlim \{L_i^n, f_i\}$$
is a subcontinuum of X. Moreover, $A_n \subset A_{n+1}$.
(ii) $\bigcup_n A_n \subset X \setminus \{x\}$,
(iii) $\bigcup_n A_n$ is dense in X.
PROOF. (i) By our definition of the arcs $L_i^n$ we conclude $f_i(L_{i+1}^n) = L_i^n$ for each $i \in \mathbb{N}$. Thus, the set $A_n$ is well defined for each $n \in \mathbb{N}$ and it is a subcontinuum of $X$. The inclusion $A_n \subset A_{n+1}$ follows directly from properties (iii) and (iv) of Lemma 4.3.
(ii) From property (i) of Lemma 4.3 we conclude $x_n \notin L_n^n = K_n^n$, hence $x \notin A_n$ for each $n$. It implies $x \notin \bigcup_n A_n$.
(iii) From the properties (v), (vi) of Lemma 4.3 we deduce
$$
\begin{align*}
\mathcal{H}_d(A_n, X) &\le \mathcal{H}_{d_n}(X_n \cap \prod_{j=1}^n f_{j(n-1)}(K_n^n), X_n) + \sum_{i=n+1}^\infty \frac{2}{2^i} \\
&< \frac{1}{2^{n-1}} + \frac{1}{2^{n-1}} = \frac{1}{2^{n-2}}.
\end{align*}
$$
□
Using the above construction and Proposition 4.4 we conclude the following.
**THEOREM 4.5.** Let X be a circle-like continuum. Then every point $x \in X$ is a non-block point.
PROOF. If $X$ is also arc-like, then by [Bi62, p. 121] the continuum $X$ is indecomposable and the conclusion follows from Corollary 3.6. So in what follows we assume that $X$ is not arc-like. Then by [Ma05, Theorems 2.5.9-10] each bonding map in (4.3) can be assumed to have a positive degree, so Proposition 4.4 can be applied: given $x \in X$, it provides an increasing sequence of subcontinua $A_n \subset X \setminus \{x\}$ whose union is dense in $X$, hence $x$ is a non-block point. $\square$
We have proved that each point of a circle-like continuum has the property P3 from our Table 1. On the other hand, there are circle-like continua in which no point has the property P2, for example the circle of pseudoarcs is such a continuum ([BJ59]).
REFERENCES
[Bi48] R. H. Bing, *Some characterizations of arcs and simple closed curves*, Amer. J. Math. **70** (1948), 497–506.
[Bi62] R. H. Bing, *Embedding circle-like continua in the plane*, Canad. J. Math. **14** (1962), 113–128.
[BJ59] R.H. Bing and F.B. Jones, *Another homogeneous plane continuum*, Trans. Amer. Math. Soc. **90** (1959), 171–192.
[BMPV14] J. Bobok, R. Marciña, P. Pyrih and B. Vejnar, *Union of shore sets in a dendroid*, Topology Appl. **161** (2014), 206–214.
[BPV13] J. Bobok, P. Pyrih and B. Vejnar, *Half-homogeneous chainable continua with end points*, Topology Appl. **160** (2013), 1066–1073.
[Bo67] K. Borsuk, *Theory of retracts*, Polish Scientific Publishers, Warsaw, 1967.
[Do94] J. Doucet, *Cardinality, completeness, and decomposability of sets of endpoints of chainable continua*, Topology Appl. **60** (1994), 41–59.
[Do08] J. Doucet, *Sets of endpoints of chainable continua*, Topology Proc. **32** (2008), 31–35.
[Gr81] E. E. Grace, *Aposyndesis and weak cutting*, in: General topology and modern analysis, Academic Press, New York, 1981, 71–82.
[ELV12] R. Escobedo, M. de Jesús López and H. Villanueva, *Nonblockers in hyperspaces*, Topology Appl. **159** (2012), 3614–3618.
[Il01] A. Illanes, *Finite unions of shore sets*, Rend. Circ. Mat. Palermo (2) **50** (2001), 483–498.
[IKr11] A. Illanes and P. Krupski, *Blockers in hyperspaces*, Topology Appl. **158** (2011), 653–659.
[KM79] J. Krasinkiewicz and P. Minc, *Continua and their open subsets with connected complements*, Fund. Math. **102** (1979), 129–136.
[Le13] R. Leonel, *Shore points of a continuum*, Topology Appl. **161** (2014), 433–441.
[Le85] W. Lewis, *Continuous curves of pseudo-arcs*, Houston J. Math. **11** (1985), 91–99.
[Ma05] S. Macias, *Topics on continua*, Chapman and Hall/CRC, Boca Raton, 2005.
[Na92] S. B. Nadler, *Continuum theory. An introduction*, Marcel Dekker, New York, 1992.
[Na07] V. C. Nall, *Centers and shore points of a dendroid*, Topology Appl. **154** (2007), 2167–2172.
[NT90] J. Nikiel and E. D. Tymchatyn, *Sets of end-points and ramification points in dendroids*, Fund. Math. **138** (1991), 139–146.
[PV12] P. Pyrih and B. Vejnar, *A lambda-dendroid with two shore points whose union is not a shore set*, Topology Appl. **159** (2012), 69–74.
[Ro88] I. Rosenholtz, *Absolute endpoints of chainable continua*, Proc. Amer. Math. Soc. **103** (1988), 1305–1314.
[Wa23] T. Wazewski, *Sur un continu singulier*, Fundamenta Mathematicae **4** (1923), 214–245.
[Wh39] G. T. Whyburn, *Semi-locally connected sets*, Amer. J. Math. **61** (1939), 733–749.
[Wh42] G. T. Whyburn, *Analytic topology*, American Mathematical Society, New York, 1942.
J. Bobok
Faculty of Civil Engineering
Czech Technical University in Prague

P. Pyrih
Faculty of Mathematics and Physics
Charles University in Prague
118 00 Prague
Czech Republic

B. Vejnar
Faculty of Mathematics and Physics
Charles University in Prague
118 00 Prague
Czech Republic

*E-mail:* vejnar@karlin.mff.cuni.cz

*Received:* 14.8.2014.

*Revised:* 18.2.2015.
# ASYMPTOTIC BEHAVIOR OF COUPLED INCLUSIONS WITH VARIABLE EXPONENTS
PETER E. KLOEDEN*

Mathematisches Institut, Universität Tübingen
D-72076 Tübingen, Germany

JACSON SIMSEN

Instituto de Matemática e Computação, Universidade Federal de Itajubá
Av. BPS n. 1303, Bairro Pinheirinho, 37500-903, Itajubá - MG - Brazil

PETRA WITTBOLD

Fakultät für Mathematik, Universität Duisburg-Essen
Thea-Leymann-Str. 9, 45127 Essen, Germany

*(Communicated by Alain Miranville)*
**ABSTRACT.** This work concerns the study of asymptotic behavior of the solutions of a nonautonomous coupled inclusion system with variable exponents. We prove the existence of a pullback attractor and that the system of inclusions is asymptotically autonomous.
**1. Introduction.** Nonlinear reaction-diffusion equations have been studied extensively in recent years and special attention has been given to coupled reaction-diffusion equations from various fields of applied sciences arising from epidemics, biochemistry and engineering [18]. Reaction-diffusion systems are naturally applied in chemistry, where the most common application describes the change in space and time of the concentration of one or more chemical substances. One interest in chemical kinetics is the construction of mathematical models that can describe the characteristics of a chemical reaction. Mathematical models for electrorheological fluids were considered in [19, 20, 21], where variable exponents appear in the diffusion term (see also [7, 9]). Reaction-diffusion systems can be perturbed by discontinuous nonlinear terms, which leads to the study of differential inclusions rather than differential equations, for example, evolution differential inclusion systems with positively sublinear upper semicontinuous multivalued reaction terms *F* and *G* (see [6]).
2000 Mathematics Subject Classification. Primary: 35B40, 35B41, 35K57; Secondary: 35K55, 35K92.
**Key words and phrases.** Pullback attractor, reaction-diffusion coupled systems, variable exponents, asymptotically autonomous problems.
|
| 30 |
+
|
| 31 |
+
This work was initiated when the second author was supported with CNPq scholarship - process 202645/2014-2 (Brazil). The first author was supported by Chinese NSF grant 11571125. The second author was partially supported by the Brazilian research agency FAPEMIG process PPM 00329-16.
|
| 32 |
+
|
| 33 |
+
* Corresponding author.
---PAGE_BREAK---

This work concerns the coupled system of inclusions:

$$
(S) \quad \left\{
\begin{array}{ll}
\dfrac{\partial u_1}{\partial t} - \operatorname{div}(D_1(t, \cdot)|\nabla u_1|^{p(\cdot)-2}\nabla u_1) + |u_1|^{p(\cdot)-2}u_1 \in F(u_1, u_2), & t > \tau, \\
\\
\dfrac{\partial u_2}{\partial t} - \operatorname{div}(D_2(t, \cdot)|\nabla u_2|^{q(\cdot)-2}\nabla u_2) + |u_2|^{q(\cdot)-2}u_2 \in G(u_1, u_2), & t > \tau, \\
\\
\dfrac{\partial u_1}{\partial n}(t,x) = \dfrac{\partial u_2}{\partial n}(t,x) = 0 & \text{on } \partial\Omega, \\
\\
(u_1(\tau), u_2(\tau)) = (u_{0,1}, u_{0,2}) \text{ in } L^2(\Omega) \times L^2(\Omega), &
\end{array}
\right.
$$

on a bounded domain $\Omega \subset \mathbb{R}^n$, $n \ge 1$, with smooth boundary, where $F$ and $G$ are bounded, upper semicontinuous and positively sublinear multivalued maps and the exponents $p(\cdot), q(\cdot) \in C(\bar{\Omega})$ satisfy

$$
p^+ := \max_{x \in \bar{\Omega}} p(x) > p^- := \min_{x \in \bar{\Omega}} p(x) > 2, \quad q^+ := \max_{x \in \bar{\Omega}} q(x) > q^- := \min_{x \in \bar{\Omega}} q(x) > 2.
$$

In addition, the diffusion coefficients $D_1, D_2$ are assumed to satisfy:

**Assumption D.** $D_1, D_2 : [\tau, T] \times \Omega \to \mathbb{R}$ are functions in $L^\infty([\tau, T] \times \Omega)$ satisfying:
(i) There is a positive constant $\beta$ such that $0 < \beta \le D_i(t, x)$ for almost all $(t, x) \in [\tau, T] \times \Omega$, $i = 1, 2$.
(ii) $D_i(t, x) \ge D_i(s, x)$ for a.a. $x \in \Omega$ and all $t \le s$ in $[\tau, T]$, $i = 1, 2$.

In this work we extend the results in [15] for a single inclusion to the case of a coupled inclusion system. We prove that the strict generalized process (see Definition 2.7 in Section 2) defined by (S) possesses a pullback attractor; moreover, we prove that the system (S) is in fact asymptotically autonomous. The work draws on a collection of ideas and results from recent, distinct previous works [15, 22, 23, 27] of the authors, which are applied here to a new problem to yield interesting new results. In contrast with [13, 14, 15], where an equation and a single inclusion of this type were considered, the coupled system cannot be treated in the same way as the single case: the principal additional technical difficulty is to adapt the results to two coupled inclusions, and this difficulty appears mainly in the proof of dissipativity.

The paper is organized as follows. In Section 2 we provide some definitions and results on the existence of global solutions and on generalized processes. In Section 3 we prove the existence of the pullback attractor for the system (S). In Section 4 we make some remarks on forward attraction, and in the last section we prove that the system (S) is asymptotically autonomous.

**2. Preliminaries, existence of global solutions and generalized processes.**

Consider now the system (S) in the following abstract form

$$
(S2) \quad \left\{
\begin{array}{ll}
\dfrac{du}{dt}(t) + A(t)u(t) \in F(u(t), v(t)), & t > \tau \\
\\
\dfrac{dv}{dt}(t) + B(t)v(t) \in G(u(t), v(t)), & t > \tau \\
(u(\tau), v(\tau)) = (u_0, v_0) \in H \times H,
\end{array}
\right.
$$

where $F$ and $G$ are bounded, upper semicontinuous and positively sublinear multivalued maps (see Definitions 2.4, 2.3 and 2.5, respectively) and, for each $t > \tau$, $A(t)$ and $B(t)$ are univalued maximal monotone operators of subdifferential type in a real separable Hilbert space $H$. Specifically, $A(t) = \partial\varphi^t$ and $B(t) = \partial\psi^t$ for
---PAGE_BREAK---

nonnegative mappings $\varphi^t$, $\psi^t$ with $\partial\varphi^t(0) = \partial\psi^t(0) = 0$, $\forall t \in \mathbb{R}$, and the mappings $\varphi^t$, $\psi^t$ satisfy:

**Assumption A.** Let $T > \tau$ be fixed.

(A.1) There is a set $Z \subset (\tau, T]$ of zero measure such that $\phi^t$ is a lower semicontinuous proper convex function from $H$ into $(-\infty, \infty]$ with a nonempty effective domain for each $t \in [\tau, T] \setminus Z$.

(A.2) For any positive integer $r$ there exist a constant $K_r > 0$, an absolutely continuous function $g_r : [\tau, T] \to \mathbb{R}$ with $g'_r \in L^\beta(\tau, T)$ and a function of bounded variation $h_r : [\tau, T] \to \mathbb{R}$ such that if $t \in [\tau, T] \setminus Z$, $w \in D(\phi^t)$ with $|w| \le r$ and $s \in [t, T] \setminus Z$, then there exists an element $\tilde{w} \in D(\phi^s)$ satisfying

$$
\begin{align*}
|\tilde{w} - w| &\le |g_r(s) - g_r(t)|(\phi^t(w) + K_r)^{\alpha}, \\
\phi^s(\tilde{w}) &\le \phi^t(w) + |h_r(s) - h_r(t)|(\phi^t(w) + K_r),
\end{align*}
$$

where $\alpha$ is some fixed constant with $0 \le \alpha \le 1$ and

$$
\beta := \begin{cases} 2 & \text{if } 0 \le \alpha \le \frac{1}{2}, \\ \frac{1}{1-\alpha} & \text{if } \frac{1}{2} \le \alpha \le 1. \end{cases}
$$

Let us first review some concepts and results from the literature which will be useful in the sequel. We refer the reader to [2, 3, 29] for more details on multivalued analysis.

**2.1. Setvalued mappings.** Let $X$ be a real Banach space and $M$ a Lebesgue measurable subset of $\mathbb{R}^q$, $q \ge 1$.

**Definition 2.1.** The map $G : M \to P(X)$ is called measurable if for each closed subset $C$ in $X$ the set

$$
G^{-1}(C) = \{y \in M; G(y) \cap C \neq \emptyset\}
$$

is Lebesgue measurable.

If $G$ is a univalued map, the above definition is equivalent to the usual definition of a measurable function.

**Definition 2.2.** By a selection of $E: M \to P(X)$ we mean a function $f: M \to X$ such that $f(y) \in E(y)$ a.e. $y \in M$, and we denote by $\operatorname{Sel} E$ the set

$$
\operatorname{Sel} E \doteq \{ f : M \to X;\ f \text{ is a measurable selection of } E \}.
$$

**Definition 2.3.** Let $U$ be a topological space. A mapping $G : U \to P(X)$ is called upper semicontinuous [weakly upper semicontinuous] at $u \in U$ if

(i) $G(u)$ is nonempty, bounded, closed and convex;

(ii) for each open subset [open set in the weak topology] $D$ in $X$ satisfying $G(u) \subset D$, there exists a neighborhood $V$ of $u$ such that $G(v) \subset D$ for each $v \in V$.

If $G$ is upper semicontinuous [weakly upper semicontinuous] at each $u \in U$, then it is called upper semicontinuous [weakly upper semicontinuous] on $U$.
---PAGE_BREAK---

**Definition 2.4.** $F, G: H \times H \rightarrow P(H)$ are said to be bounded if, whenever $B_1, B_2 \subset H$ are bounded, the sets $F(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} F(u,v)$ and $G(B_1, B_2) = \bigcup_{(u,v) \in B_1 \times B_2} G(u,v)$ are bounded in $H$.

In order to obtain global solutions we impose the following condition on the terms $F$ and $G$.

**Definition 2.5 ([24]).** The pair $(F,G)$ of maps $F, G: H \times H \to P(H)$, which take bounded subsets of $H \times H$ into bounded subsets of $H$, is called positively sublinear if there exist $a > 0$, $b > 0$, $c > 0$ and $m_0 > 0$ such that, for each $(u,v) \in H \times H$ with $\|u\| > m_0$ or $\|v\| > m_0$ for which either there exists $f_0 \in F(u,v)$ satisfying $\langle u, f_0 \rangle > 0$ or there exists $g_0 \in G(u,v)$ with $\langle v, g_0 \rangle > 0$, both

$$ \|f\| \le a\|u\| + b\|v\| + c \quad \text{and} \quad \|g\| \le a\|u\| + b\|v\| + c $$

hold for each $f \in F(u,v)$ and each $g \in G(u,v)$.

**2.2. Generalized processes.**

In order to study the asymptotic behavior of the solutions of the system (S) we will work with a multivalued process defined by a generalized process. We review these concepts, which were considered in [22, 23] and can be used in the study of infinite dimensional dynamical systems.

**Definition 2.6.** Let $(X, \rho)$ be a complete metric space. A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ on $X$ is a family of sets $\mathcal{G}(\tau)$ of maps $\varphi : [\tau, \infty) \to X$ satisfying the conditions:

(C1) For each $\tau \in \mathbb{R}$ and $z \in X$ there exists at least one $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$;

(C2) If $\varphi \in \mathcal{G}(\tau)$ and $s \ge 0$, then $\varphi^{+s} \in \mathcal{G}(\tau + s)$, where $\varphi^{+s} := \varphi|_{[\tau+s,\infty)}$;

(C3) If $\{\varphi_j\}_{j \in \mathbb{N}} \subset \mathcal{G}(\tau)$ and $\varphi_j(\tau) \to z$, then there exist a subsequence $\{\varphi_\mu\}_{\mu \in \mathbb{N}}$ of $\{\varphi_j\}_{j \in \mathbb{N}}$ and $\varphi \in \mathcal{G}(\tau)$ with $\varphi(\tau) = z$ such that $\varphi_\mu(t) \to \varphi(t)$ for each $t \ge \tau$.

**Definition 2.7.** A generalized process $\mathcal{G} = \{\mathcal{G}(\tau)\}_{\tau \in \mathbb{R}}$ which satisfies the condition

(C4) (Concatenation) If $\varphi, \psi \in \mathcal{G}$ with $\varphi \in \mathcal{G}(\tau)$, $\psi \in \mathcal{G}(r)$ and $\varphi(s) = \psi(s)$ for some $s \ge r \ge \tau$, then $\theta \in \mathcal{G}(\tau)$, where $\theta(t) := \begin{cases} \varphi(t), & t \in [\tau, s], \\ \psi(t), & t \in (s, \infty), \end{cases}$

is called an exact (or strict) generalized process.

A multivalued process $\{U_{\mathcal{G}}(t, \tau)\}_{t \ge \tau}$ defined by a generalized process $\mathcal{G}$ is a family of multivalued operators $U_{\mathcal{G}}(t, \tau) : P(X) \to P(X)$, $-\infty < \tau \le t < +\infty$, such that for each $\tau \in \mathbb{R}$

$$ U_{\mathcal{G}}(t, \tau)E = \{\varphi(t); \varphi \in \mathcal{G}(\tau) \text{ with } \varphi(\tau) \in E\}, \quad t \geq \tau. $$

**Theorem 2.8 ([22, 23]).** Let $\mathcal{G}$ be an exact generalized process. If $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is a multivalued process defined by $\mathcal{G}$, then $\{U_{\mathcal{G}}(t, \tau)\}_{t \geq \tau}$ is an exact multivalued process on $P(X)$, i.e.,

1. $U_{\mathcal{G}}(t, t) = Id_{P(X)}$,

2. $U_{\mathcal{G}}(t, \tau) = U_{\mathcal{G}}(t, s)U_{\mathcal{G}}(s, \tau)$ for all $-\infty < \tau \le s \le t < +\infty$.

A family of sets $K = \{K(t) \subset X : t \in \mathbb{R}\}$ will be called a nonautonomous set. The family $K$ is closed (compact, bounded) if $K(t)$ is closed (compact, bounded) for all $t \in \mathbb{R}$. The $\omega$-limit set $\omega(t, E)$ consists of the pullback limits of all converging sequences $\{\xi_n\}_{n \in \mathbb{N}}$ with $\xi_n \in U_{\mathcal{G}}(t, s_n)E$, $s_n \to -\infty$. Let $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ be a family of subsets of $X$. We have the following concepts of invariance:
---PAGE_BREAK---

• $\mathcal{A}$ is positively invariant if $U_G(t, \tau)\mathcal{A}(\tau) \subset \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$;

• $\mathcal{A}$ is negatively invariant if $\mathcal{A}(t) \subset U_G(t, \tau)\mathcal{A}(\tau)$ for all $-\infty < \tau \le t < \infty$;

• $\mathcal{A}$ is invariant if $U_G(t, \tau)\mathcal{A}(\tau) = \mathcal{A}(t)$ for all $-\infty < \tau \le t < \infty$.

**Definition 2.9.** Let $t \in \mathbb{R}$.

1. A set $\mathcal{A}(t) \subset X$ pullback attracts a set $B \subset X$ at time $t$ if
$$ \mathrm{dist}(U_{\mathcal{G}}(t, s)B, \mathcal{A}(t)) \to 0 \quad \text{as } s \to -\infty. $$

2. A family $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback attracts bounded sets of $X$ if $\mathcal{A}(\tau) \subset X$ pullback attracts all bounded subsets at time $\tau$, for each $\tau \in \mathbb{R}$. In this case, we say that the nonautonomous set $\mathcal{A}$ is pullback attracting.

3. A set $\mathcal{A}(t) \subset X$ pullback absorbs bounded subsets of $X$ at time $t$ if, for each bounded set $B$ in $X$, there exists $T = T(t, B) \le t$ such that $U_G(t, \tau)B \subset \mathcal{A}(t)$ for all $\tau \le T$.

4. A family $\{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ pullback absorbs bounded subsets of $X$ if, for each $t \in \mathbb{R}$, $\mathcal{A}(t)$ pullback absorbs bounded sets at time $t$.
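
A standard scalar example (not from the paper, included only for orientation) illustrates pullback attraction in the simplest setting:

```latex
% For \dot{x} = -x + g(t) with g bounded and continuous, the process is
%   U(t,s)x_s = e^{-(t-s)} x_s + \int_s^t e^{-(t-r)} g(r)\, dr .
% Fixing t and letting s \to -\infty, U(t,s)x_s converges, for every x_s, to
\xi(t) := \int_{-\infty}^{t} e^{-(t-r)} g(r)\, dr ,
% so the singleton family \mathcal{A}(t) = \{\xi(t)\} pullback attracts bounded
% sets in the sense of Definition 2.9 and is invariant: U(t,\tau)\xi(\tau) = \xi(t).
```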

**2.3. Strong solutions.** Consider the following initial value problem:

$$ (P_t) \quad \left\{ \begin{aligned} \frac{du}{dt}(t) + A(t)u(t) &\ni f(t), & t > \tau \\ u(\tau) &= u_0, \end{aligned} \right. $$

where for each $t > \tau$, $A(t)$ is maximal monotone in a Hilbert space $H$, $f \in L^1(\tau, T; H)$ and $u_0 \in H$. Moreover, suppose $\mathcal{D}(A(t)) = \mathcal{D}(A(\tau))$ for all $t, \tau \in \mathbb{R}$ and $\overline{\mathcal{D}(A(t))} = H$ for all $t \in \mathbb{R}$.

**Definition 2.10.** A function $u : [\tau, T] \to H$ is called a strong solution of $(P_t)$ on $[\tau, T]$ if

(i) $u \in C([\tau, T]; H)$;

(ii) $u$ is absolutely continuous on any compact subset of $(\tau, T)$;

(iii) $u(t) \in D(A(t))$ for a.e. $t \in [\tau, T]$, $u(\tau) = u_0$ and $u$ satisfies the inclusion in $(P_t)$ for a.e. $t \in [\tau, T]$.

**Definition 2.11.** A strong solution of (S2) is a pair $(u, v)$ with $u, v \in C([\tau, T]; H)$ for which there exist $f, g \in L^1(\tau, T; H)$ with $f(t) \in F(u(t), v(t))$, $g(t) \in G(u(t), v(t))$ a.e. in $(\tau, T)$, such that $(u, v)$ is a strong solution (see Definition 2.10) over $(\tau, T)$ of the system $(P_1)$ below:

$$ (P_1) \quad \left\{ \begin{aligned} \frac{du}{dt} + A(t)u &= f \\ \frac{dv}{dt} + B(t)v &= g \\ u(\tau) &= u_0, \quad v(\tau) = v_0. \end{aligned} \right. $$

**Theorem 2.12 ([27]).** Let $A = \{A(t)\}_{t>\tau}$ and $B = \{B(t)\}_{t>\tau}$ be families of univalued operators $A(t) = \partial\varphi^t$, $B(t) = \partial\psi^t$ with $\varphi^t$, $\psi^t$ nonnegative maps satisfying **Assumption A** with $\partial\varphi^t(0) = \partial\psi^t(0) = 0$. Suppose also that each of $A$ and $B$ generates a compact evolution process, and let $F, G: H \times H \to P(H)$ be upper semicontinuous and bounded multivalued maps. Then, given a bounded subset $B_0 \subset H \times H$, there exists $T_0 > 0$ such that for each $(u_0, v_0) \in B_0$ there exists at least one strong solution $(u, v)$ of (S2) defined on $[\tau, T_0]$. If, in addition, the pair $(F, G)$ is positively sublinear, then, given $T > \tau$, the same conclusion holds with $T_0 = T$.
---PAGE_BREAK---

Let $D(u_{\tau}, v_{\tau})$ be the set of solutions of (S2) with initial data $(u_{\tau}, v_{\tau})$ and define $G(\tau) := \bigcup_{(u_{\tau}, v_{\tau}) \in H \times H} D(u_{\tau}, v_{\tau})$. Consider $\mathbb{G} := \{G(\tau)\}_{\tau \in \mathbb{R}}$.

**Theorem 2.13 ([27]).** Under the conditions of Theorem 2.12, $\mathbb{G}$ is an exact generalized process.

Let $\Omega \subset \mathbb{R}^n$, $n \ge 1$, be a bounded smooth domain and write $H := L^2(\Omega)$ and $Y := W^{1,p(\cdot)}(\Omega)$ with $p^- > 2$. Then $Y \subset H \subset Y^*$ with continuous and dense embeddings. We refer the reader to [7, 8] and the references therein for properties of the Lebesgue and Sobolev spaces with variable exponents. In particular, with

$$L^{p(\cdot)}(\Omega) := \{u : \Omega \to \mathbb{R} : u \text{ is measurable, } \int_{\Omega} |u(x)|^{p(x)} dx < \infty\}$$

and $L_+^\infty(\Omega) := \{q \in L^\infty(\Omega) : \text{ess inf } q \ge 1\}$, define

$$\rho(u) := \int_{\Omega} |u(x)|^{p(x)} dx, \quad \|u\|_{L^{p(\cdot)}(\Omega)} := \inf \left\{ \lambda > 0 : \rho\left(\frac{u}{\lambda}\right) \le 1 \right\}$$

for $u \in L^{p(\cdot)}(\Omega)$ and $p \in L_+^\infty(\Omega)$.
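
For orientation, when the exponent is constant, $p(x) \equiv p$, the Luxemburg norm above reduces to the usual $L^p$ norm (a standard fact, recorded here as a quick check):

```latex
\rho\!\left(\frac{u}{\lambda}\right) = \lambda^{-p} \int_{\Omega} |u|^{p}\, dx \le 1
\iff \lambda \ge \left( \int_{\Omega} |u|^{p}\, dx \right)^{1/p},
\qquad\text{hence}\qquad
\|u\|_{L^{p(\cdot)}(\Omega)} = \|u\|_{L^{p}(\Omega)} .
```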

Consider the operator $A(t)$ defined on $Y$ which associates to each $u \in Y$ the element $A(t)u \in Y^*$, $A(t)u: Y \to \mathbb{R}$, given by

$$A(t)u(v) := \int_{\Omega} D_1(t,x) |\nabla u(x)|^{p(x)-2} \nabla u(x) \cdot \nabla v(x) dx + \int_{\Omega} |u(x)|^{p(x)-2} u(x)v(x) dx.$$

The authors proved in [13] that:

• For each $t \in [\tau, T]$ the operator $A(t): Y \to Y^*$, with domain $Y = W^{1,p(\cdot)}(\Omega)$, is maximal monotone and $A(t)(Y) = Y^*$.

• The realization of the operator $A(t)$ in $H = L^2(\Omega)$, i.e.,

$$A_H(t)u = -\operatorname{div}(D_1(t)|\nabla u(t)|^{p(x)-2}\nabla u(t)) + |u(t)|^{p(x)-2}u(t),$$

is maximal monotone in $H$ for each $t \in [\tau, T]$.

• The operator $A_H(t)$ is the subdifferential $\partial\varphi_{p(\cdot)}^t$ of the convex, proper and lower semicontinuous map $\varphi_{p(\cdot)}^t: L^2(\Omega) \to \mathbb{R} \cup \{+\infty\}$ given by

$$\varphi_{p(\cdot)}^t(u) = \begin{cases} \displaystyle \int_{\Omega} \frac{D_1(t,x)}{p(x)} |\nabla u|^{p(x)} dx + \int_{\Omega} \frac{1}{p(x)} |u|^{p(x)} dx & \text{if } u \in Y, \\ +\infty, & \text{otherwise.} \end{cases} \quad (1)$$

Using the following elementary assertion we can obtain estimates on the operator considering only two cases.

**Proposition 1 ([1]).** Let $\lambda, \mu$ be arbitrary nonnegative numbers. Then for all positive $\alpha, \theta$ with $\alpha \ge \theta$,

$$\lambda^{\alpha} + \mu^{\theta} \geq \frac{1}{2^{\alpha}} \begin{cases} (\lambda + \mu)^{\alpha} & \text{if } \lambda + \mu < 1, \\ (\lambda + \mu)^{\theta} & \text{if } \lambda + \mu \geq 1. \end{cases}$$
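
Proposition 1 can also be checked numerically; the sketch below (not part of the paper, and the function names are ours) samples random $\lambda, \mu \ge 0$ and exponents $\alpha \ge \theta > 0$ and verifies the stated lower bound:

```python
import random

def prop1_lower_bound(lam, mu, alpha, theta):
    # Right-hand side of Proposition 1: 2^{-alpha}(lam+mu)^alpha if lam+mu < 1,
    # and 2^{-alpha}(lam+mu)^theta otherwise.
    s = lam + mu
    exponent = alpha if s < 1 else theta
    return (s ** exponent) / (2 ** alpha)

def check_prop1(n_samples=20000, seed=0):
    # Sample nonnegative lam, mu and exponents alpha >= theta > 0 and verify
    # lam^alpha + mu^theta >= prop1_lower_bound(lam, mu, alpha, theta).
    rng = random.Random(seed)
    for _ in range(n_samples):
        theta = rng.uniform(0.1, 5.0)
        alpha = theta + rng.uniform(0.0, 5.0)
        lam, mu = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
        lhs = lam ** alpha + mu ** theta
        if lhs < prop1_lower_bound(lam, mu, alpha, theta) - 1e-12:
            return False
    return True
```

On such samples `check_prop1()` returns `True`, consistent with the proposition.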

Then it is easy to show that for every $u \in Y$

$$\langle A(t)u, u\rangle_{Y^*,Y} \geq \frac{\min\{\beta, 1\}}{2^{p^+}} \begin{cases} \|u\|_Y^{p^+} & \text{if } \|u\|_Y < 1, \\ \|u\|_Y^{p^-} & \text{if } \|u\|_Y \geq 1. \end{cases} \quad (2)$$
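
A sketch of how (2) follows (assuming the standard modular-norm inequalities for variable exponent spaces and the norm $\|u\|_Y = \|u\|_{L^{p(\cdot)}(\Omega)} + \|\nabla u\|_{L^{p(\cdot)}(\Omega)}$; the details are as in [13]):

```latex
\langle A(t)u, u\rangle_{Y^*,Y}
  = \int_{\Omega} D_1(t,x)\,|\nabla u|^{p(x)}\,dx + \int_{\Omega} |u|^{p(x)}\,dx
  \ge \min\{\beta,1\}\,\bigl[\rho(|\nabla u|) + \rho(u)\bigr].
% The modular--norm relation gives, for v \in L^{p(\cdot)}(\Omega),
%   \rho(v) \ge \|v\|_{L^{p(\cdot)}}^{p^+} if \|v\|_{L^{p(\cdot)}} < 1,
%   \rho(v) \ge \|v\|_{L^{p(\cdot)}}^{p^-} if \|v\|_{L^{p(\cdot)}} \ge 1.
% Applying this to |\nabla u| and u and then Proposition 1 with
% \lambda = \|\nabla u\|_{L^{p(\cdot)}}, \mu = \|u\|_{L^{p(\cdot)}} (and the exponents
% p^+ \ge p^- chosen in each of the four cases accordingly) yields a lower bound
% 2^{-p^+}(\lambda+\mu)^{p^+} or 2^{-p^+}(\lambda+\mu)^{p^-} according to whether
% \lambda + \mu = \|u\|_Y is < 1 or \ge 1, which is (2).
```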

From Example 4.4 in the last section of [27] we can apply Theorem 2.12 and Theorem 2.13 for $A(t)u = -\operatorname{div}(D_1(t, \cdot)|\nabla u|^{p(\cdot)-2}\nabla u) + |u|^{p(\cdot)-2}u$ and
---PAGE_BREAK---
$B(t)v = -\operatorname{div}(D_2(t, \cdot)|\nabla v|^{q(\cdot)-2}\nabla v) + |v|^{q(\cdot)-2}v$ and conclude that system (S) has global solutions and defines an exact generalized process $\mathbb{G}$.

**3. Existence of the pullback attractor.** First, we provide estimates on the solutions in the spaces $H \times H$ and $Y \times Y$.

**Lemma 3.1.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist a positive number $r_0$ and a constant $T_0$, neither depending on the initial data, such that

$$\|(u_1(t), u_2(t))\|_{H \times H} \le r_0, \quad \forall t \ge T_0 + \tau.$$

*Proof.* Let $\varphi = (u_1, u_2) \in \mathbb{G}$ be a solution of (S). Then there exists a pair $(f,g) \in \operatorname{Sel} F(u_1, u_2) \times \operatorname{Sel} G(u_1, u_2)$ with $f, g \in L^1(\tau, T; H)$ for each $T > \tau$ such that $u_1$, $u_2$ satisfy the problem

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega, \\
\\
u_1(\tau,x) = u_{1,0}(x), \quad u_2(\tau,x) = u_{2,0}(x) & \text{in } \Omega.
\end{array}
\right.
\qquad (3)
$$

Let $\alpha := 4(|\Omega| + 1)^2$ and $\sigma := \frac{\min\{\beta, 1\}}{2^{\max\{p^+, q^+\}}}$. Multiplying the first equation in (3) by $u_1$, the second by $u_2$ and using (2), we obtain

$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \leq \begin{cases} -\dfrac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_H^{p^+} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_1, \\ -\dfrac{\sigma}{\alpha^{p^-}} \|u_1(t)\|_H^{p^-} + \langle f(t), u_1(t) \rangle_H & \text{if } t \in I_2, \end{cases} \quad (4)
$$

where

$$I_1 := \{t \in (\tau, T) : \|u_1(t)\|_Y < 1\}, \quad I_2 := \{t \in (\tau, T) : \|u_1(t)\|_Y \ge 1\},$$

and

$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \leq
\begin{cases}
-\dfrac{\sigma}{\alpha^{q^+}} \|u_2(t)\|_H^{q^+} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_1, \\
-\dfrac{\sigma}{\alpha^{q^-}} \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H & \text{if } t \in \tilde{I}_2,
\end{cases}
$$

where

$$
\tilde{I}_1 := \{t \in (\tau, T) : \|u_2(t)\|_Y < 1\}, \quad \tilde{I}_2 := \{t \in (\tau, T) : \|u_2(t)\|_Y \ge 1\}.
$$

Now, define $r := \frac{p^+}{p^-} > 1$ and let $r'$ be such that $\frac{1}{r} + \frac{1}{r'} = 1$. Then, by Young's inequality,

$$
-\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^+} \le r \left( -\frac{\sigma}{\alpha^{p^+}} \|u_1(t)\|_{H}^{p^-} + \frac{\sigma}{\alpha^{p^+} r'} \right). \quad (5)
$$
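
The inequality (5) is a direct consequence of Young's inequality $ab \le \frac{a^r}{r} + \frac{b^{r'}}{r'}$ with $a = \|u_1(t)\|_H^{p^-}$ and $b = 1$:

```latex
\|u_1(t)\|_H^{p^-}
  \;\le\; \frac{\bigl(\|u_1(t)\|_H^{p^-}\bigr)^{r}}{r} + \frac{1}{r'}
  \;=\; \frac{\|u_1(t)\|_H^{p^+}}{r} + \frac{1}{r'},
\qquad r = \frac{p^+}{p^-},
% so that
-\|u_1(t)\|_H^{p^+} \;\le\; -r\,\|u_1(t)\|_H^{p^-} + \frac{r}{r'},
% and multiplying by \sigma/\alpha^{p^+} > 0 gives (5).
```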

Using (5) in (4) we obtain

$$
\frac{1}{2} \frac{d}{dt} \|u_1(t)\|_{H}^{2} \leq -C_{2} \|u_{1}(t)\|_{H}^{p^{-}} + \langle f(t), u_{1}(t) \rangle_{H} + C_{1} \quad \forall t \in I := (\tau, T), \quad (6)
$$

where $C_1 := \frac{L\sigma}{p^{-}\alpha^{p^{-}}}$ and $C_2 := \frac{\min\{1,\beta\}}{(2\alpha)^L}$ with $L := \max\{p^+, q^+\}$.

In an analogous way, taking $\tilde{r} := \frac{q^+}{q^-} > 1$ and $\tilde{r}'$ such that $\frac{1}{\tilde{r}} + \frac{1}{\tilde{r}'} = 1$, we have

$$
\frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1, \quad \forall t \in I,
$$

where $\tilde{C}_1 := \frac{L\sigma}{q^{-}\alpha^{q^{-}}}$ and $\tilde{C}_2 = C_2 = \frac{\min\{1,\beta\}}{(2\alpha)^L}$.
---PAGE_BREAK---

We can suppose, without loss of generality, that $p^{-} \ge q^{-}$. If $p^{-} = q^{-}$ we obtain an expression similar to (6) with $q^{-}$ in place of $p^{-}$. If $p^{-} > q^{-}$, taking $\theta := \frac{p^{-}}{q^{-}} > 1$, $\theta'$ such that $\frac{1}{\theta'} + \frac{1}{\theta} = 1$ and $\epsilon > 0$, we have

$$
\|u_1(t)\|_H^{q^-} = \frac{1}{\epsilon}\left(\epsilon \|u_1(t)\|_H^{q^-}\right) \le \frac{1}{\theta' \epsilon^{\theta'}} + \frac{\epsilon^{\theta}}{\theta} \|u_1(t)\|_H^{p^-}
$$

and then

$$
-C_2 \|u_1(t)\|_H^{p^-} \le -\frac{C_2 \theta}{\epsilon^{\theta}} \|u_1(t)\|_H^{q^-} + \frac{\theta C_2}{\theta' \epsilon^{\theta} \epsilon^{\theta'}}.
$$

Thus we obtain

$$
\left\{
\begin{array}{l}
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_1(t)\|_H^2 \le -\frac{C_2 \theta}{\epsilon^{\theta}} \|u_1(t)\|_H^{q^-} + \langle f(t), u_1(t) \rangle_H + C_1 + \frac{\theta C_2}{\theta' \epsilon^{\theta} \epsilon^{\theta'}} \\
\\
\displaystyle \frac{1}{2} \frac{d}{dt} \|u_2(t)\|_H^2 \le -\tilde{C}_2 \|u_2(t)\|_H^{q^-} + \langle g(t), u_2(t) \rangle_H + \tilde{C}_1
\end{array}
\right.
\quad (7)
$$

We estimate $\langle f(t), u_1(t) \rangle_H$ and $\langle g(t), u_2(t) \rangle_H$ using the assumption that $(F, G)$ is positively sublinear (see Definition 2.5) and Young's inequality. Choosing a convenient, sufficiently small $\epsilon$, and using $(x+y)^{s} \le 2^{s-1}(x^{s} + y^{s})$ for $s \ge 1$, we obtain

$$
\begin{align*}
\frac{1}{2} \frac{d}{dt} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2) &\le -C_5 (\|u_1(t)\|_H^{q^-} + \|u_2(t)\|_H^{q^-}) + C_6 \\
&\le -\frac{C_5}{2^{q^-/2}} (\|u_1(t)\|_H^2 + \|u_2(t)\|_H^2)^{q^-/2} + C_6,
\end{align*}
$$

where $C_5$, $C_6 > 0$ are constants that depend on the numbers $|\Omega|$, $\beta$, $p^-$, $p^+$, $q^-$, $q^+$, $a$, $b$, $c$ and $m_0$.

Hence, the function $y(t) := \|u_1(t)\|_H^2 + \|u_2(t)\|_H^2$ satisfies the inequality

$$
y'(t) \leq -\frac{2C_5}{2^{q^-/2}}\, y(t)^{q^-/2} + 2C_6, \quad t > \tau.
$$

From Lemma 5.1 in [28] we obtain

$$
y(t) \le \left( \frac{2^{q^-/2} C_6}{C_5} \right)^{2/q^-} + \left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right)(t-\tau) \right]^{-1/(q^-/2-1)} .
$$
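
The bound above is the standard comparison estimate for such differential inequalities (stated here, in hedged form, as the type of result being invoked from [28]): if $y \ge 0$ is absolutely continuous and satisfies $y' \le -\gamma y^{p} + \delta$ with $\gamma, \delta > 0$ and $p > 1$, then

```latex
y(t) \;\le\; \left(\frac{\delta}{\gamma}\right)^{1/p}
      \;+\; \bigl[\gamma\,(p-1)\,(t-\tau)\bigr]^{-1/(p-1)},
\qquad t > \tau,
% independently of the initial value y(\tau); here it is applied with
% \gamma = \frac{2C_5}{2^{q^-/2}}, \qquad \delta = 2C_6, \qquad p = \frac{q^-}{2} > 1 .
```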

Let $T_0 > 0$ be such that $\left[ \frac{2C_5}{2^{q^-/2}} \left(\frac{q^-}{2} - 1\right) T_0 \right]^{-1/(q^-/2-1)} \le 1$. Then

$$
\|u_1(t)\|_{H}^{2} + \|u_{2}(t)\|_{H}^{2} \leq \kappa_{0} := (C_{6}2^{q^{-}/2}/C_{5})^{2/q^{-}} + 1 \quad \text{for all } t \geq T_{0} + \tau,
$$

so the assertion holds with $r_0 := \kappa_0^{1/2}$. $\square$

**Lemma 3.2.** Let $(u_1, u_2)$ be a solution of problem (S). Then there exist positive constants $r_1$ and $T_1 > T_0$, which do not depend on the initial data, such that

$$
\|(u_1(t), u_2(t))\|_{Y \times Y} \le r_1, \quad \forall t \ge T_1 + \tau.
$$

*Proof.* Take $T_1 > T_0$. Since $(u_1, u_2)$ is a solution of (S), there exists a pair $(f,g) \in \operatorname{Sel} F(u_1, u_2) \times \operatorname{Sel} G(u_1, u_2)$ with $f, g \in L^1(\tau,T;H)$ such that $u_1$ and $u_2$ satisfy the problem

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt} + A(t)(u_1) = f & \text{in } (\tau, T) \times \Omega, \\
\\
\displaystyle \frac{du_2}{dt} + B(t)(u_2) = g & \text{in } (\tau, T) \times \Omega.
\end{array}
\right.
$$
---PAGE_BREAK---

Consider $\varphi_{p(\cdot)}^t$ as in (1). Using Assumption D (ii),

$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_{1}(t)) \leq \left\langle \partial \varphi_{p(\cdot)}^{t}(u_{1}(t)), \frac{du_{1}}{dt}(t) \right\rangle, $$

and then we obtain

$$ \frac{d}{dt} \varphi_{p(\cdot)}^{t}(u_1(t)) + \frac{1}{2} \left\| f(t) - \frac{du_1}{dt}(t) \right\|_{H}^{2} \leq \frac{1}{2} \|f(t)\|_{H}^{2}. $$

Now, by Lemma 3.1 and the fact that $F$ and $G$ are bounded, there exists a positive constant $C_0$ such that $\|f(t)\|_H \le C_0$ for all $t \ge T_0 + \tau$. Then, by the definition of the subdifferential and the Uniform Gronwall Lemma (see [28]), there exists a positive constant $C_1$ such that $\varphi_{p(\cdot)}^t(u_1(t)) \le C_1$ for all $t \ge T_1 + \tau$. Consequently, there exists a positive constant $K_1$ such that $\|u_1(t)\|_Y \le K_1$ for all $t \ge T_1 + \tau$.

In a similar way, we conclude that $\|u_2(t)\|_Y \le K_2$ for all $t \ge T_1 + \tau$ for some positive constant $K_2$. The assertion of the lemma then follows. $\square$

Let $U_G$ be the multivalued process defined by the generalized process $\mathbb{G}$. We know from [23] that for all $t \ge s$ in $\mathbb{R}$ the map $x \mapsto U_G(t,s)x \in P(H \times H)$ is closed, so we obtain from Theorem 18 in [4] the following result:

**Theorem 3.3.** If for any $t \in \mathbb{R}$ there exists a nonempty compact set $D(t)$ which pullback attracts all bounded sets of $H \times H$ at time $t$, then the family $\mathcal{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ with $\mathcal{A}(t) = \bigcup_{B \in \mathcal{B}(H \times H)} \omega(t, B)$ is the unique compact, negatively invariant pullback attracting set which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.

**Theorem 3.4.** The multivalued evolution process $U_G$ associated with system (S) has a compact, negatively invariant pullback attracting set $\mathfrak{A} = \{\mathcal{A}(t)\}_{t \in \mathbb{R}}$ which is minimal in the class of closed pullback attracting nonautonomous sets. Moreover, the sets $\mathcal{A}(t)$ are compact.

*Proof.* By Lemma 3.2, the family $D(t) := \overline{B_{Y \times Y}(0, r_1)}^{H \times H}$ consists of compact subsets of $H \times H$ and is pullback attracting. The result thus follows from Theorem 3.3. $\square$
**4. Forward attraction.** Pullback attractors contain all of the bounded entire solutions of the nonautonomous dynamical system [11, 12]. Simple counterexamples show that a pullback attractor need not be attracting in the forward sense [11]. However, since the pullback absorbing set $D$ above is also forward absorbing (the absorption time is independent of the initial time $\tau$), the forward omega limit sets $\omega_f(\tau, D)$ of the multivalued process starting at time $\tau$ are nonempty and compact subsets of the compact set $D$. Moreover, it follows from the positive invariance of $D$ and the two-parameter semigroup property that they are increasing in time. The forward limiting dynamics thus tends to the nonempty compact subset $\omega_f^\infty(D) = \cup_{\tau \ge 0} \omega_f(\tau, D) \subset D$, which was called the forward attracting set in [16]. (It is related to the Vishik uniform attractor, when that exists, but can be smaller since the attraction here need not be uniform in the initial time.)

As shown in Proposition 8 of [16] (in the context of single-valued difference equations, but a similar proof holds here) the forward attracting set $\omega_f^\infty(D)$ is asymptotically positively invariant with respect to the set-valued process $U_G(t, \tau)$, i.e., for any monotone decreasing sequence $\varepsilon_p \to 0$ as $p \to \infty$ there exists a monotone increasing sequence $T_p \to \infty$ as $p \to \infty$ such that for each $\tau \ge T_p$

$$U_G(t, \tau)\omega_f^\infty(D) \subset B_{\varepsilon_p}(\omega_f^\infty(D)), \quad t \ge \tau,$$

where $B_{\varepsilon_p}(\omega_f^\infty(D)) := \{x \in H \times H : \operatorname{dist}_{H \times H}(x, \omega_f^\infty(D)) < \varepsilon_p\}$.

Simple counterexamples show that the set $\omega_f^\infty(D)$ need not be invariant or even positively invariant, although it may be in special cases depending on the nature of the time varying terms in the system. For asymptotically autonomous systems $\omega_f^\infty(D)$ is contained in the global attractor $\mathcal{A}_\infty$ of the multivalued semigroup $G$ associated with the limiting autonomous system.
Moreover, it is possible to compare the global attractor $\mathcal{A}_\infty$ with the limit set $\mathcal{A}(\infty)$ defined by $\mathcal{A}(\infty) := \bigcap_{t \in \mathbb{R}} \overline{\bigcup_{r \ge t} \mathcal{A}(r)}$, which can be characterized by

$$\mathcal{A}(\infty) = \{x \in X : \exists\, r_n \nearrow \infty \text{ and } x_n \in \mathcal{A}(r_n) \text{ s.t. } x_n \to x\}.$$

This kind of comparison was done in [26] in the multivalued context.

**Theorem 4.1** ([26]). Suppose the pullback attractor $\mathcal{A}$ is forward compact, i.e., $\cup_{r \ge t} \mathcal{A}(r)$ is precompact for each $t \in \mathbb{R}$. Moreover, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9) such that $u(t+\tau) \to v(t)$ in $X$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $X$ as $\tau \to +\infty$. Then $\mathcal{A}_\infty \supset \mathcal{A}(\infty)$.

To obtain the equality $\mathcal{A}_\infty = \mathcal{A}(\infty)$ we need to assume stronger conditions, as in the next result.

**Theorem 4.2** ([26]). Under the assumptions of Theorem 4.1, we have $\mathcal{A}_\infty = \mathcal{A}(\infty)$ if we further assume the following conditions:

(a) $\mathcal{A}(\infty)$ forward attracts $\mathcal{A}_\infty$ by $U_G(\cdot, 0)$, i.e.,

$$\lim_{t \to +\infty} \operatorname{dist}(U_G(t, 0)\mathcal{A}_\infty, \mathcal{A}(\infty)) = 0;$$

(b) $\lim_{t \to +\infty} \sup_{x \in \mathcal{A}_\infty} \operatorname{dist}(G(t)x, U_G(t, 0)x) = 0.$
**5. Asymptotic upper semicontinuity.** In this section we establish the asymptotic upper semicontinuity of the component subsets of the pullback attractor. Specifically, we prove that the system (S) is asymptotically autonomous.

5.1. **Theoretical results.** In this subsection, motivated by problem (S), we study the asymptotic behavior of an abstract nonautonomous multivalued problem in a Hilbert space $H$ of the form
$$
\left\{
\begin{array}{ll}
\displaystyle \frac{du_1}{dt}(t) + A(t)u_1(t) \in F(u_1(t), u_2(t)) & t > \tau \\
\\
\displaystyle \frac{du_2}{dt}(t) + B(t)u_2(t) \in G(u_1(t), u_2(t)) & t > \tau \\
\\
(u_1(\tau), u_2(\tau)) = (\psi_{1,\tau}, \psi_{2,\tau}) =: \psi_{\tau},
\end{array}
\right.
\qquad (8)
$$

compared with that of an autonomous multivalued problem of the form

$$
\left\{
\begin{array}{ll}
\displaystyle \frac{dv_1}{dt}(t) + A_\infty v_1(t) \in F(v_1(t), v_2(t)) & t > 0 \\
\\
\displaystyle \frac{dv_2}{dt}(t) + B_\infty v_2(t) \in G(v_1(t), v_2(t)) & t > 0 \\
\\
(v_1(0), v_2(0)) = (\psi_{1,0}, \psi_{2,0}) =: \psi_0,
\end{array}
\right.
\qquad (9)
$$
where $A(t), B(t), A_\infty$ and $B_\infty$ are univalued operators in $H \times H$ and $F, G: H \times H \to P(H \times H)$ are multivalued maps.

Under appropriate relationships between the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$, the autonomous problem (9) is the asymptotically autonomous limit of the nonautonomous problem (8). In particular, we establish the convergence in the Hausdorff semi-distance of the component subsets of the pullback attractor of the nonautonomous problem (8) to the global autonomous attractor of the autonomous problem (9).

Some definitions on multivalued semigroups are recalled here; see for example [5, 17, 24] for more details.

**Definition 5.1.** Let $X$ be a complete metric space. The map $G : \mathbb{R}^+ \times X \to P(X)$ is called a multivalued semigroup (or *m-semiflow*) if

(1) $G(0, \cdot) = \mathbf{1}$ is the identity map;

(2) $G(t_1 + t_2, x) \subset G(t_1, G(t_2, x))$ for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.

It is called strict (or exact) if $G(t_1 + t_2, x) = G(t_1, G(t_2, x))$ for all $x \in X$ and $t_1, t_2 \in \mathbb{R}^+$.
**Definition 5.2.** Let $G$ be a multivalued semigroup on $X$. The set $A \subset X$ attracts the subset $B$ of $X$ if $\lim_{t \to \infty} \text{dist}_X(G(t, B), A) = 0$, where $\text{dist}_X$ denotes the Hausdorff semi-distance. The set $M$ is said to be a global $B$-attractor for $G$ if $M$ attracts any nonempty bounded subset $B \subset X$.
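The Hausdorff semi-distance used in Definition 5.2 and throughout this section is asymmetric: $\operatorname{dist}(A,B) = \sup_{a \in A} \inf_{b \in B} d(a,b)$ vanishes whenever $A$ is contained in the closure of $B$, not only when $A = B$. For finite point sets it is straightforward to compute; the following Python sketch (purely illustrative, not part of the paper) makes the asymmetry concrete on subsets of $\mathbb{R}$:

```python
def semi_dist(A, B):
    """Hausdorff semi-distance dist(A, B) = sup_{a in A} inf_{b in B} |a - b|
    for finite sets of reals (illustrative; the paper works in H x H)."""
    return max(min(abs(a - b) for b in B) for a in A)

# Asymmetry: {0} is contained in {0, 1}, so dist({0}, {0,1}) = 0,
# while dist({0,1}, {0}) = 1.
print(semi_dist([0.0], [0.0, 1.0]))  # 0.0
print(semi_dist([0.0, 1.0], [0.0]))  # 1.0
```

This asymmetry is exactly why attraction statements such as $\lim_{t \to +\infty} \operatorname{dist}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$ express upper semicontinuity only: the sets $\mathcal{A}(t)$ end up inside any neighborhood of $\mathcal{A}_\infty$, but need not fill it.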
Suppose that the multivalued evolution process $\{U(t, \tau) : t \ge \tau\}$ in $H \times H$ associated with problem (8) has a pullback attractor $\mathcal{A} = \{\mathcal{A}(t) : t \in \mathbb{R}\}$ and that the multivalued semigroup $G : \mathbb{R}^+ \times H \times H \to P(H \times H)$ associated with problem (9) has a global autonomous $B$-attractor $\mathcal{A}_\infty$ in the Hilbert space $H \times H$. The following result will be used later to establish the convergence in the Hausdorff semi-distance of the component subsets $\mathcal{A}(t)$ of the pullback attractor $\mathcal{A}$ to $\mathcal{A}_\infty$ as $t \to \infty$.
**Theorem 5.3.** Suppose that $C := \bigcup_{\tau \ge 0} \mathcal{A}(\tau)$ is a compact subset of $H \times H$. In addition, suppose that for each solution $u$ of problem (8) there exists a solution $v$ of problem (9) with initial values $\psi_\tau$ and $\psi_0$, respectively, such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$ whenever $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$. Then
$$ \lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0. $$
*Proof.* Suppose that this is not true. Then there would exist an $\epsilon_0 > 0$ and a real sequence $\{\tau_n\}_{n \in \mathbb{N}}$ with $\tau_n \nearrow +\infty$ such that $\text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0$ for all $n \in \mathbb{N}$. Since the sets $\mathcal{A}(\tau_n)$ are compact, there exist $a_n \in \mathcal{A}(\tau_n)$ such that
$$ \text{dist}_{H \times H}(a_n, \mathcal{A}_\infty) = \text{dist}_{H \times H}(\mathcal{A}(\tau_n), \mathcal{A}_\infty) \ge 3\epsilon_0, \quad (10) $$
for each $n \in \mathbb{N}$. By the attraction property of the multivalued semigroup, we have $\text{dist}_{H \times H}(G(\tau_{n_0}, C), \mathcal{A}_\infty) \le \epsilon_0$ for $n_0 > 0$ large enough. Moreover, by the negative invariance of the pullback attractor there exist $b_n \in \mathcal{A}(\tau_n - \tau_{n_0}) \subset C$ for $n > n_0$ such that $a_n \in U(\tau_n, \tau_n - \tau_{n_0})b_n$ for each $n > n_0$. Since $C$ is compact, there is a convergent subsequence $b_{n'} \to b \in C$. Since $a_{n'} \in U(\tau_{n'}, \tau_{n'} - \tau_{n_0})b_{n'}$ there exists
a solution $u_{n'} = (u_{1n'}, u_{2n'})$ of
$$
\begin{cases}
\frac{du_{1n'}}{dt}(t) + A(t)u_{1n'}(t) \in F(u_{1n'}(t), u_{2n'}(t)) \\
\frac{du_{2n'}}{dt}(t) + B(t)u_{2n'}(t) \in G(u_{1n'}(t), u_{2n'}(t)) \\
u_{n'}(\tau_{n'} - \tau_{n_0}) = b_{n'},
\end{cases}
$$
such that $a_{n'} = u_{n'}(\tau_{n'})$.
Writing $\tau_{n'} = \tau_{n_0} + (\tau_{n'} - \tau_{n_0})$ and using the hypotheses with $t = \tau_{n_0}$ and $\tau = \tau_{n'} - \tau_{n_0} \to +\infty$ (as $n' \to +\infty$), there exists a solution $v_{n'}$ of
$$
\left\{
\begin{array}{l}
\displaystyle \frac{dv_{1n'}}{dt}(t) + A_\infty v_{1n'}(t) \in F(v_{1n'}(t), v_{2n'}(t)) \\
\displaystyle \frac{dv_{2n'}}{dt}(t) + B_\infty v_{2n'}(t) \in G(v_{1n'}(t), v_{2n'}(t)) \\
v_{n'}(0) = b,
\end{array}
\right.
$$
such that
$$
\| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} < \epsilon_0
$$
for $n'$ large enough. Hence,
$$
\begin{align*}
\mathrm{dist}_{H \times H} (a_{n'}, \mathcal{A}_{\infty}) &= \mathrm{dist}_{H \times H} (u_{n'}(\tau_{n'}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (v_{n'}(\tau_{n_0}), \mathcal{A}_{\infty}) \\
&\leq \| u_{n'}(\tau_{n'}) - v_{n'}(\tau_{n_0}) \|_{H \times H} + \mathrm{dist}_{H \times H} (G(\tau_{n_0}, C), \mathcal{A}_{\infty}) \\
&\leq 2\epsilon_0,
\end{align*}
$$
which contradicts (10). □
The next result is very useful for checking that the hypothesis of asymptotic continuity of the nonautonomous flow in the preceding theorem holds for problems like (8). In order to obtain the result we suppose that the operators $A(t)$, $A_\infty$ and $B(t)$, $B_\infty$ satisfy the following assumption.
**Assumption G.** For each $\tau \in \mathbb{R}$ there exist nonincreasing functions $g_{1,\tau}, g_{2,\tau} : [0,+\infty) \rightarrow [0,+\infty)$ such that $g_{i,\tau}(t) \rightarrow 0$ as $\tau \rightarrow +\infty$ for each $t \ge 0$, $i=1,2$, and

$$\langle A(t+\tau)u_1(t+\tau) - A_\infty v_1(t), u_1(t+\tau) - v_1(t) \rangle \ge -g_{1,\tau}(t), \quad \text{for all } t \in \mathbb{R}^+, \ \tau \in \mathbb{R},$$

and

$$\langle B(t+\tau)u_2(t+\tau) - B_\infty v_2(t), u_2(t+\tau) - v_2(t) \rangle \ge -g_{2,\tau}(t), \quad \text{for all } t \in \mathbb{R}^+, \ \tau \in \mathbb{R},$$

for any solution $u = (u_1,u_2)$ of (8) and $v = (v_1,v_2)$ of (9).
**Lemma 5.4.** Suppose that Assumption G is satisfied. If $\psi_\tau = (\psi_{1,\tau}, \psi_{2,\tau}) \to \psi_0 = (\psi_{1,0}, \psi_{2,0})$ in $H \times H$ as $\tau \to +\infty$, then for each solution $u$ of (8) there exists a solution $v$ of (9) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$.
*Proof.* Let $u$ be a solution of (8); then there exists $f = (f_1, f_2)$ with $f_1, f_2 \in L^2([\tau, T]; H)$ such that $f_1(t) \in F(u_1(t), u_2(t))$ and $f_2(t) \in G(u_1(t), u_2(t))$ a.e., and

$$
\left\{
\begin{array}{ll}
\dfrac{du_1}{dt}(t) + A(t)u_1(t) = f_1(t), & \text{a.e. in } (\tau, T], \\
\dfrac{du_2}{dt}(t) + B(t)u_2(t) = f_2(t), & \text{a.e. in } (\tau, T], \\
u(\tau) = \psi_{\tau}. &
\end{array}
\right.
\qquad (11)
$$
Consider $g \in L^2([0, T]; H \times H)$ such that $g(t) = f(t+\tau)$ and let $v$ be the unique solution of the problem
$$
\left\{
\begin{array}{ll}
\dfrac{dv_1}{dt}(t) + A_\infty v_1(t) = g_1(t), & \text{a.e. in } (0, T], \\
\dfrac{dv_2}{dt}(t) + B_\infty v_2(t) = g_2(t), & \text{a.e. in } (0, T], \\
v(0) = \psi_0. &
\end{array}
\right.
\qquad (12)
$$
Subtracting the equations in (11) from the equations in (12) gives
$$ \frac{d}{dt}(u_1(t+\tau) - v_1(t)) + A(t+\tau)u_1(t+\tau) - A_{\infty}v_1(t) = f_1(t+\tau) - g_1(t) $$
and
$$ \frac{d}{dt}(u_2(t+\tau) - v_2(t)) + B(t+\tau)u_2(t+\tau) - B_{\infty}v_2(t) = f_2(t+\tau) - g_2(t) $$
for a.e. $t \in [0, T]$. Since $g_i(t) = f_i(t+\tau)$, the right-hand sides vanish; taking the inner product with $u_i(t+\tau) - v_i(t)$ and using Assumption G, we obtain
$$ \frac{1}{2} \frac{d}{dt} \|u_i(t+\tau) - v_i(t)\|_H^2 \leq g_{i,\tau}(t), \quad i=1,2. $$
Integrating this last inequality from $0$ to $t$, and using that $g_{i,\tau}$ is nonincreasing, gives
$$ \|u_i(t+\tau) - v_i(t)\|_H^2 \leq \| \psi_{i,\tau} - \psi_{i,0} \|_H^2 + 2tg_{i,\tau}(0). $$
Since $\psi_{i,\tau} \to \psi_{i,0}$ in $H$ and $g_{i,\tau}(0) \to 0$ as $\tau \to +\infty$, the result follows. $\square$
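The mechanism of Lemma 5.4 can be seen in a scalar toy model (an illustrative sketch, not from the paper; the coefficient $a(t) = a_\infty + e^{-t}$ and all function names are assumptions made for the example): solve $u' + a(t)u = 0$ started at time $\tau$ and $v' + a_\infty v = 0$ started at time $0$, using closed-form solutions, and observe that the shifted solution $u(t+\tau)$ approaches $v(t)$ as $\tau \to +\infty$:

```python
import math

# Toy scalar version of (11)/(12): u' + (a_inf + e^{-s}) u = 0, u(tau) = psi,
# versus the limit problem v' + a_inf v = 0, v(0) = psi.
# Closed forms: u(t+tau) = psi * exp(-a_inf*t - (e^{-tau} - e^{-(t+tau)})),
#               v(t)     = psi * exp(-a_inf*t).

def u_shifted(t, tau, a_inf, psi):
    return psi * math.exp(-a_inf * t - (math.exp(-tau) - math.exp(-(t + tau))))

def v(t, a_inf, psi):
    return psi * math.exp(-a_inf * t)

a_inf, psi = 1.0, 2.0
for tau in (1.0, 5.0, 20.0):
    gap = max(abs(u_shifted(t / 10, tau, a_inf, psi) - v(t / 10, a_inf, psi))
              for t in range(0, 11))
    print(tau, gap)  # the gap on [0, 1] shrinks as tau grows
```

Here the role of $g_{1,\tau}(t)$ is played by a multiple of $e^{-\tau}$, which bounds the discrepancy between the time-shifted coefficient $a(t+\tau)$ and its limit $a_\infty$, exactly as in Assumption G.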
5.2. **Application to system (S).** The results in Subsection 5.1 are applied here to the nonlinear system of inclusions with spatially variable exponents (S) in the Hilbert space $\tilde{H} = H \times H$, with $H := L^2(\Omega)$.
We assume that the diffusion coefficients satisfy Assumption D and the following additional Assumption D (iii):
**Assumption D (iii).** For each $t \ge 0$, $D_i(t+\tau, \cdot) \to D_i^*(\cdot)$ in $L^\infty(\Omega)$ as $\tau \to +\infty$, for $i=1,2$.
Assumptions D (i)–D (ii) imply that the pointwise limit $D_i^*(x)$ as $t \to \infty$ exists and satisfies $0 < \beta \le D_i^*(x)$ for almost all $x \in \Omega$, $i=1,2$. Then the problem (S) with $D^*(x) = (D_1^*(x), D_2^*(x))$ is autonomous and has a global autonomous $B$-attractor as a particular case of the results in Section 3 (see also a direct proof in [25] for the autonomous system of inclusions without the nonlinear perturbation $|u|^{p(\cdot)-2}u$).
We will show that the dynamics of the original nonautonomous problem is asymptotically autonomous and that its pullback attractor converges upper semicontinuously to the autonomous global $B$-attractor $\mathcal{A}_\infty$ of the problem
$$
\left\{
\begin{array}{l}
\displaystyle \frac{\partial v_1}{\partial t}(t) - \operatorname{div} (D_1^* |\nabla v_1(t)|^{p(x)-2} \nabla v_1(t)) + |v_1(t)|^{p(x)-2} v_1(t) \in F(v_1(t), v_2(t)), \\[6pt]
\displaystyle \frac{\partial v_2}{\partial t}(t) - \operatorname{div} (D_2^* |\nabla v_2(t)|^{q(x)-2} \nabla v_2(t)) + |v_2(t)|^{q(x)-2} v_2(t) \in G(v_1(t), v_2(t)), \\[6pt]
v(0) = \psi_0.
\end{array}
\right.
\tag{13}
$$
In particular, we consider the operators
$$
\begin{align*}
A(t)u_1 &:= -\operatorname{div} (D_1(t)|\nabla u_1|^{p(x)-2}\nabla u_1) + |u_1|^{p(x)-2}u_1, \\
B(t)u_2 &:= -\operatorname{div} (D_2(t)|\nabla u_2|^{q(x)-2}\nabla u_2) + |u_2|^{q(x)-2}u_2, \\
A_\infty v_1 &:= -\operatorname{div} (D_1^*|\nabla v_1|^{p(x)-2}\nabla v_1) + |v_1|^{p(x)-2}v_1, \\
B_\infty v_2 &:= -\operatorname{div} (D_2^*|\nabla v_2|^{q(x)-2}\nabla v_2) + |v_2|^{q(x)-2}v_2.
\end{align*}
$$
Applying Lemma 3.1, there exist positive constants $T_0$, $B_0$ such that
$$
\|u(t)\|_{H \times H} \le B_0, \quad \forall t \ge T_0 + \tau.
$$
Moreover, applying Lemma 3.2 with $Y = W^{1,p(x)}(\Omega)$, there exist positive constants $T_1$, $B_1$ such that
$$
\|u(t)\|_{Y \times Y} \le B_1, \quad \forall t \ge T_1 + \tau. \tag{14}
$$
Since also $\|v(t)\|_{Y \times Y} \le B_1$ for all $t \ge T_1 + \tau$ and $Y \subset H$ with compact embedding, we obtain the following.

**Corollary 1.** $\overline{\cup_{\tau \in \mathbb{R}} \mathcal{A}(\tau)}$ is a compact subset of $H \times H$.
Using estimate (14), the proof of the next result follows the same lines as the proof of Theorem 4.2 of [14], and is therefore omitted here.
**Theorem 5.5.** If $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$ and $\psi_\tau \to \psi_0$ in $H \times H$ as $\tau \to +\infty$, then Assumption G is satisfied with $g_{i,\tau}(t) = K \|D_i(t+\tau, \cdot) - D_i^*(\cdot)\|_{L^\infty(\Omega)}$ $(i=1,2)$, where $K$ is a positive constant.
Observe that by Assumption D (iii) the function $g_{i,\tau}: [0, +\infty) \to [0, +\infty)$ given in Theorem 5.5 satisfies $g_{i,\tau}(t) \to 0$ as $\tau \to +\infty$ for each $t \ge 0$. The next result gives the desired asymptotic upper semi-continuous convergence.
**Theorem 5.6.** $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$.
*Proof.* Suppose that $\psi_\tau \in \mathcal{A}(\tau)$ and $\psi_\tau \to \psi_0$ in $H \times H$. Using the negative invariance of the pullback attractor and the estimate (14), it follows that $\{\psi_\tau : \tau \in \mathbb{R}\}$ is a bounded set in $Y \times Y$. Theorem 5.5 then guarantees that Assumption G is satisfied. Thus, by Lemma 5.4, for each solution $u = (u_1, u_2)$ of (S) there exists a solution $v = (v_1, v_2)$ of (13) such that $u(t+\tau) \to v(t)$ in $H \times H$ as $\tau \to +\infty$ for each $t \ge 0$. Theorem 5.3 then yields $\lim_{t \to +\infty} \text{dist}_{H \times H}(\mathcal{A}(t), \mathcal{A}_\infty) = 0$. $\square$
REFERENCES
[1] C. O. Alves, S. Shmarev, J. Simsen and M. S. Simsen, The Cauchy problem for a class of parabolic equations in weighted variable Sobolev spaces: existence and asymptotic behavior, *J. Math. Anal. Appl.*, **443** (2016), 265–294.

[2] J. P. Aubin and A. Cellina, *Differential Inclusions: Set-Valued Maps and Viability Theory*, Springer-Verlag, Berlin, 1984.

[3] J. P. Aubin and H. Frankowska, *Set-Valued Analysis*, Birkhäuser, Berlin, 1990.

[4] T. Caraballo, J. A. Langa, V. S. Melnik and J. Valero, Pullback attractors for nonautonomous and stochastic multivalued dynamical systems, *Set-Valued Anal.*, **11** (2003), 153–201.

[5] T. Caraballo, P. Marín-Rubio and J. C. Robinson, A comparison between two theories for multivalued semiflows and their asymptotic behaviour, *Set-Valued Anal.*, **11** (2003), 297–322.

[6] J. I. Díaz and I. I. Vrabie, Existence for reaction diffusion systems. A compactness method approach, *J. Math. Anal. Appl.*, **188** (1994), 521–540.

[7] L. Diening, P. Harjulehto, P. Hästö and M. Růžička, *Lebesgue and Sobolev Spaces with Variable Exponents*, Springer-Verlag, Berlin, Heidelberg, 2011.

[8] X. L. Fan and Q. H. Zhang, Existence of solutions for $p(x)$-Laplacian Dirichlet problems, *Nonlinear Anal.*, **52** (2003), 1843–1852.

[9] P. Harjulehto, P. Hästö, U. Lê and M. Nuortio, Overview of differential equations with non-standard growth, *Nonlinear Anal.*, **72** (2010), 4551–4574.

[10] P. E. Kloeden and T. Lorenz, Construction of nonautonomous forward attractors, *Proc. Amer. Math. Soc.*, **144** (2016), 259–268.

[11] P. E. Kloeden and P. Marín-Rubio, Negatively invariant sets and entire trajectories of set-valued dynamical systems, *Set-Valued Var. Anal.*, **19** (2011), 43–57.

[12] P. E. Kloeden and M. Rasmussen, *Nonautonomous Dynamical Systems*, Amer. Math. Soc., Providence, 2011.

[13] P. E. Kloeden and J. Simsen, Pullback attractors for non-autonomous evolution equations with spatially variable exponents, *Commun. Pure Appl. Anal.*, **13** (2014), 2543–2557.

[14] P. E. Kloeden and J. Simsen, Attractors of asymptotically autonomous quasilinear parabolic equation with spatially variable exponents, *J. Math. Anal. Appl.*, **425** (2015), 911–918.

[15] P. E. Kloeden, J. Simsen and M. S. Simsen, A pullback attractor for an asymptotically autonomous multivalued Cauchy problem with spatially variable exponent, *J. Math. Anal. Appl.*, **445** (2017), 513–531.

[16] P. E. Kloeden and M. Yang, Forward attraction in nonautonomous difference equations, *J. Difference Equ. Appl.*, **22** (2016), 513–525.

[17] V. S. Melnik and J. Valero, On attractors of multivalued semi-flows and differential inclusions, *Set-Valued Anal.*, **6** (1998), 83–111.

[18] C. V. Pao, On nonlinear reaction-diffusion systems, *J. Math. Anal. Appl.*, **87** (1982), 165–198.

[19] K. Rajagopal and M. Růžička, Mathematical modelling of electrorheological fluids, *Contin. Mech. Thermodyn.*, **13** (2001), 59–78.

[20] M. Růžička, Flow of shear dependent electrorheological fluids, *C. R. Acad. Sci. Paris, Série I*, **329** (1999), 393–398.

[21] M. Růžička, *Electrorheological Fluids: Modeling and Mathematical Theory*, Lecture Notes in Mathematics, vol. 1748, Springer-Verlag, Berlin, 2000.

[22] J. Simsen and J. Valero, Characterization of pullback attractors for multivalued nonautonomous dynamical systems, *Advances in Dynamical Systems and Control*, 179–195, Stud. Syst. Decis. Control, vol. 69, Springer, Cham, 2016.

[23] J. Simsen and E. Capelato, Some properties for exact generalized processes, *Continuous and Distributed Systems II*, 209–219, Studies in Systems, Decision and Control, vol. 30, Springer International Publishing, 2015.

[24] J. Simsen and C. B. Gentile, On $p$-Laplacian differential inclusions: global existence, compactness properties and asymptotic behavior, *Nonlinear Anal.*, **71** (2009), 3488–3500.

[25] J. Simsen and M. S. Simsen, Existence and upper semicontinuity of global attractors for $p(x)$-Laplacian systems, *J. Math. Anal. Appl.*, **388** (2012), 23–38.

[26] J. Simsen and M. S. Simsen, On asymptotically autonomous dynamics for multivalued evolution problems, *Discrete Contin. Dyn. Syst. Ser. B*, **24** (2019), no. 8, 3557–3567.

[27] J. Simsen and P. Wittbold, Compactness results with applications for nonautonomous coupled inclusions, *J. Math. Anal. Appl.*, **479** (2019), 426–449.

[28] R. Temam, *Infinite-Dimensional Dynamical Systems in Mechanics and Physics*, Springer-Verlag, New York, 1988.

[29] I. I. Vrabie, *Compactness Methods for Nonlinear Evolutions*, Second Edition, Pitman Monographs and Surveys in Pure and Applied Mathematics, New York, 1995.
Received March 2019; revised June 2019.
*E-mail address:* kloeden@na-uni.tuebingen.de

*E-mail address:* jacson@unifei.edu.br

*E-mail address:* petra.wittbold@uni-due.de
samples_new/texts_merged/4579765.md
ADDED
On the Sensitivity Conjecture
Avishay Tal *
April 18, 2016
Abstract
The sensitivity of a Boolean function $f: \{0,1\}^n \to \{0,1\}$ is the maximal number of neighbors a point in the Boolean hypercube has with different $f$-value. Roughly speaking, the block sensitivity allows flipping a set of bits (called a block) rather than just one bit in order to change the value of $f$. The sensitivity conjecture, posed by Nisan and Szegedy (CC, 1994), states that the block sensitivity, $bs(f)$, is at most polynomial in the sensitivity, $s(f)$, for any Boolean function $f$. A positive answer to the conjecture will have many consequences, as the block sensitivity is polynomially related to many other complexity measures such as the certificate complexity, the decision tree complexity and the degree. The conjecture is far from being understood, as there is an exponential gap between the known upper and lower bounds relating $bs(f)$ and $s(f)$.
We continue a line of work started by Kenyon and Kutin (Inf. Comput., 2004), studying the $\ell$-block sensitivity, $bs_\ell(f)$, where $\ell$ bounds the size of sensitive blocks. While for $bs_2(f)$ the picture is well understood with almost matching upper and lower bounds, for $bs_3(f)$ it is not. We show that any development in understanding $bs_3(f)$ in terms of $s(f)$ will have great implications on the original question. Namely, we show that either $bs(f)$ is at most sub-exponential in $s(f)$ (which improves the state-of-the-art upper bounds) or that $bs_3(f) \ge s(f)^{3-\epsilon}$ for some Boolean functions (which improves the state-of-the-art separations).
We generalize the question of $bs(f)$ versus $s(f)$ to bounded functions $f: \{0,1\}^n \to [0,1]$ and show an analog result to that of Kenyon and Kutin: $bs_\ell(f) = O(s(f))^\ell$. Surprisingly, in this case, the bounds are close to being tight. In particular, we construct a bounded function $f: \{0,1\}^n \to [0,1]$ with $bs(f) \ge n/\log n$ and $s(f) = O(\log n)$, a clear counterexample to the sensitivity conjecture for bounded functions.
Finally, we give a new super-quadratic separation between sensitivity and decision tree complexity by constructing Boolean functions with $\mathrm{DT}(f) \ge s(f)^{2.115}$. Prior to this work, only quadratic separations, $\mathrm{DT}(f) = s(f)^2$, were known.
*Institute for Advanced Study, Princeton, NJ. Email: avishay.tal@gmail.com. Research supported by the Simons Foundation, and by the National Science Foundation grant No. CCF-1412958. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
# 1 Introduction
A long-standing open problem in complexity and combinatorics asks about the relationship between two complexity measures of Boolean functions: the sensitivity and the block sensitivity. We first recall the definitions of the two complexity measures.
**Definition 1.1.** Let $f : \{0,1\}^n \to \{0,1\}$ be a Boolean function and $x \in \{0,1\}^n$ be a point. The sensitivity of $f$ at $x$ is the number of neighbors $y$ of $x$ in the Hamming cube such that $f(y) \neq f(x)$, i.e., $s(f,x) \triangleq | \{i \in [n] : f(x) \neq f(x \oplus e_i) \} |$.¹ The (maximal) sensitivity of $f$ is defined as $s(f) \triangleq \max_{x \in \{0,1\}^n} s(f,x)$.
|
| 28 |
+
|
| 29 |
+
**Definition 1.2.** Let $f : \{0,1\}^n \to \{0,1\}$ be a Boolean function and $x \in \{0,1\}^n$ be a point. For a block $B \subseteq [n]$, denote by $\mathbb{1}_B \in \{0,1\}^n$ its characteristic vector, i.e., $(\mathbb{1}_B)_i = 1$ iff $i \in B$. We say that a block $B$ is sensitive for $f$ on $x$ if $f(x) \neq f(x \oplus \mathbb{1}_B)$. The block-sensitivity of $f$ at $x$ is the maximal number of disjoint sensitive blocks for $f$ at $x$, i.e.,

$$bs(f, x) = \max\{r : \exists \text{ disjoint } B_1, B_2, \dots, B_r \subseteq [n] \text{ such that } \forall i \in [r],\ f(x) \neq f(x \oplus \mathbb{1}_{B_i})\}.$$

The (maximal) block-sensitivity of $f$ is defined as $bs(f) \triangleq \max_{x \in \{0,1\}^n} bs(f,x)$.

For shorthand, we will denote $(x \oplus e_i)$ and $(x \oplus \mathbb{1}_B)$ by $(x + e_i)$ and $(x + B)$ respectively. By definition, the block-sensitivity is at least the sensitivity by considering only blocks of size 1. The sensitivity conjecture, posed by Nisan and Szegedy [NS94], asks if a relation in the other direction holds as well.

**Conjecture 1.3 (The Sensitivity Conjecture).** $\exists d \; \forall f : bs(f) \leq s(f)^d$.

A stronger variant of the conjecture states that $d$ can be taken to be 2. Despite much work on the problem [Nis89, NS94, Rub95, KK04, Cha11, Vir11, AS11, HKP11, Bop12, ABG$^+$14, AP14, AV15, APV15, GKS15, Sze15, GNS$^+$16] there is still an exponential gap between the best known separations and the best known relations connecting the two complexity measures.

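To make Definitions 1.1 and 1.2 concrete, here is a small brute-force sketch in Python (the helper names are ours, and the search is exponential in $n$, so it is only usable for very small functions):

```python
def sensitivity(f, n):
    # s(f) = max over x of the number of coordinates i with f(x) != f(x XOR e_i)
    return max(
        sum(f(x) != f(x ^ (1 << i)) for i in range(n))
        for x in range(2 ** n)
    )

def block_sensitivity(f, n):
    # bs(f) = max over x of the largest number of pairwise disjoint
    # sensitive blocks, found by a recursive search over all nonempty blocks
    masks = range(1, 2 ** n)

    def best(x, used):
        return max(
            (1 + best(x, used | m)
             for m in masks
             if not (m & used) and f(x) != f(x ^ m)),
            default=0,
        )

    return max(best(x, 0) for x in range(2 ** n))

maj3 = lambda x: bin(x).count("1") >= 2        # majority of 3 bits
xor3 = lambda x: bin(x).count("1") % 2 == 1    # parity of 3 bits
assert sensitivity(maj3, 3) == 2 and block_sensitivity(maj3, 3) == 2
assert sensitivity(xor3, 3) == 3 and block_sensitivity(xor3, 3) == 3
```

On majority and parity of three bits this reproduces the values $s = bs = 2$ and $s = bs = 3$, respectively.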
**Known Separations.** An interesting example due to Rubinstein [Rub95] shows a quadratic separation between the two measures: $bs(f) = \frac{1}{2} \cdot s(f)^2$. This example was improved by [Vir11] and then by [AS11] to $bs(f) = \frac{2}{3} \cdot s(f)^2 \cdot (1 - o(1))$, which is the current state of the art.

**Known Relations.** Simon [Sim83] proved (implicitly) that $bs(f)$ is at most $4^{s(f)} \cdot s(f)$. The upper bound was improved by Kenyon and Kutin [KK04] who showed that $bs(f) \le O(e^{s(f)} \cdot \sqrt{s(f)})$. Recently, Ambainis et al. [ABG$^+$14] improved this bound to $bs(f) \le 2^{s(f)-1} \cdot s(f)$. Even more recently, Ambainis et al. [APV15] improved this bound slightly to $bs(f) \le 2^{s(f)-1} (s(f) - 1/3)$.

To sum up, while the best known upper bound on the block-sensitivity in terms of sensitivity is exponential, the best known lower bound is quadratic. Indeed, we seem far from understanding the right relation between the two complexity measures.

## 1.1 $\ell$-block sensitivity

All mentioned examples that exhibit quadratic separations between the sensitivity and block sensitivity ([Rub95, Vir11, AS11]) have the property that the maximal block sensitivity is achieved on blocks of size at most 2. For this special case, Kenyon and Kutin [KK04] showed that the block sensitivity is at most $2 \cdot s(f)^2$. Hence, these examples are essentially tight for this subcase.

¹$e_i$ is the vector whose $i$-th entry equals 1 and all other entries equal 0.

Kenyon and Kutin introduced the notion of $\ell$-block sensitivity (denoted $bs_\ell(f)$): the maximal number of disjoint sensitive blocks where each block is of size at most $\ell$. Note that without loss of generality we may consider only sensitive blocks that are minimal with respect to set-inclusion (since otherwise we could have picked smaller blocks that are still disjoint). A well-known fact (cf. [BdW02, Lemma 3]) asserts that any minimal sensitive block for $f$ is of size at most $s(f)$, thus $bs(f) = bs_{s(f)}(f)$. Kenyon and Kutin proved the following inequalities relating the $\ell$-block sensitivities for different values of $\ell$:

$$bs_{\ell}(f) \leq \frac{4}{\ell} \cdot s(f) \cdot bs_{\ell-1}(f) \quad (1)$$

$$bs_{\ell}(f) \leq \frac{e}{(\ell - 1)!} \cdot s(f)^{\ell} \quad (2)$$

for all $2 \leq \ell \leq s(f)$. Plugging in $\ell = s(f)$ gives the aforementioned bound $bs(f) \leq O(e^{s(f)} \cdot \sqrt{s(f)})$.

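As a quick numerical aside (not part of the original argument), one can check that Eq. (2) evaluated at $\ell = s$ indeed behaves like $e^s \cdot \sqrt{s}$ up to a constant; by Stirling's formula the ratio tends to $e/\sqrt{2\pi} \approx 1.08$:

```python
import math

def kk_bound(s):
    # Eq. (2) evaluated at l = s: e / (s-1)! * s^s
    return math.e * s ** s / math.factorial(s - 1)

for s in range(5, 26):
    ratio = kk_bound(s) / (math.e ** s * math.sqrt(s))
    # the ratio approaches e / sqrt(2*pi) ~ 1.084 from below
    assert 0.9 < ratio < 1.2
```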
## 1.2 Our Results

1. In Section 2, we refine the argument of Kenyon and Kutin, giving a better upper bound on the $\ell$-block sensitivity in terms of the $(\ell - 1)$-block sensitivity. We show that

$$bs_{\ell}(f) \leq \frac{e}{\ell} \cdot s(f) \cdot bs_{\ell-1}(f) \quad (3)$$

improving the bound in Eq. (1). On the other hand, Kenyon and Kutin gave examples with $bs_\ell(f) \geq \frac{1}{\ell} \cdot s(f) \cdot bs_{\ell-1}(f)$. Hence, Eq. (3) (and in fact, also Eq. (1)) is tight up to a constant factor. Interestingly, our analysis uses a (very simple) ordinary differential equation.

2. In Section 3, we focus on understanding $bs_3(f)$ in terms of the sensitivity. We show that an upper bound of the form $bs_3(f) \leq s(f)^{3-\epsilon}$ for some constant $\epsilon > 0$ implies a sub-exponential upper bound for the sensitivity conjecture: $\forall f : bs(f) \leq 2^{s(f)^{1-\delta}}$ for some $\delta > 0$. On the other hand, the best known separation (i.e., the aforementioned example by [AS11]) gives examples with $bs_3(f) \geq bs_2(f) \geq \Omega(s(f)^2)$. Thus, improving either the upper or the lower bound for $bs_3(f)$ in terms of $s(f)$ would imply a breakthrough in our understanding of the sensitivity conjecture.

3. In Section 4, we consider an extension of the sensitivity conjecture to bounded functions $f: \{0,1\}^n \to [0,1]$. We show that while Kenyon and Kutin's approach works in this model, it is almost tight, i.e., we give functions for which $bs_\ell(f) = \Omega((s(f)/\ell)^\ell)$. In particular, we give a function with sensitivity $O(\log n)$ and block sensitivity $\Omega(n/\log n)$ – a clear counterexample for the sensitivity conjecture in this model.

4. In Section 5, we find better-than-quadratic separations between the sensitivity and the decision tree complexity. We construct functions based on minterm cyclic functions (as coined by Chakraborty [Cha11]), that were found using computer search. In particular, we give an infinite family of functions $\{f_n\}_{n \in I}$ with $\mathrm{DT}(f_n) = n$ and $s(f_n) = O(n^{0.48})$. In addition, we give an infinite family of functions $\{g_n\}_{n \in I}$ with $s(g_n) = O(\mathrm{DT}(g_n)^{0.473})$.

# 2 Improving The Bound on $bs_\ell$

In this section, we improve the bound on $bs_\ell(f)$ as a function of $bs_{\ell-1}(f)$ and $s(f)$. We start by recalling the analysis of [KK04], and then improve it using new ideas.

## 2.1 Kenyon-Kutin Argument

Let $x \in \{0, 1\}^n$ be a point in the Boolean hypercube and $\mathcal{B}$ a collection of disjoint minimal blocks such that $f(x) \neq f(x + B)$ for every $B \in \mathcal{B}$. We assign weights $w_1 \ge \dots \ge w_\ell \ge 1$ to blocks of size $1, 2, \dots, \ell$ respectively, and we seek to maximize $t(x, \mathcal{B}) = \sum_{B \in \mathcal{B}} w_{|B|}$. Since all weights are at least 1, we have $t(x, \mathcal{B}) \ge |\mathcal{B}|$. Thus, upper bounding the value of $t$ yields an upper bound on the $\ell$-block sensitivity.

We choose $w_1 = w_2 = \dots = w_{\ell-1} = w$ and $w_\ell = 1$ for some parameter $w \ge 1$. Let $(x, \mathcal{B})$ be a point and a collection of disjoint minimal sensitive blocks maximizing $t(\cdot, \cdot)$ w.r.t. the parameter $w$. Let $m_1, \dots, m_\ell$ be the number of blocks of size 1, $\dots$, $\ell$ respectively in $\mathcal{B}$. We have $t(x, \mathcal{B}) = w \cdot (m_1 + \dots + m_{\ell-1}) + m_\ell$.

**Lemma 2.1.** Suppose $(x, \mathcal{B})$ maximize $t(\cdot, \cdot)$ w.r.t. $w \ge 1$ and let $m_1, \dots, m_\ell$ be the number of blocks of size $1, \dots, \ell$ in $\mathcal{B}$ respectively. Then,

$$m_{\ell} \cdot (\ell w - s(f)) \le (m_1 + \dots + m_{\ell-1}) \cdot w \cdot s(f).$$

*Proof.* We derive the above inequality by examining the value of $t(\cdot, \cdot)$ on neighbors of $x$, using the fact that all of these values are at most $t(x, \mathcal{B})$.

Let $B \in \mathcal{B}$ be a block of size $\ell$. By the minimality of $B$, no proper subset of $B$ flips the value of $f$ on $x$. Thus, for each $i \in B$, we have $f(x+e_i) = f(x)$. In addition, the block $B' = B \setminus \{i\}$ is a sensitive block (of size $\ell-1$) for $x+e_i$, but is not a sensitive block for $x$. Consider all such $\ell \cdot m_\ell$ neighbors $y = x+e_i$ where $i \in B$, $B \in \mathcal{B}$ and $|B| = \ell$. Denote by $\mathcal{A}_i$ the collection of all blocks $B''$ in $\mathcal{B}$ such that $f(y) = f(y+B'')$, i.e., the blocks of $\mathcal{B}$ that are no longer sensitive at $y$. For a specific block $B'' \in \mathcal{B}$, we count for how many $y$'s it is not a sensitive block, i.e., $f(y) = f(y+B'')$. Since $f(x) = f(y)$ and $f(x) \neq f(x+B'')$, the block $B''$ is not sensitive for $y = x+e_i$ if and only if $f(x+B'') \neq f(x+B''+e_i)$. In other words, for $B''$ to be non-sensitive on $y = x+e_i$, the coordinate $i$ must be a sensitive coordinate of $x+B''$. Hence, each block $B'' \in \mathcal{B}$ may appear in at most $s(f)$ of the sets $\mathcal{A}_i$.

By our choice of $y = x+e_i$, the block $B' = B \setminus \{i\}$ and the blocks $B'' \in \mathcal{B} \setminus \mathcal{A}_i$ are sensitive for $y$. In order to show that they are disjoint it is enough to show that $B \in \mathcal{A}_i$. This is indeed the case since $x+e_i+B = x+B'$ and, by the minimality of $B$, we have $f(x+e_i+B) = f(x+B') = f(x) = f(x+e_i)$; hence $B$ is not a sensitive block for $x+e_i$. We get that $\{B'\} \cup (\mathcal{B} \setminus \mathcal{A}_i)$ is a family of disjoint sensitive blocks for $x+e_i$.

Using the fact that $t(x, \mathcal{B})$ is maximal, and summing over all neighbors of $x$ considered above, we get

$$
\begin{aligned}
\ell \cdot m_{\ell} \cdot t(x, \mathcal{B}) &\geq \sum_{i \in B, |B|=\ell} t(x+e_i, \{B \setminus \{i\}\} \cup (\mathcal{B} \setminus \mathcal{A}_i)) \\
&\geq \sum_{i \in B, |B|=\ell} \left( w_{\ell-1} + t(x, \mathcal{B}) - \sum_{B'' \in \mathcal{A}_i} w_{|B''|} \right).
\end{aligned}
$$

Rearranging, we get

$$\ell \cdot m_{\ell} \cdot w_{\ell-1} \leq \sum_{i \in B, |B|=\ell} \sum_{B'' \in \mathcal{A}_i} w_{|B''|} = \sum_{B'' \in \mathcal{B}} w_{|B''|} \cdot |\{(i, B) : i \in B, |B| = \ell, B'' \in \mathcal{A}_i\}| \leq \sum_{B'' \in \mathcal{B}} w_{|B''|} \cdot s(f).$$

Substituting $w_1, \dots, w_{\ell-1}$ with $w$ and $w_\ell$ with 1 and rearranging gives

$$m_{\ell} \cdot (\ell w - s(f)) \le (m_1 + \dots + m_{\ell-1}) w s(f)$$

which completes the proof. □

In order to get something meaningful from Lemma 2.1 we need $\ell \cdot w - s(f)$ to be greater than 0. Writing $w$ as $\alpha \cdot s(f)/\ell$, this means that $\alpha > 1$. So we can choose any $\alpha > 1$ and get that the optimal $(m_1, \dots, m_\ell)$ for that $\alpha$ fulfills the following inequality (writing $s$ for $s(f)$):

$$m_{\ell} \le (m_1 + \dots + m_{\ell-1}) \cdot \frac{\alpha \cdot s^2/\ell}{\alpha \cdot s - s} = (m_1 + \dots + m_{\ell-1}) \cdot \frac{s}{\ell} \cdot \frac{\alpha}{\alpha-1}.$$

+
Overall we got that the maximal value of $t(\cdot, \cdot)$ with respect to $w = \frac{\alpha}{\ell} \cdot s(f)$ is at most the value of
|
| 122 |
+
following linear program:
|
| 123 |
+
|
| 124 |
+
$$
\begin{array}{ll}
\text{maximize} & \frac{\alpha \cdot s(f)}{\ell} \cdot (m_1 + \dots + m_{\ell-1}) + m_{\ell} \\
\text{subject to} & m_{\ell} \le \frac{\alpha}{\alpha-1} \cdot \frac{s(f)}{\ell} \cdot (m_1 + \dots + m_{\ell-1}) \\
& (m_1 + \dots + m_{\ell-1}) \le bs_{\ell-1}(f) \\
& m_i \ge 0 \quad \text{for } i = 1, \dots, \ell
\end{array}
$$

Substituting $x_1 \triangleq (m_1+\ldots+m_{\ell-1})/bs_{\ell-1}$ and $x_2 \triangleq m_\ell/(bs_{\ell-1}\cdot s(f)/\ell)$ gives the following equivalent linear program:

$$
\begin{array}{ll}
\text{maximize} & \displaystyle \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f) \cdot (\alpha x_1 + x_2) \\
\text{subject to} & x_2 \le \frac{\alpha}{\alpha-1} \cdot x_1 \\
& x_1 \le 1 \\
& x_i \ge 0 \quad \text{for } i=1,2
\end{array}
$$

The value of this linear program is $\frac{s(f)}{\ell} \cdot bs_{\ell-1}(f) \cdot (\alpha + \frac{\alpha}{\alpha-1})$ (achieved at $x_1 = 1$ and $x_2 = \frac{\alpha}{\alpha-1}$). This value attains its minimum at $\alpha = 2$, which gives a value of $4 \cdot \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f)$ to the LP.

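The choice $\alpha = 2$ can be double-checked numerically; the short aside below minimizes $g(\alpha) = \alpha + \alpha/(\alpha-1)$ on a grid:

```python
# g(alpha) = alpha + alpha/(alpha-1) is the normalized LP value;
# a grid search over alpha > 1 confirms the minimum 4 at alpha = 2
g = lambda a: a + a / (a - 1)
alphas = [1 + i / 10000 for i in range(1, 40000)]
best = min(alphas, key=g)
assert abs(best - 2) < 1e-3
assert abs(g(best) - 4) < 1e-6
```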
What does this mean? It means that $(m_1 + \dots + m_{\ell-1}) \cdot 2s(f)/\ell + m_\ell \le 4 \cdot \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f)$ whenever $m_1, \dots, m_\ell$ are the numbers of disjoint sensitive blocks of sizes $1, \dots, \ell$, respectively. In particular, since $2s(f)/\ell \ge 1$ (because $\ell \le s(f)$ WLOG), this inequality bounds $bs_\ell(f)$ from above by $4 \cdot \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f)$.

## 2.2 Improved Bounds

Kenyon-Kutin [KK04] stopped at this point, seemingly getting the best bound this analysis could offer. This is indeed true if we use only one choice of $\alpha$, however, one can consider using several different $\alpha$'s to get a better bound, as we do next.

For starters, we show that using two different weights $\alpha_1, \alpha_2$ gives better bounds on $bs_\ell(f)$ in terms of the $bs_{\ell-1}(f)$ and $s(f)$. The idea is that the solution for the linear program for a certain $\alpha_1$ implies a new equation for the feasible region of the linear program for $\alpha_2$.

Recall that choosing $\alpha_1 = 2$ implies that $2 \cdot x_1 + x_2 \le 4$. We now rewrite the linear program for an arbitrary $\alpha$ adding this constraint.

$$
\begin{array}{ll}
\text{maximize} & \displaystyle \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f) \cdot (\alpha x_1 + x_2) \\
\text{subject to} & x_2 \leq \frac{\alpha}{\alpha-1} \cdot x_1 \\
& 2 \cdot x_1 + x_2 \leq 4 \\
& x_1 \leq 1 \\
& x_i \geq 0 \quad \text{for } i=1,2
\end{array}
$$

One can check that for $\alpha_2 = \frac{4}{3}$ the optimal value of the LP is $\frac{32}{9} \cdot \frac{s(f)}{\ell} \cdot bs_{\ell-1}(f)$. One can now get a new constraint from the linear program for $\alpha_2$ and continue repeating this process by choosing a sequence of $\alpha$'s. Instead of defining a sequence of $\alpha$'s we will use a continuous strategy.

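The value $\frac{32}{9}$ can be verified by brute force over the feasible region (a numerical aside; for each $x_1$ the best feasible $x_2$ is the minimum of the two upper bounds):

```python
# brute-force the LP at a = 4/3: maximize a*x1 + x2 subject to
# x2 <= (a/(a-1))*x1, 2*x1 + x2 <= 4, 0 <= x1 <= 1, x2 >= 0
a = 4 / 3
best = 0.0
steps = 2000
for i in range(steps + 1):
    x1 = i / steps
    x2 = min(a / (a - 1) * x1, 4 - 2 * x1)  # largest feasible x2 for this x1
    if x2 >= 0:
        best = max(best, a * x1 + x2)
assert abs(best - 32 / 9) < 1e-2  # optimum 32/9 at (x1, x2) = (2/3, 8/3)
```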
**Theorem 2.2.** $\forall f: bs_\ell(f) \le \frac{e}{\ell} \cdot s(f) \cdot bs_{\ell-1}(f)$.

*Proof.* We calculate the optimal value for $\alpha$ given an optimal value for $\alpha + \delta$, for an infinitesimally small $\delta > 0$. Let $\text{OPT}(\alpha)$ be the optimal value of $t(\cdot, \cdot)$ for parameter $\alpha$, and, in order to avoid carrying the multiplicative factor of $bs_{\ell-1}(f) \cdot \frac{s(f)}{\ell}$, let $F(\alpha) = \frac{\text{OPT}(\alpha)}{bs_{\ell-1}(f) \cdot s(f)/\ell}$. The value of the following linear program upper bounds $F(\alpha)$:

$$
\begin{array}{ll}
\text{maximize} & \alpha \cdot x_1 + x_2 \\
\text{subject to} & x_2 \le \frac{\alpha}{\alpha-1} \cdot x_1 \\
& x_1 \le 1 \\
& x_i \ge 0 \quad \text{for } i = 1, 2
\end{array}
\qquad (7)
$$

By the definition of $F(\alpha)$ as the normalized optimal value of $t(\cdot, \cdot)$ w.r.t. $\alpha$, we get a new linear constraint $\alpha \cdot x_1 + x_2 \le F(\alpha)$ for all feasible $(x_1, x_2)$. We wish to invoke the constraint given by $\alpha + \delta$ in the linear program upper-bounding $F(\alpha)$, for an infinitesimally small $\delta > 0$.

$$
F(\alpha) \le \left\{
\begin{array}{ll}
\text{maximize} & \alpha \cdot x_1 + x_2 \\
\text{subject to} & x_2 \le \frac{\alpha}{\alpha-1} \cdot x_1 \\
& (\alpha + \delta) \cdot x_1 + x_2 \le F(\alpha + \delta) \\
& x_1 \le 1 \\
& x_i \ge 0 \quad \text{for } i=1,2
\end{array}
\right. \tag{8}
$$

Let $(x_1^{\text{OPT}}, x_2^{\text{OPT}})$ be the optimal point for the above LP. In the above LP, $x_2$ is upper bounded by two linear functions on $x_1$:

$$
x_2 \le \frac{\alpha}{\alpha - 1} \cdot x_1 \quad \text{and} \quad x_2 \le F(\alpha + \delta) - (\alpha + \delta) \cdot x_1.
$$

Since one linear function is increasing and the other is decreasing, the optimal value is achieved either at the intersection of these two lines or at $x_1 = 1$. The intersection point of the two lines, denoted by $x_1^{\text{int}}$ is given by

$$
x_1^{\text{int}} = \frac{F(\alpha + \delta)}{\frac{\alpha}{\alpha-1} + \alpha + \delta}.
$$

$x_1^{\text{int}}$ is smaller than 1 for $\alpha > 1$ since $F(\alpha + \delta) \le \frac{\alpha+\delta}{(\alpha+\delta)-1} + \alpha + \delta$ and $\frac{x}{x-1}$ is decreasing for $x > 1$. After the intersection, $x_2$ decreases faster than $\alpha \cdot x_1$ increases, hence the optimal value of the LP is achieved at the intersection, $x_1^{\text{OPT}} = x_1^{\text{int}}$. The optimal value of $x_2$ is given by $x_2^{\text{OPT}} = \frac{\alpha}{\alpha-1} \cdot x_1^{\text{OPT}}$, which yields

$$
\begin{align*}
F(\alpha) &\le x_1^{\text{OPT}} \cdot \alpha + x_2^{\text{OPT}} = x_1^{\text{OPT}} \cdot \left( \frac{\alpha}{\alpha-1} + \alpha \right) \\
&= \frac{F(\alpha + \delta)}{\frac{\alpha}{\alpha-1} + \alpha + \delta} \cdot \left( \frac{\alpha}{\alpha-1} + \alpha \right) \\
&= F(\alpha + \delta) \cdot \left( 1 - \frac{\delta}{\frac{\alpha}{\alpha-1} + \alpha + \delta} \right).
\end{align*}
$$

Rearranging the equation gives

$$
\frac{F(\alpha + \delta) - F(\alpha)}{\delta} \leq \frac{F(\alpha + \delta)}{\frac{\alpha}{\alpha-1} + \alpha + \delta},
$$

and as $\delta$ tends to 0 we get $F'(\alpha) \le \frac{F(\alpha)}{\frac{\alpha}{\alpha-1}+\alpha} = F(\alpha) \cdot \frac{\alpha-1}{\alpha^2}$. The solution of this ODE is $F(\alpha) \le \alpha \cdot e^{\frac{1}{\alpha}} \cdot c$ for some constant $c > 0$. Taking as initial condition, for $\alpha \gg 1$, $F(\alpha) \le \alpha + \frac{\alpha}{\alpha-1}$ gives

$$c \le \frac{F(\alpha)}{\alpha \cdot e^{\frac{1}{\alpha}}} \le \frac{\alpha \cdot \left(1 + \frac{1}{\alpha-1}\right)}{\alpha \cdot e^{\frac{1}{\alpha}}} \xrightarrow{\alpha \to \infty} 1.$$

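One can verify numerically that $F(\alpha) = \alpha \cdot e^{1/\alpha}$ (i.e., $c = 1$) satisfies the ODE with equality and recovers the constant $e$ as $\alpha \to 1^+$ (an illustrative aside):

```python
import math

F = lambda a: a * math.exp(1 / a)          # the claimed solution with c = 1
rhs = lambda a: F(a) * (a - 1) / a ** 2    # right-hand side of the ODE

for a in [1.2, 1.5, 2.0, 3.0, 10.0]:
    h = 1e-6
    deriv = (F(a + h) - F(a - h)) / (2 * h)  # central-difference derivative
    assert abs(deriv - rhs(a)) < 1e-4

# the limit as alpha -> 1+ recovers the constant e
assert abs(F(1 + 1e-9) - math.e) < 1e-6
```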
Hence, $F(\alpha) \le \alpha \cdot e^{\frac{1}{\alpha}}$. When $\alpha$ approaches 1 we get $\lim_{\alpha \to 1^+} F(\alpha) \le e$, thus $bs_\ell(f) \le \frac{e}{\ell} \cdot s(f) \cdot bs_{\ell-1}(f)$ completing the proof. $\square$

As a special case, Theorem 2.2 implies that $bs_2(f) \le \frac{e}{2} \cdot s(f)^2$, which leads us to the following open problem.

**Open Problem 1.** *What is the smallest constant $c > 0$ such that $bs_2(f) \le c \cdot s(f)^2$ for all Boolean functions?*

An example with $bs_2(f) = \frac{2}{3} \cdot s(f)^2 \cdot (1 - o(1))$ is given in [AS11], thus $\frac{2}{3} \le c \le \frac{e}{2}$.

# 3 Understanding $bs_3(f)$ is Important

As the upper and lower bounds for $bs_2(f)$ are almost matching, it seems that the next challenge is understanding the asymptotic behavior of $bs_3(f)$. A more modest challenge is the following.

**Open Problem 2.** *Improve either the upper or lower bound on $bs_3(f)$.*

Recall that the upper bound on $bs_3(f)$ is $O(s(f)^3)$ (see Eq. (2)) and the lower bound is $(2/3) \cdot s(f)^2 \cdot (1-o(1))$. It is somewhat surprising that any slight improvement on either the lower or upper bound on $bs_3$ would be a significant step forward in our understanding of the general question. The following claim shows that a slightly better than quadratic gap on a single example implies a better than quadratic gap on an infinite family of examples.

**Claim 3.1.** *If there exists a function such that $bs_3(f) > s(f)^2$, then there exists a family of functions $\{f_n\}_{n \in \mathbb{N}}$ with $bs(f_n) > s(f_n)^{2+\epsilon}$ for some constant $\epsilon > 0$ (dependent on $f$).*

This family is simply $f_1 = f$, $f_n = f \circ f_{n-1}$ where $\circ$ stands for Boolean function composition as in [Tal13]. Next, we prove a theorem exhibiting the self-reducibility nature of the problem.

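Function composition of this kind is easy to experiment with by brute force; the sketch below (our helper names) composes AND with itself and checks that sensitivity multiplies in this simple case (in general only $s(f \circ g) \le s(f) \cdot s(g)$ is guaranteed):

```python
def compose(f, g, k, m):
    # h = f o g on k*m bits: apply g to each consecutive m-bit chunk, feed into f
    def h(x):
        bits = 0
        for i in range(k):
            chunk = (x >> (i * m)) & ((1 << m) - 1)
            bits |= int(bool(g(chunk))) << i
        return f(bits)
    return h

def sensitivity(f, n):
    return max(
        sum(f(x) != f(x ^ (1 << i)) for i in range(n))
        for x in range(2 ** n)
    )

and2 = lambda x: x == 0b11          # AND of 2 bits
and4 = compose(and2, and2, 2, 2)    # AND of 4 bits via composition
assert sensitivity(and2, 2) == 2
assert sensitivity(and4, 4) == 4    # s(f o f) = s(f)^2 for this f
```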
**Theorem 3.2.** Let $k, \ell, a \in \mathbb{N}$ such that $\ell > k$ and let $T: \mathbb{N} \to \mathbb{R}$ be a monotone function. *If $\forall f : bs_\ell(f) \le T(bs_k(f))$, then $\forall f' : bs_{\ell a}(f') \le T(bs_{ka}(f'))$.*

*Proof.* Assume by contradiction that there exists a function $f'$ such that $bs_{\ell a}(f') > T(bs_{ka}(f'))$. We will show that there exists a function $f$ such that $bs_\ell(f) > T(bs_k(f))$. We shall assume WLOG that the maximal $bs_{\ell a}$ of $f'$ is achieved on $\vec{0}$. Let $B_1, B_2, \dots, B_m$ be a family of disjoint sensitive blocks for $f'$ at $\vec{0}$, each $B_i$ of size at most $\ell a$. Split every block $B_i$ into $\ell$ sets $B_{i,1}, \dots, B_{i,\ell}$ of size at most $a$. The function $f$ will have a variable $x_{i,j}$ corresponding to every set $B_{i,j}$. The value of $f(x_{1,1}, \dots, x_{m,\ell})$ is defined to be the value of $f'$ where all the variables in $B_{i,j}$ equal $x_{i,j}$, and all other variables equal 0. Then $bs_\ell(f, \vec{0}) \ge bs_{\ell a}(f', \vec{0})$, since the sensitive blocks $B_1, \dots, B_m$ for $f'$ correspond to disjoint sensitive blocks $B'_1, \dots, B'_m$ for $f$, each of size at most $\ell$, where $B'_i = \{x_{i,j} : j \in [\ell]\}$.

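The variable-identification step of the proof can be illustrated on a toy example (hypothetical helper names): each sub-block $B_{i,j}$ collapses to a single variable of $f$, so a sensitive block of size $\ell a$ for $f'$ becomes a sensitive block of size $\ell$ for $f$.

```python
def identify(fprime, subblocks):
    # f(y): set all coordinates of the j-th sub-block to bit j of y, rest to 0
    def f(y):
        x = 0
        for j, block in enumerate(subblocks):
            if (y >> j) & 1:
                for coord in block:
                    x |= 1 << coord
        return fprime(x)
    return f

# f' on 4 bits is sensitive at 0 to the single block {0,1,2,3} of size 4
fprime = lambda x: x == 0b1111
# split the block into ell = 2 sub-blocks of size a = 2
f = identify(fprime, [[0, 1], [2, 3]])
# the size-4 block for f' becomes the size-2 block {y_0, y_1} for f
assert f(0b00) == fprime(0b0000) == False
assert f(0b11) == fprime(0b1111) == True
```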
On the other hand, any set of disjoint sensitive blocks of size at most $k$ for $f$ corresponds to a disjoint set of sensitive blocks of size at most $ka$ for $f'$. Thus $bs_k(f) \le bs_{ka}(f')$, giving

$$T(bs_k(f)) \le T(bs_{ka}(f')) < bs_{\ell a}(f') \le bs_\ell(f),$$

where we used the monotonicity of $T$ in the first inequality. $\square$

Using Theorem 3.2 we get that any upper bound of the form $bs_\ell(f) \le s(f)^{\ell-\epsilon}$ implies a subexponential upper bound on $bs(f)$ in terms of $s(f)$.

**Theorem 3.3.** Let $k \in \mathbb{N}, \varepsilon > 0$ be constants. If for all Boolean functions $bs_k(f) \le s(f)^{k-\varepsilon}$, then for the constant $\gamma = \frac{\log(k-\varepsilon)}{\log(k)} < 1$ it holds that $bs(f) \le 2^{O(s(f)^{\gamma} \cdot \log s(f))}$ for all $f$.

For example, Theorem 3.3 shows that if $\forall f : bs_3(f) \le s(f)^2$, then $\forall f : bs(f) \le 2^{O(s^{0.631} \cdot \log(s))}$.

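The constant $0.631$ in the example is just $\log(2)/\log(3)$, as a quick sanity check confirms:

```python
import math

def gamma(k, eps):
    # gamma = log(k - eps) / log(k) from Theorem 3.3
    return math.log(k - eps) / math.log(k)

# k = 3, eps = 1: bs_3(f) <= s(f)^2 gives bs(f) <= 2^{O(s^0.631 log s)}
assert abs(gamma(3, 1) - 0.6309) < 1e-3
```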
*Proof.* Using the hypothesis and Theorem 3.2, one can show by induction on $t$ that

$$ \forall f : bs_{k^t}(f) \le s(f)^{(k-\epsilon)^t}. \qquad (9) $$

The base case $t=1$ is simply the hypothesis. We assume the claim is true for $1, \dots, t-1$, and show that it is true for $t$. Using Theorem 3.2 with $T(x) = x^{k-\epsilon}$ and $a = k^{t-1}$ we get

$$bs_{k^t}(f) \le T(bs_{k^{t-1}}(f)) = (bs_{k^{t-1}}(f))^{k-\epsilon}.$$

By induction, $bs_{k^{t-1}}(f) \le s(f)^{(k-\epsilon)^{t-1}}$. Hence, we get $bs_{k^t}(f) \le s(f)^{(k-\epsilon)^t}$, which finishes the induction proof.

Fix $f$ and let $s = s(f)$. Recall that $bs(f) = bs_s(f)$ since each minimal block that flips the value of $f$ is of size at most $s$. Hence,

$$bs(f) = bs_s(f) \le bs_{k^{\lceil \log_k(s) \rceil}}(f) \le s^{(k-\epsilon)^{\lceil \log_k(s) \rceil}} \le s^{(k-\epsilon)^{\log_k(s)+1}} = 2^{(k-\epsilon) \cdot \log(s) \cdot s^{\log(k-\epsilon)/\log(k)}} = 2^{O(s^\gamma \log(s))}. \quad \square$$

# 4 The Sensitivity Conjecture for Bounded Functions

In this section, we generalize the definitions of sensitivity and block sensitivity to bounded functions $f: \{0,1\}^n \to [0,1]$, extending the definitions for Boolean functions. We generalize the result of Kenyon and Kutin to this setting (after removing some trivial obstacles). Given that, one may hope that the sensitivity conjecture holds also for bounded functions, i.e., that the block-sensitivity is at most polynomial in the sensitivity. However, we give a counterexample to this question, by constructing functions on $n$ variables with sensitivity $O(\log n)$ and block sensitivity $n/\log(n)$. In fact, we show that the result of Kenyon and Kutin is essentially tight by giving examples for which $bs_\ell(f) = n/\ell$ and $s(f) = O(\ell \cdot n^{1/\ell})$ for any $\ell \le \log n$.

We begin by generalizing the definitions of sensitivity and block-sensitivity. For $f: \{0,1\}^n \to [0,1]$ and $x \in \{0,1\}^n$, we denote the sensitivity of $f$ at a point $x$ by

$$ s(f,x) = \sum_{i=1}^{n} |f(x) - f(x \oplus e_i)|. \qquad (10) $$

Similarly we define the block sensitivity and $\ell$-block sensitivity as

$$ bs(f,x) = \max \left\{ \sum_i |f(x) - f(x + B_i)| : B_1, \dots, B_k \subseteq [n] \text{ are disjoint} \right\}. \qquad (11) $$

and

$$ bs_\ell(f,x) = \max \left\{ \sum_i |f(x) - f(x + B_i)| : B_1, \dots, B_k \subseteq [n] \text{ are disjoint and } \forall i, |B_i| \le \ell \right\}. $$

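As in the Boolean case, these generalized measures can be evaluated by brute force on tiny examples (a sketch with our own helper names); for the "fraction of ones" function both measures equal 1, matching Eq. (10) and Eq. (11):

```python
def sensitivity(f, n):
    # Eq. (10): sum of |f(x) - f(x + e_i)| over coordinates, maximized over x
    return max(
        sum(abs(f(x) - f(x ^ (1 << i))) for i in range(n))
        for x in range(2 ** n)
    )

def block_sensitivity(f, n):
    # Eq. (11): maximize the sum of |f(x) - f(x + B_i)| over disjoint blocks
    # (blocks with zero contribution are allowed; they never change the max)
    masks = range(1, 2 ** n)

    def best(x, used):
        return max(
            (abs(f(x) - f(x ^ m)) + best(x, used | m)
             for m in masks if not (m & used)),
            default=0.0,
        )

    return max(best(x, 0) for x in range(2 ** n))

frac = lambda x: bin(x).count("1") / 3  # fraction of ones on n = 3 bits
assert abs(sensitivity(frac, 3) - 1.0) < 1e-9
assert abs(block_sensitivity(frac, 3) - 1.0) < 1e-9
```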
Naturally, we denote $s(f) = \max_x s(f, x)$, $bs(f) = \max_x bs(f, x)$ and $bs_\ell(f) = \max_x bs_\ell(f, x)$. It is easy to see that for a Boolean function these definitions match the standard definitions of sensitivity, block sensitivity and $\ell$-block sensitivity.

We wish to prove an analog of the Kenyon-Kutin result, showing that $bs_{\ell}(f) \le c_{\ell} \cdot s(f)^{\ell}$. However, stated as is, the claim is false for a "silly" reason. Take any Boolean function $f$ with a gap between the sensitivity and the $\ell$-block sensitivity and take $g(x) = f(x)/s(f)$. Then, we get $s(g) = 1$ and $bs_{\ell}(g) = bs_{\ell}(f)/s(f)$. As there are examples with $bs_2(f) = n/2$ and $s(f) = \sqrt{n}$, we get that $bs_2(g) = \sqrt{n}/2$ while $s(g) = 1$, where $n$ grows to infinity. This seems to rule out any relation between the sensitivity and block sensitivity (and even 2-block sensitivity) in the case of bounded functions. To overcome this triviality, we insist that the block sensitivity is close to $n$, or alternatively that changing each block dramatically changes the value of the function. Surprisingly, under this requirement we are able to retrieve known relations between sensitivity and block sensitivity that were established in the Boolean setting by Kenyon and Kutin [KK04].

**Theorem 4.1.** Let $c > 0$ and $f : \{0,1\}^n \to [0,1]$. Assume that there exists a point $x_0 \in \{0,1\}^n$ and disjoint blocks $B_1, \dots, B_k$ of size at most $\ell$ such that $|f(x_0) - f(x_0 + B_i)| \ge c$ for all $i \in [k]$. Furthermore, assume that $2 \le \ell \le \log(k)$. Then, $s(f) \ge \Omega(k^{1/\ell} \cdot c)$.

We get the following corollary, whose proof is deferred to Appendix A.

**Corollary 4.2.** Let $f: \{0,1\}^n \to [0,1]$ with $bs(f) \ge n/\ell$. Then, $s(f) \ge \Omega(n^{1/2\ell}/\ell)$.

Unlike in the Boolean case, we are able to show that Theorem 4.1 is essentially tight! That is, for any $\ell$ and $n$ we have a construction with $bs_\ell(f) \ge n/\ell$ and $s(f) = O(\ell \cdot n^{1/\ell})$. In particular, picking $\ell = \log(n)$ gives an exponential separation between block sensitivity (which is at least $n/\log n$) and sensitivity (which is $O(\log n)$).

**Theorem 4.3.** Let $\ell, n \in \mathbb{N}$ with $2 \le \ell \le n$. Then, there exists a function $h : \{0,1\}^n \to [0,1]$ with $bs_\ell(h) \ge \lfloor n/\ell \rfloor$ and $s(h) \le 3 \cdot \ell \cdot n^{1/\ell}$.

## 4.1 Proof of Kenyon-Kutin Result for Bounded Functions

**Proof Overview.** We start by giving a new proof of the Kenyon-Kutin result, based on random walks on the hypercube. We assume by contradiction that $f(x_0) = 0$ and $f(x_0 + B_i) = 1$ for all $i \in [k]$ and that the sensitivity is $o(k^{1/\ell})$. Taking a random walk of length $r = n/k^{1/\ell}$ starting from $x_0$ ends at a point $y$ where with high probability $f(y) = f(x_0)$. This is true since in each step, with probability at least $1-s(f)/n$, we maintain the value of $f$; hence, by a union bound, with probability at least $1-r \cdot s(f)/n$ we maintain the value of $f$ throughout the entire walk. In contrast, choosing a random $i \in [k]$ and taking a random walk of length $r - |B_i|$ starting from $(x_0 + B_i)$ leads to a point $y'$ where with high probability $f(y') = f(x_0 + B_i) = 1$. However, as we show in the proof below, the distributions of $y$ and $y'$ are similar (close in statistical distance). This leads to a contradiction, as $f(y)$ tends to be equal to 0 and $f(y')$ tends to be equal to 1.

A simple observation, which allows us to generalize the argument above to bounded functions, is that for a given point $x \in \{0,1\}^n$ and a random neighbor $y \sim x$ in the hypercube, the expected value of $f(y)$ is close to $f(x)$. This follows from Eq. (10). Thus, the only difference in the argument for bounded functions will be that $\mathbf{E}[f(y)]$ is close to 0 and $\mathbf{E}[f(y')]$ is close to 1, leading to a contradiction as well.

*Proof of Theorem 4.1.* First, we make a few assumptions that are without loss of generality, in order to make the argument later clearer. We assume $x_0 = 0^n$ and $f(x_0) = 0$. We assume $n = k \cdot \ell$
|
| 346 |
+
---PAGE_BREAK---
|
| 347 |
+
|
| 348 |
+
and that the blocks are given by $B_i = \{(i-1)\ell + 1, \dots, i\ell\}$ for $i \in [k]$. We assume that $c=1$, since for $c<1$ one can take $f'(x) = \min\{f(x)/c, 1\}$, and note that $f'$ is a bounded function with $f'(x_0 + B_i) = 1$. Proving the theorem for $f'$ gives $s(f) \ge s(f') \cdot c \ge \Omega(c \cdot k^{1/\ell})$.

Let $r = \lfloor \frac{n}{(2k)^{1/\ell}} \rfloor$; by the assumption $2 \le \ell \le \log(k)$ we have $\sqrt{n} \le r \le n/2$. Assume for contradiction that $s(f) \le \varepsilon \cdot k^{1/\ell}$ for some sufficiently small constant $\varepsilon > 0$ to be determined later. Consider the following two random processes.

**Algorithm 1 Process A**

1: $X_0 \leftarrow 0^n$
2: **for** $t = 1, \dots, r$ **do**
3: Select a random $i \in [n]$ among the coordinates for which $X_{t-1}$ is 0 and let $X_t \leftarrow X_{t-1} + e_i$.
4: **end for**

**Algorithm 2 Process B**

1: Select uniformly $i \in [k]$ and let $Y_0 \leftarrow B_i$
2: **for** $t = 1, \dots, r - \ell$ **do**
3: Select a random $i \in [n]$ among the coordinates for which $Y_{t-1}$ is 0 and let $Y_t \leftarrow Y_{t-1} + e_i$.
4: **end for**
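
The two processes can be sketched as a short simulation; this is illustrative only (not part of the proof), with blocks $B_i$ represented 0-indexed as $\{(i-1)\ell, \dots, i\ell-1\}$ and the function names our own.

```python
import random

def process_a(n, r):
    """Process A: starting from 0^n, set r distinct random coordinates to 1."""
    x = set()  # support of X_t
    for _ in range(r):
        i = random.choice([j for j in range(n) if j not in x])
        x.add(i)
    return x  # support of X_r

def process_b(n, k, l, r):
    """Process B: start from a uniformly random block B_i,
    then add r - l distinct random coordinates outside it."""
    i = random.randrange(k)
    y = set(range(i * l, (i + 1) * l))  # block B_i, 0-indexed
    for _ in range(r - l):
        j = random.choice([c for c in range(n) if c not in y])
        y.add(j)
    return y  # support of Y_{r-l}
```

Both outputs have Hamming weight exactly $r$; Claim 4.4 below compares their conditional distributions.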

For each $t \in \{0, \dots, r-1\}$, we claim that

$$
\begin{align*}
\mathbf{E}[f(X_{t+1}) - f(X_t)] &= \mathbf{E} \left[ \frac{1}{n-t} \cdot \sum_{i:(X_t)_i=0} (f(X_t + e_i) - f(X_t)) \right] \\
&\le \frac{1}{n-t} \cdot \mathbf{E}[s(f, X_t)] \le \frac{s(f)}{n-t}.
\end{align*}
$$

By telescoping, this implies that

$$
\mathbf{E}[f(X_r)] = \mathbf{E}[f(X_0)] + \sum_{t=0}^{r-1} \mathbf{E}[f(X_{t+1}) - f(X_t)] \le 0 + \frac{r \cdot s(f)}{n-r} \le O(\varepsilon).
$$

In a symmetric fashion, for each $t \in \{0, \dots, r-\ell-1\}$ we have $\mathbf{E}[f(Y_{t+1}) - f(Y_t)] \ge -\frac{s(f)}{n-t-\ell}$. Again, telescoping implies that

$$
\mathbf{E}[f(Y_{r-\ell})] \geq \mathbf{E}[f(Y_0)] - \frac{(r-\ell) \cdot s(f)}{n-r} \geq 1 - \frac{r \cdot s(f)}{n-r} \geq 1 - O(\varepsilon).
$$

So it seems that the distributions of $X_r$ and $Y_{r-\ell}$ are very different from one another. However, we shall show that, conditioned on a probable event, $X_r$ and $Y_{r-\ell}$ are identically distributed. To define the event, consider the sets

$$
U_i = \{\mathbb{1}_A \mid A \subseteq [n], |A| = r, B_i \subseteq A, \forall j \neq i : B_j \not\subseteq A\}
$$

for $i \in [k]$ and their union

$$
U = \bigcup_{i=1}^{k} U_i = \{\mathbb{1}_A \mid A \subseteq [n], |A| = r, \exists! i \in [k] : B_i \subseteq A\}.
$$

Let $E_X$ be the event that $X_r \in U$, and $E_Y$ be the event that $Y_{r-\ell} \in U$. We show the following.

**Claim 4.4.** The following hold:

1. $X_r|E_X$ is identically distributed as $Y_{r-\ell}|E_Y$.

2. $\mathbf{Pr}[E_Y] = \Omega(1)$.

3. $\mathbf{Pr}[E_X] = \Omega(1)$.

We defer the proof of Claim 4.4 to later. We derive a contradiction from all of the above by showing that $\mathbf{E}[f(X_r)|E_X] < \mathbf{E}[f(Y_{r-\ell})|E_Y]$ (this is indeed a contradiction: by the claim, $X_r|E_X$ and $Y_{r-\ell}|E_Y$ are identically distributed, hence the expected values of $f(\cdot)$ on them should be equal). To show this, we note that

$$
\begin{align*}
\mathbf{E}[f(X_r)|E_X] &= \mathbf{E}[f(X_r) \cdot 1_{E_X}]/\mathbf{Pr}[E_X] \\
&\leq \mathbf{E}[f(X_r)]/\mathbf{Pr}[E_X] = O(\mathbf{E}[f(X_r)]) = O(\varepsilon).
\end{align*}
$$

On the other hand,

$$
\begin{align*}
\mathbf{E}[f(Y_{r-\ell})|E_Y] &= 1 - \mathbf{E}[1 - f(Y_{r-\ell})|E_Y] \\
&\geq 1 - \mathbf{E}[1 - f(Y_{r-\ell})]/\mathbf{Pr}[E_Y] = 1 - O(\mathbf{E}[1 - f(Y_{r-\ell})]) = 1 - O(\varepsilon).
\end{align*}
$$

Choosing $\varepsilon$ to be a small enough constant implies that $\mathbf{E}[f(X_r)|E_X] < \mathbf{E}[f(Y_{r-\ell})|E_Y]$, which completes the proof. $\square$

*Proof of Claim 4.4.* In the proofs of Items 2 and 3 we shall use the fact that $1/3 \le r^\ell k / n^\ell \le 1/2$, which follows from the choice of $r = \lfloor n/(2k)^{1/\ell} \rfloor$ (for large enough $n$ and $k$).

1. First note that $X_r$ is distributed uniformly over the set of vectors in $\{0,1\}^n$ with Hamming weight $r$. In particular, conditioning on $X_r$ lying in a set $U$ of such vectors makes it uniform over $U$. We are left to show that $Y_{r-\ell}|E_Y$ is distributed uniformly over $U$. Given that $Y_0 = B_i$, we have that $Y_{r-\ell}$ is the OR of $\mathbb{1}_{B_i}$ with a random vector of weight $r-\ell$ on $[n] \setminus B_i$. Conditioned on $E_Y$, the only way to reach $U_i$ is if $Y_0 = B_i$; hence, by the above, all points in $U_i$ are attained with the same probability. Using symmetry, all points in $U = \cup_i U_i$ are attained with the same probability.

2. Let $B_i$ be the block selected in the first step of Process B. We analyze the probability that all indices in $B_j$ for some $j \neq i$ are chosen in the $r-\ell$ iterations of Process B.

$$
\begin{align*}
\mathbf{Pr}[B_j \text{ is selected}] &= \frac{(\# \text{ of sequences where } B_j \text{ is selected})}{(\# \text{ of sequences})} \\
&= \frac{(r-\ell)^{\underline{\ell}} \cdot (n-2\ell)^{\underline{r-2\ell}}}{(n-\ell)^{\underline{r-\ell}}} = \frac{(r-\ell)!(n-2\ell)!(n-r)!}{(r-2\ell)!(n-r)!(n-\ell)!} \\
&= \frac{(r-\ell)!(n-2\ell)!}{(r-2\ell)!(n-\ell)!} = \frac{(r-\ell)\cdots(r-2\ell+1)}{(n-\ell)\cdots(n-2\ell+1)} \le \left(\frac{r}{n}\right)^\ell
\end{align*}
$$

(recall that $n^{\underline{k}} \triangleq \frac{n!}{(n-k)!}$ denotes the falling factorial). Hence, $\mathbf{Pr}[\exists j \neq i : B_j \text{ is selected}] \leq k \cdot (r/n)^\ell \leq 1/2$ and we have $\mathbf{Pr}[E_Y] \geq 1/2$.

3. Let $\pi_1, \dots, \pi_r \in [n]$ be the sequence of choices made by Process A. For $i \in [k]$, let $E_{X,i}$ be the event that $X_r \in U_i$. By the uniqueness of the block contained in $X_r$, the events $E_{X,i}$ are disjoint, hence $\mathbf{Pr}[E_X] = \sum_{i=1}^k \mathbf{Pr}[E_{X,i}]$. By symmetry, $\mathbf{Pr}[E_X] = k \cdot \mathbf{Pr}[E_{X,1}]$. The event $E_{X,1}$ is simply the event that there exists a set $S \subseteq [r]$ of size $\ell$ such that $\{\pi_j\}_{j \in S} = B_1$ and the sequence $\{\pi_j : j \in [r] \setminus S\}$ is a sequence of choices for which $E_Y$ holds, when starting Process B from $Y_0 = B_1$. This shows that $\Pr[E_{X,1}] = \Pr[E_Y|Y_0 = B_1] \cdot \Pr[B_1 \subseteq \{\pi_1, \dots, \pi_r\}]$. By symmetry, $\Pr[E_Y|Y_0 = B_i] = \Pr[E_Y] = \Omega(1)$ from the previous item. In addition,

$$
\begin{align*}
\mathbf{Pr}[B_1 \subseteq \{\pi_1, \dots, \pi_r\}] &= \frac{r^{\underline{\ell}} \cdot (n-\ell)^{\underline{r-\ell}}}{n^{\underline{r}}} = \frac{r!(n-\ell)!(n-r)!}{(r-\ell)!(n-r)!n!} \\
&= \frac{r!(n-\ell)!}{(r-\ell)!n!} = \frac{r \cdots (r-\ell+1)}{n \cdots (n-\ell+1)} \geq \left(\frac{r-\ell}{n}\right)^{\ell} \\
&= \left(\frac{r}{n}\right)^{\ell} \cdot (1-\ell/r)^{\ell} = \left(\frac{r}{n}\right)^{\ell} \cdot (1-o(1))
\end{align*}
$$

where $(1 - \ell/r)^{\ell} = 1 - o(1)$ follows from $\ell \le \log(k)$ and $r \ge \sqrt{n} \ge \sqrt{k}$. Thus,

$$
\begin{align*}
\mathbf{Pr}[E_X] &= k \cdot \mathbf{Pr}[E_{X,1}] = k \cdot \mathbf{Pr}[B_1 \subseteq \{\pi_1, \dots, \pi_r\}] \cdot \mathbf{Pr}[E_Y | Y_0 = B_1] \\
&\geq k \cdot \left(\frac{r}{n}\right)^{\ell} \cdot (1-o(1)) \cdot \frac{1}{2} \geq \frac{1}{3} \cdot (1-o(1)) \cdot \frac{1}{2} = \Omega(1). \quad \square
\end{align*}
$$

## 4.2 Separating Sensitivity and Block Sensitivity of Bounded Functions

**The Lattice Variant of The Sensitivity Conjecture.** The proof of Theorem 4.3 is more natural in the lattice variant of the sensitivity conjecture, as suggested by Aaronson (see [Bop12]). In this variant, instead of functions over $\{0, 1\}^n$ we consider functions over $\{0, 1, \dots, \ell\}^k$ for $\ell, k \in \mathbb{N}$. Given a function $g: \{0, 1, \dots, \ell\}^k \to \mathbb{R}$ one can define a Boolean function $f: \{0, 1\}^{\ell \cdot k} \to \mathbb{R}$ by the following equation:

$$ f(x_{1,1}, \dots, x_{k,\ell}) = g\left(\sum_{i=1}^{\ell} x_{1,i}, \dots, \sum_{i=1}^{\ell} x_{k,i}\right). \qquad (12) $$

For a point $y \in \{0, 1, \dots, \ell\}^k$ and a function $g: \{0, \dots, \ell\}^k \to \mathbb{R}$ one can define the sensitivity of $g$ at $y$ as

$$ s(g, y) = \sum_{y' \sim y} |g(y') - g(y)| $$

where $y' \sim y$ if $y' \in \{0, \dots, \ell\}^k$ is a neighbor of $y$ in the grid $\{0, \dots, \ell\}^k$, i.e., if $y$ and $y'$ agree on all coordinates except for one coordinate, say $j \in [k]$, on which $|y_j - y'_j| = 1$. The following claim relates the sensitivity of $f$ to that of $g$.

**Claim 4.5.** Let $g: \{0, \dots, \ell\}^k \to \mathbb{R}$ and let $f$ be the function defined by Eq. (12). Then $s(f) \leq \ell \cdot s(g)$.

*Proof.* Let $x = (x_{1,1},\dots,x_{k,\ell}) \in \{0,1\}^{k\ell}$ and let $x' \in \{0,1\}^{k\ell}$ be a neighbor of $x$, obtained by flipping the $(i,j)$-th coordinate. Let $y = (\sum_{i=1}^{\ell} x_{1,i},\dots, \sum_{i=1}^{\ell} x_{k,i})$ and similarly let $y' = (\sum_{i=1}^{\ell} x'_{1,i},\dots, \sum_{i=1}^{\ell} x'_{k,i})$. Then $y$ and $y'$ differ only on the $i$-th coordinate, and on this coordinate they differ by $\pm 1$. If $y'_i = y_i + 1$, then the number of neighbors $x' \sim x$ that are mapped to $y'$ equals the number of zeros in the $i$-th block of $x$, i.e., it equals $\ell - y_i$. Similarly, if $y'_i = y_i - 1$, the number of $x' \sim x$ mapped to $y'$ equals $y_i$. In both cases, between 1 and $\ell$ points $x' \sim x$ are mapped to each neighbor $y' \sim y$. Thus,

$$
\sum_{x' \sim x} |f(x') - f(x)| = \sum_{x' \sim x} |g(y') - g(y)| \leq \ell \cdot \sum_{y' \sim y} |g(y') - g(y)|. \quad \square
$$
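
As a sanity check of Eq. (12), the lifting of a lattice function $g$ to a Boolean function $f$ can be written as a small helper (`lift` is our own, hypothetical name):

```python
def lift(g, k, l):
    """Return f : {0,1}^(k*l) -> R with f(x) = g(block sums of x), as in Eq. (12)."""
    def f(x):  # x is a 0/1 tuple of length k * l
        return g(tuple(sum(x[i * l:(i + 1) * l]) for i in range(k)))
    return f
```

For instance, with $g(y) = y_1 + \dots + y_k$, the lifted $f$ is simply the Hamming weight of $x$.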

**Construction of a Separation.** Let $k, \ell$ be integers. We construct $f : \{0, 1, \dots, \ell\}^k \to [0, 1]$ such that $f(0) = 0$, $f(\ell \cdot e_i) = 1$ for all $i \in [k]$ and $s(f) \le O(k^{1/\ell})$.

Define a weight function $w : \{0, 1, \dots, \ell\} \to [0, 1]$ as follows: $w(a) = k^{a/\ell}/k$ for $a \in \{1, \dots, \ell\}$ and $w(0) = 0$. Take $g : \{0, \dots, \ell\}^k \to \mathbb{R}^+$ to be the function $g(x_1, \dots, x_k) = \sum_{i=1}^k w(x_i)$ and take $f : \{0, \dots, \ell\}^k \to [0, 1]$ to be $f(x) = \min\{1, g(x)\}$. Then $f(0^k) = 0$ and $f(\ell \cdot e_i) = 1$ for all $i \in [k]$.
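
For small parameters the construction can be checked by brute force; the sketch below (our own naming) evaluates $s(f,x)$ over the whole grid and compares the maximum against the $3 \cdot k^{1/\ell}$ bound.

```python
import itertools

def make_f(k, l):
    """f(x) = min(1, sum_i w(x_i)) with w(a) = k^(a/l)/k for a >= 1, w(0) = 0."""
    def w(a):
        return 0.0 if a == 0 else k ** (a / l) / k
    def f(x):
        return min(1.0, sum(w(a) for a in x))
    return f

def lattice_sensitivity(f, k, l):
    """Max over grid points x of the sum over grid neighbors x' of |f(x') - f(x)|."""
    best = 0.0
    for x in itertools.product(range(l + 1), repeat=k):
        s = sum(abs(f(x[:i] + (x[i] + d,) + x[i + 1:]) - f(x))
                for i in range(k) for d in (-1, 1) if 0 <= x[i] + d <= l)
        best = max(best, s)
    return best
```

For example, with $k = 4$ and $\ell = 2$ the maximum lattice sensitivity stays below $3\sqrt{4} = 6$.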

**Theorem 4.6.** $s(f) \le 3 \cdot k^{1/\ell}$.

*Proof.* Let $x \in \{0, 1, \dots, \ell\}^k$ be a point in the lattice. We distinguish between two cases: $g(x) \ge 2$ and $g(x) < 2$. In the first case, every neighbor $x' \sim x$ has $g(x') \ge 1$, since the sums $\sum_i w(x_i)$ and $\sum_i w(x'_i)$ differ by at most 1. Since both $g(x)$ and $g(x')$ are at least 1, we get that $f(x) = f(x') = 1$ and the sensitivity of $f$ at $x$ is 0.

In the latter case, $g(x) < 2$, we bound the sensitivity as well. For ease of notation we extend $w$ to be defined over $\{-1, \dots, \ell+1\}$ by taking $w(\ell+1) = w(\ell)$ and $w(-1) = w(0)$. We also extend $g$ to $\{-1, 0, \dots, \ell+1\}^k \to \mathbb{R}^+$ by taking $g(x_1, \dots, x_k) = \sum_i w(x_i)$. We have

$$
\begin{align*}
s(f,x) \le s(g,x) &= \sum_{i=1}^{k} |g(x+e_i) - g(x)| + |g(x) - g(x-e_i)| \\
&= \sum_{i=1}^{k} |w(x_i+1) - w(x_i)| + |w(x_i) - w(x_i-1)| \\
&= \sum_{i=1}^{k} w(x_i+1) - w(x_i-1) && (w \text{ is monotone}) \\
&\le \sum_{i=1}^{k} w(x_i+1) && (w \text{ is non-negative}) \\
&\le \sum_{i:x_i=0} w(1) + \sum_{i:x_i>0} w(x_i) \cdot k^{1/\ell} \\
&\le k \cdot \frac{k^{1/\ell}}{k} + \sum_{i} w(x_i) \cdot k^{1/\ell} \\
&= k^{1/\ell} + g(x) \cdot k^{1/\ell} \le 3k^{1/\ell}. && \square
\end{align*}
$$

We show that Theorem 4.3 is a corollary of Theorem 4.6.

*Proof of Theorem 4.3.* Let $k = n/\ell$. Let $f : \{0, 1, \dots, \ell\}^k \to [0, 1]$ be the function in Theorem 4.6. Take $h(x_{1,1}, \dots, x_{k,\ell}) = f(\sum_{i=1}^\ell x_{1,i}, \dots, \sum_{i=1}^\ell x_{k,i})$. For $x = 0^n$, there are $k$ disjoint blocks $B_1, \dots, B_k$ of size $\ell$ each such that $h(x+B_i) = 1$. Hence, $bs_\ell(h) \ge k = n/\ell$. By Claim 4.5, the sensitivity of $h$ is at most $s(f) \cdot \ell \le 3 \cdot k^{1/\ell} \cdot \ell \le 3 \cdot n^{1/\ell} \cdot \ell$, which completes the proof. $\square$

# 5 New Separations between Decision Tree Complexity and Sensitivity

We report a new separation between the decision tree complexity and the sensitivity of Boolean functions. We construct an infinite family of Boolean functions with

$$ DT(f_n) \geq s(f_n)^{1+\log_{14}(19)} \geq s(f_n)^{2.115}. $$

Our functions are transitive functions, and are inspired by the work of Chakraborty [Cha11].

Our construction is based on finding a "gadget" Boolean function $f$, defined over a constant number of variables, such that $s^0(f) = 1$, $s^1(f) = k$ and $\text{DT}(f) = \ell$ for $\ell > k$ (recall that $s^0(f) = \max_{x:f(x)=0} s(f,x)$ and similarly $s^1(f) = \max_{x:f(x)=1} s(f,x)$). Given the gadget $f$, we construct an infinite family of functions with a super-quadratic gap between the sensitivity and the decision tree complexity using compositions (a well-used trick in query complexity separations, cf. [Tal13]).

**Lemma 5.1.** Let $f: \{0,1\}^c \to \{0,1\}$ be such that $s^0(f) = 1$, $s^1(f) = k$ and $\text{DT}(f) = \ell > k$. Then, there exists an infinite family of functions $\{g_i\}_{i \in \mathbb{N}}$ such that $s(g_i) = k^i$ and $\text{DT}(g_i) = (k\ell)^i = s(g_i)^{1+\log(\ell)/\log(k)}$.

*Proof.* Take $g = \text{OR}_k \circ f$. It is easy to verify that $s(g) = k$, and that $\text{DT}(g) = \text{DT}(\text{OR}_k) \cdot \text{DT}(f) = k\ell$ (for the latter, one can use [Tal13, Lemma 3.1]). For $i \in \mathbb{N}$, we take $g_i = g^i$. It is well-known (cf. [Tal13, Lemma 3.1]) that $s(g^i) \le s(g)^i$ and that $\text{DT}(g^i) = \text{DT}(g)^i$, which completes the proof. $\square$

## 5.1 Finding a Good Gadget

The gadget $f$ will be a minterm-cyclic function. Roughly speaking, a function $f: \{0,1\}^n \to \{0,1\}$ is minterm-cyclic if there exists a pattern $p \in \{0,1,*\}^n$ such that $f$ simply checks whether $x$ matches one of the cyclic shifts of $p$. The formal definition follows.

**Definition 5.2.** A pattern $p \in \{0, 1, *\}^n$ is a partial assignment to the variables $x_1, \dots, x_n$. We say that a point $x \in \{0, 1\}^n$ matches the pattern $p$, denoted by $p \subseteq x$, if for all $i \in [n]$ such that $p_i \in \{0, 1\}$ we have $p_i = x_i$. Given a pattern $p$, let $\text{CS}(p) = \{p^1, \dots, p^n\}$ be the set of cyclic shifts of $p$, where the $i$-th cyclic shift of $p$ is given by $p^i = (p_i, p_{i+1}, \dots, p_n, p_1, \dots, p_{i-1})$. For a pattern $p \in \{0, 1, *\}^n$ we denote by $f_p: \{0, 1\}^n \to \{0, 1\}$ the function defined by

$$f_p(x) = 1 \iff \exists p^i \in \text{CS}(p) : p^i \subseteq x$$

and call $f_p$ the minterm-cyclic function defined by $p$.

For example, the pattern $p = 0011**$ defines a function $f_p$ that checks if there is a sequence of two zeros followed by two ones in $x$, when $x$ is viewed as a cyclic string. We say that two patterns $p, q \in \{0, 1, *\}^n$ disagree on a coordinate $i$ if both $p_i$ and $q_i$ are in $\{0, 1\}$ and $p_i \neq q_i$.
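
Minterm-cyclic functions are straightforward to implement directly from Definition 5.2; a minimal sketch, with patterns given as strings over `0`, `1`, `*` (the helper name is ours):

```python
def minterm_cyclic(p):
    """Return f_p : {0,1}^n -> {0,1} for a pattern string p over '0', '1', '*'."""
    n = len(p)
    shifts = {p[i:] + p[:i] for i in range(n)}  # the set CS(p) of cyclic shifts
    def matches(q, x):
        # x matches q if x agrees with q on every fixed (non-*) coordinate
        return all(c == '*' or int(c) == b for c, b in zip(q, x))
    def f(x):  # x is a 0/1 tuple of length n
        return int(any(matches(q, x) for q in shifts))
    return f
```

With `p = "0011**"` this returns 1 exactly on inputs that, read cyclically, contain the substring 0011.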

**Claim 5.3.** Let $p \in \{0, 1, *\}^n$ be a pattern defining $f_p: \{0, 1\}^n \to \{0, 1\}$. Assume that any two different cyclic shifts of $p$ disagree on at least 3 coordinates. Then, $s^0(f_p) = 1$.

*Proof.* Let $x \in \{0,1\}^n$ with $f_p(x) = 0$ and assume by contradiction that $s(f_p,x) \ge 2$. In such a case, there are two indices $i$ and $j$ such that $f_p(x+e_i) = 1$ and $f_p(x+e_j) = 1$. Let $q$ and $q'$ be the patterns among $\text{CS}(p)$ that $x+e_i$ and $x+e_j$ match, respectively. If $q=q'$, then since both $x+e_i$ and $x+e_j$ match $q$ and they differ on coordinates $i$ and $j$, it must be the case that $q_i=q_j=*$. However, this implies that $x$ matches $q$ as well, which is a contradiction. If $q \ne q'$, then $q$ and $q'$ may disagree only on coordinates $i$ and $j$, contradicting the assumption that distinct cyclic shifts disagree on at least 3 coordinates. $\square$

The following fact is easy to verify.

**Fact 5.4.** Let $p \in \{0, 1, *\}^n$ be a pattern defining $f_p: \{0, 1\}^n \to \{0, 1\}$. Then, $s^1(f_p) \le c^1(f_p) \le |\{i \in [n] : p_i \in \{0, 1\}\}|$.

Next, we demonstrate a simple example with a better-than-quadratic separation between $\text{DT}(f)$ and $s(f)$. Take the pattern $p = *001011$. Denote by $p^1, \dots, p^7$ all the cyclic shifts of $p$, where in $p^i$ the $i$-th coordinate equals $*$. It is easy to verify that any $p^i$ and $p^j$ for $i \neq j$ disagree on at least 3 coordinates. Hence, $s^0(f_p) = 1$ and $s^1(f_p) \le 6$. We wish to show that any decision tree $T$ for $f_p$ has depth 7. Let $x_i$ be the first coordinate read by a decision tree $T$ for $f_p$. Our adversary answers 0, and continues to answer as if $x$ matches $p^i$. Suppose the decision tree makes a decision before reading the entire input. The decision tree must decide 1, since the adversary answered according to an $x$ which matches $p^i$. However, if the decision tree has not read the entire input, there is still an unread coordinate $j$, where $j \neq i$. Let $x' = x + e_j$. Then, the decision tree answers 1 on $x'$ as well. However, $x'$ does not match pattern $p^i$, as $(p^i)_j \in \{0,1\}$ and it must be the case that $x_j = (p^i)_j \neq x'_j$.

We also need to rule out that $x'$ matches some other pattern. Indeed, if $x'$ matched some other pattern $p^k$, then $p^k$ and $p^i$ would disagree on at most one coordinate, which as discussed above cannot happen.
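
Both claims about $p = *001011$ can be confirmed by brute force over all $2^7$ inputs; the sketch below (self-contained, names ours) checks $s^0(f_p) = 1$ and computes $\text{DT}(f_p)$ by the standard minimax recursion over partial assignments.

```python
from itertools import product
from functools import lru_cache

P = "*001011"
N = len(P)
SHIFTS = {P[i:] + P[:i] for i in range(N)}
# full truth table of f_p over {0,1}^7
TABLE = {x: int(any(all(c == '*' or int(c) == b for c, b in zip(q, x))
                    for q in SHIFTS))
         for x in product((0, 1), repeat=N)}

def sensitivity(x):
    return sum(TABLE[x] != TABLE[x[:i] + (1 - x[i],) + x[i + 1:]]
               for i in range(N))

s0 = max(sensitivity(x) for x in TABLE if TABLE[x] == 0)  # max sensitivity on 0-inputs

@lru_cache(maxsize=None)
def dt(fixed):
    """Optimal decision-tree depth of f_p restricted by `fixed` (entries 0/1/None)."""
    free = [i for i, v in enumerate(fixed) if v is None]
    vals = set()
    for bits in product((0, 1), repeat=len(free)):
        it = iter(bits)
        vals.add(TABLE[tuple(v if v is not None else next(it) for v in fixed)])
    if len(vals) == 1:
        return 0  # the restriction is constant
    return min(1 + max(dt(fixed[:i] + (0,) + fixed[i + 1:]),
                       dt(fixed[:i] + (1,) + fixed[i + 1:]))
               for i in free)
```

Running `dt((None,) * N)` returns 7, matching the adversary argument above.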

Using Lemma 5.1, the function $f_p$ can be turned into an infinite family of functions $g_i$ with $\text{DT}(g_i) = (6 \cdot 7)^i$ and $s(g_i) \le 6^i$. This gives a super-quadratic separation, since

$$ \text{DT}(g_i) \ge s(g_i)^{1+\log(7)/\log(6)} \ge s(g_i)^{2.086}. $$

In a similar fashion, one can show that for the pattern $p = **0*10000*101$, after reading any two input bits there exists a cyclic shift $p^i$ of the pattern from which no $\{0,1\}$ coordinate has been read yet. However, to verify that the input $x$ matches $p^i$ we must read all $\{0,1\}$ positions in $p^i$, which gives $\text{DT}(f_p) \ge 9+2 = 11$, where 9 is the number of $\{0,1\}$-coordinates in the pattern $p$.

The decision tree complexity analysis for the other patterns below is more involved. We computed it using a computer program written to calculate the decision tree complexity in this special case. In the list below, we report several patterns yielding super-quadratic separations. For each pattern $p$ we report its length $n$, the decision tree complexity of $f_p$, the maximal sensitivity of $f_p$ (which equals the number of $\{0,1\}$-coordinates in $p$), and the resulting exponent one gets by using Lemma 5.1 (i.e., $1 + \frac{\log \text{DT}(f_p)}{\log s(f_p)}$).

| pattern $p$ | $n$ | $\text{DT}$ | $s$ | exponent |
|---|---|---|---|---|
| `*001011` | 7 | 7 | 6 | 2.086 |
| `**0*10000*101` | 13 | 11 | 9 | 2.091 |
| `*****01*1*01100000` | 19 | 14 | 11 | 2.100 |
| `*****00*0*0010**1*00*011` | 25 | 17 | 13 | 2.104 |
| `*****1**0**0**1**0**0**0*0*10*1011` | 33 | 19 | 14 | 2.115 |

**Acknowledgements.** I wish to thank my PhD advisor, Ran Raz, for lots of stimulating and helpful discussions about this problem. I wish to thank Scott Aaronson for his encouragement.

## References

[ABG+14] A. Ambainis, M. Bavarian, Y. Gao, J. Mao, X. Sun, and S. Zuo. Tighter relations between sensitivity and other complexity measures. In *ICALP (1)*, pages 101-113, 2014.

[AP14] A. Ambainis and K. Prusis. A tight lower bound on certificate complexity in terms of block sensitivity and sensitivity. In *MFCS*, pages 33-44, 2014.

[APV15] A. Ambainis, K. Prusis, and J. Vihrovs. Sensitivity versus certificate complexity of Boolean functions. *CoRR*, abs/1503.07691, 2015.

[AS11] A. Ambainis and X. Sun. New separation between $s(f)$ and $bs(f)$. *Electronic Colloquium on Computational Complexity (ECCC)*, 18:116, 2011.

[AV15] A. Ambainis and J. Vihrovs. Size of sets with small sensitivity: A generalization of Simon's lemma. In *TAMC*, pages 122-133, 2015.

[BdW02] H. Buhrman and R. de Wolf. Complexity measures and decision tree complexity: a survey. *Theor. Comput. Sci.*, 288(1):21-43, 2002.

[Bop12] M. Boppana. Lattice variant of the sensitivity conjecture. *Electronic Colloquium on Computational Complexity (ECCC)*, 19:89, 2012.

[Cha11] S. Chakraborty. On the sensitivity of cyclically-invariant Boolean functions. *Discrete Mathematics & Theoretical Computer Science*, 13(4):51-60, 2011.

[GKS15] J. Gilmer, M. Koucký, and M. E. Saks. A new approach to the sensitivity conjecture. In *ITCS*, pages 247-254, 2015.

[GNS+16] P. Gopalan, N. Nisan, R. A. Servedio, K. Talwar, and A. Wigderson. Smooth Boolean functions are easy: Efficient algorithms for low-sensitivity functions. In *ITCS*, pages 59-70, 2016.

[HKP11] P. Hatami, R. Kulkarni, and D. Pankratov. Variations on the sensitivity conjecture. *Theory of Computing, Graduate Surveys*, 2:1-27, 2011.

[KK04] C. Kenyon and S. Kutin. Sensitivity, block sensitivity, and $\ell$-block sensitivity of Boolean functions. *Inf. Comput.*, 189(1):43-53, 2004.

[Nis89] N. Nisan. CREW PRAMs and decision trees. In *STOC*, pages 327-335, 1989.

[NS94] N. Nisan and M. Szegedy. On the degree of Boolean functions as real polynomials. *Computational Complexity*, 4:301-313, 1994.

[Rub95] D. Rubinstein. Sensitivity vs. block sensitivity of Boolean functions. *Combinatorica*, 15(2):297-299, 1995.

[Sim83] H. U. Simon. A tight $\Omega(\log \log n)$-bound on the time for parallel RAMs to compute nondegenerated Boolean functions. In *Foundations of Computation Theory*, pages 439-444. Springer, 1983.

[Sze15] M. Szegedy. An $O(n^{0.4732})$ upper bound on the complexity of the GKS communication game. *Electronic Colloquium on Computational Complexity (ECCC)*, 22:102, 2015.

[Tal13] A. Tal. Properties and applications of Boolean function composition. In *ITCS*, pages 441-454, 2013.

[Vir11] M. Virza. Sensitivity versus block sensitivity of Boolean functions. *Inf. Process. Lett.*, 111(9):433-435, 2011.

# A Proof of Corollary 4.2

*Proof.* Let $x \in \{0,1\}^n$ and $B_1, \dots, B_m$ be the blocks that achieve $bs(f)$. Assume without loss of generality that $B_1, \dots, B_{m'}$ are of size at most $2\ell$ and that $B_{m'+1}, \dots, B_m$ are of size larger than $2\ell$. Then, by the disjointness of $B_{m'+1}, \dots, B_m$, we have that $m-m' \le \frac{n}{2\ell}$. Thus,

$$
\begin{align*}
bs_\ell(f,x) &\geq \sum_{i=1}^{m'} |f(x)-f(x+B_i)| \\
&= \sum_{i=1}^{m} |f(x)-f(x+B_i)| - \sum_{i=m'+1}^{m} |f(x)-f(x+B_i)| \\
&\geq bs(f,x) - (m-m') \geq bs(f,x) - \frac{n}{2\ell} \geq \frac{n}{2\ell}.
\end{align*}
$$

Assume without loss of generality that $B_1, \dots, B_{m''}$ are the blocks with $|f(x) - f(x + B_i)| \ge \frac{1}{4\ell}$ and that $B_{m''+1}, \dots, B_{m'}$ are not. Then, $\sum_{i=m''+1}^{m'} |f(x) - f(x + B_i)| \le \frac{m'-m''}{4\ell} \le \frac{n}{4\ell}$. This implies that $\sum_{i=1}^{m''} |f(x) - f(x + B_i)| \ge \frac{n}{4\ell}$, and in particular that $m'' \ge \frac{n}{4\ell}$. Thus, there are $m'' \ge n/4\ell$ disjoint blocks of size at most $2\ell$ which change the value of $f$ by at least $\frac{1}{4\ell}$. Theorem 4.1 gives that $s(f) \ge \Omega((m'')^{1/2\ell}/\ell) \ge \Omega(n^{1/2\ell}/\ell)$. $\square$

samples_new/texts_merged/4808858.md

**Problem 1** In an LC circuit with $C = 4.00$ μF, the maximum potential difference across the capacitor is 1.50 V and the maximum current through the inductor is 50 mA.

(a) What is the inductance $L$?

(b) What is the frequency of oscillations?

(c) How long does it take for the charge to rise from 0 to its maximum value?
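
A possible route through parts (a)-(c), assuming lossless oscillation (so the peak capacitor energy $\frac{1}{2}CV^2$ equals the peak inductor energy $\frac{1}{2}LI^2$) and that the charge rises from 0 to its maximum in a quarter period:

```python
import math

C = 4.00e-6  # F
V = 1.50     # V, maximum capacitor voltage
I = 50e-3    # A, maximum inductor current

L = C * (V / I) ** 2                      # (a) from (1/2) C V^2 = (1/2) L I^2
f = 1 / (2 * math.pi * math.sqrt(L * C))  # (b) natural frequency of the LC circuit
t = 1 / (4 * f)                           # (c) quarter of the period

print(L, f, t)  # about 3.6e-3 H, 1.33e3 Hz, 1.9e-4 s
```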

**Problem 4** A circuit is composed of two metal rails 8 cm apart, a resistor with $R = 1 \Omega$ connecting them, and a rod at the other end which moves at a speed of 0.45 m/s. A uniform magnetic field $B = 0.1$ T points perpendicular to the plane of the circuit.

(a) Find the induced emf in the circuit.

(b) Find the current in the circuit.

(c) If the rod moved in the opposite direction, how would your answers change?
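
Parts (a) and (b) follow from the motional emf $\varepsilon = B\ell v$ and Ohm's law; for (c), reversing the rod's direction reverses the sign of the emf and hence the direction of the current, with the same magnitudes. A quick numerical check:

```python
B = 0.1   # T
l = 0.08  # m, rail separation
v = 0.45  # m/s, speed of the rod
R = 1.0   # ohm

emf = B * l * v      # (a) motional emf
current = emf / R    # (b) induced current
print(emf, current)  # about 3.6e-3 V and 3.6e-3 A
```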

**Problem 5** While upgrading the electronics in your car stereo, you calculate that you need to construct an LC circuit that oscillates at 20 Hz. If you have a 40 mH inductor, what capacitor do you need to buy from Radio Shack?
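
Solving the resonance condition $f = 1/(2\pi\sqrt{LC})$ for $C$ gives the required capacitance; a quick check:

```python
import math

f = 20.0   # Hz, target oscillation frequency
L = 40e-3  # H

C = 1 / ((2 * math.pi * f) ** 2 * L)
print(C)  # about 1.6e-3 F, i.e. roughly a 1.6 mF capacitor
```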

**Problem 6** You have an LC circuit that includes a small, unavoidable resistance from the wires. The inductor is 1.5 mH and the capacitor is 3 mF. The capacitor is initially charged to 30 μC. After 100 oscillations, the maximum charge on the capacitor is only 5 μC.

(a) What is the resistance of the circuit?

(b) How much energy has been lost?

(c) Where did this energy go?
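
One way to estimate the answers, assuming the damping is light so the charge envelope decays as $Q(t) = Q_0 e^{-Rt/2L}$ while the period stays approximately $T = 2\pi\sqrt{LC}$; for (c), the lost energy is dissipated as heat in the resistance:

```python
import math

L = 1.5e-3   # H
C = 3e-3     # F
Q0 = 30e-6   # C, initial maximum charge
Q = 5e-6     # C, maximum charge after 100 oscillations

T = 2 * math.pi * math.sqrt(L * C)  # approximate period of one oscillation
t = 100 * T                         # elapsed time
R = 2 * L * math.log(Q0 / Q) / t    # (a) from Q = Q0 * exp(-R t / (2 L))
E_lost = (Q0**2 - Q**2) / (2 * C)   # (b) drop in peak stored energy
print(R, E_lost)  # about 4.0e-3 ohm and 1.5e-7 J
```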

samples_new/texts_merged/4872902.md

# Computation of Time-Domain Frequency Stability and Jitter from PM Noise Measurements*

W. F. Walls and F. L. Walls

Femtosecond Systems Inc., 4894 Van Gordon St. Suite 301N, Wheat Ridge, CO 80033, USA

National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80303, USA

**Abstract**

This paper explores the effect of phase modulation (PM), amplitude modulation (AM), and thermal noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources.

**1. Introduction**

In this paper we review the basic definitions generally used to describe phase modulation (PM) noise, amplitude modulation (AM) noise, fractional frequency stability, timing jitter and phase jitter in precision sources. From these basic definitions we can then compute the effect of frequency multiplication or division on these measures of performance. We find that under ideal frequency multiplication or division by a factor N, the PM noise and phase jitter of a source are intrinsically changed by a factor of N². The fractional frequency stability and timing jitter are, however, unchanged as long as we can determine the average zero crossings. After a sufficiently large N, the carrier power density is less than the PM noise power. This condition is often referred to as carrier collapse. Ideal frequency translation results in the addition of the PM noise of the two sources. The effect of AM noise on the multiplied or translated signals can be increased or decreased depending on the component non-linearity. Noise added to a precision signal results in equal amounts of PM and AM noise. The upper and lower PM (or AM) sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM) originates from random or coherent processes [1].
|
| 37 |
+
|
| 38 |
+
## 2. Basic Definitions

### 2.1 Description of the Voltage Waveform

The output of a precision source can be written as

$$
V(t) = [V_o + \varepsilon(t)] \cos[2\pi \nu_o t + \phi(t)], \quad (1)
$$

* Work of the US Government, not subject to US copyright.
† Presently at Total Frequency, Boulder, CO 80303.

---PAGE_BREAK---
where $\nu_o$ is the average frequency and $V_o$ is the average amplitude. Phase/frequency variations are included in $\phi(t)$, and amplitude variations are included in $\varepsilon(t)$ [2]. The instantaneous frequency is given by

$$ \nu = \nu_o + \frac{1}{2\pi} \frac{d}{dt} \phi(t). \quad (2a) $$

The instantaneous fractional frequency deviation is given by

$$ y(t) = \frac{1}{2\pi \nu_o} \frac{d}{dt} \phi(t). \quad (2b) $$

The power spectral density (PSD) of phase fluctuations $S_\phi(f)$ is the mean squared phase fluctuation $\delta\phi(f)$ at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. This includes the contributions of both the upper and lower sidebands, which are exactly equal in amplitude and 100% correlated [1]. Thus, experimentally,

$$ S_{\phi}(f) = \frac{[\delta\phi(f)]^2}{BW} \quad \text{radians}^2/\text{Hz}, \quad (3) $$

where BW is the measurement bandwidth in Hz. Since BW is small compared to $f$, $S_\phi(f)$ appears locally white and obeys Gaussian statistics. The fractional 1-sigma confidence interval is $1 \pm 1/\sqrt{N}$ [3].
Often the PM noise is specified as the single-sideband noise $\ell(f)$, which is defined as $1/2$ of $S_\phi(f)$. The units are generally dBc/Hz, which is shorthand for dB below the carrier in a 1 Hz bandwidth:

$$ \ell(f) = 10 \log \left[ \frac{1}{2} S_{\phi}(f) \right] \quad \text{dBc/Hz}. \quad (4) $$

Frequency modulation noise is often specified as $S_y(f)$, the PSD of fractional frequency fluctuations. $S_y(f)$ is related to $S_\phi(f)$ by

$$ S_y(f) = \frac{f^2}{\nu_o^2} S_\phi(f) \quad 1/\text{Hz}. \quad (5) $$

In the laser literature one often sees the frequency noise expressed as the PSD of frequency fluctuations $S_{\delta\nu}(f)$, which is related to $S_y(f)$ by

$$ S_{\delta\nu}(f) = \nu_o^2 S_y(f) = f^2 S_\phi(f) \quad \text{Hz}^2/\text{Hz}. \quad (6) $$

The amplitude modulation (AM) noise $S_a(f)$ is the mean squared fractional amplitude fluctuation at Fourier frequency $f$ from the carrier in a measurement bandwidth of 1 Hz. Thus, experimentally,

$$ S_a(f) = \left( \frac{\delta\varepsilon(f)}{V_o} \right)^2 \frac{1}{BW} \quad 1/\text{Hz}, \quad (7) $$

where BW is the measurement bandwidth in Hz.
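As a quick numerical illustration of Eqs. (3)-(5), the sketch below converts an assumed measured rms phase fluctuation into $S_\phi(f)$, $\ell(f)$, and $S_y(f)$. All numerical values are illustrative assumptions, not data from this paper.

```python
import math

# Illustrative (assumed) measurement values:
delta_phi = 1.0e-6   # rms phase fluctuation at offset f, radians
BW = 1.0             # measurement bandwidth, Hz
f = 1.0e3            # Fourier (offset) frequency, Hz
nu_o = 10.0e6        # carrier frequency, Hz

# Eq. (3): PSD of phase fluctuations, radians^2/Hz
S_phi = delta_phi**2 / BW

# Eq. (4): single-sideband PM noise, dBc/Hz
ell = 10.0 * math.log10(0.5 * S_phi)

# Eq. (5): PSD of fractional frequency fluctuations, 1/Hz
S_y = (f**2 / nu_o**2) * S_phi
```

With these numbers, $S_\phi = 10^{-12}\ \text{rad}^2/\text{Hz}$, i.e. about $-123$ dBc/Hz.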
---PAGE_BREAK---

The rf power spectrum for small PM and AM noise is approximately given by

$$V^2(f) \cong V_o^2 [e^{-\phi_c^2} + S_\phi(f) + S_a(f)], \quad (8)$$

where $e^{-\phi_c^2}$ is the approximate power in the carrier at Fourier frequencies from 0 to $f_c$, and $\phi_c^2$ is the mean squared phase fluctuation due to the PM noise at frequencies larger than $f_c$ [4]. $\phi_c^2$ is calculated from

$$\phi_c^2 = \int_{f_c}^{\infty} S_{\phi}(f)\, df. \quad (9)$$

The half-power bandwidth of the signal, $2f_c$, can be found by setting $\phi_c^2 = 0.7$. The difference between the half-power and the 3 dB bandwidth depends on the shape of $S_\phi(f)$ [4].
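The half-power condition above can be exercised numerically. The sketch below assumes a flat (white) PM spectrum of level `S0` extending to `f_max` (both values purely illustrative) and solves $\phi_c^2 = 0.7$ for $f_c$ by bisection.

```python
# Sketch of Eqs. (8)-(9) for an assumed flat (white) PM spectrum,
# S_phi(f) = S0 for f up to f_max; both values are illustrative.
S0 = 1.0e-4    # rad^2/Hz
f_max = 1.0e4  # Hz

def phi_c_squared(f_c):
    # Eq. (9) in closed form for the flat spectrum: integrate S0 from f_c up
    return S0 * max(f_max - f_c, 0.0)

# Find the carrier half-power point by solving phi_c^2(f_c) = 0.7 (bisection)
lo, hi = 0.0, f_max
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi_c_squared(mid) > 0.7:
        lo = mid
    else:
        hi = mid
f_c = 0.5 * (lo + hi)
half_power_bandwidth = 2.0 * f_c
```

For a flat spectrum the closed form is simply $f_c = f_{max} - 0.7/S_0$; the bisection is shown because for a general $S_\phi(f)$ Eq. (9) has no closed form.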
### 2.2 Frequency Stability in the Time Domain

The frequency of even a precision source is often not stationary in time, so traditional statistical measures of it diverge with increasing number of samples [2]. Special statistics have been developed to handle this problem. The most common is the two-sample or Allan variance (AVAR), which is based on analyzing the fluctuations of adjacent samples of fractional frequency averaged over a period $\tau$. The square root of the Allan variance, $\sigma_y(\tau)$, often called ADEV, is defined as

$$\sigma_y(\tau) = \left\langle \frac{1}{2} \left[ \bar{y}(t+\tau) - \bar{y}(t) \right]^2 \right\rangle^{1/2}. \quad (10)$$

$\sigma_y(\tau)$ can be estimated from a finite set of $M$ frequency averages $\bar{y}_i$, each of length $\tau$, from

$$\sigma_y(\tau) = \left[ \frac{1}{2(M-1)} \sum_{i=1}^{M-1} (\bar{y}_{i+1} - \bar{y}_i)^2 \right]^{1/2}. \quad (11)$$

This assumes that there is no dead time between samples [2]. If there is dead time, the results are biased, depending on the amount of dead time and the type of PM noise; see [2] for details.
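The estimator of Eq. (11) is straightforward to implement. The sketch below is a minimal version for dead-time-free data; the input values are toy numbers, not measurements.

```python
import math

def adev(y_bars):
    # Eq. (11): ADEV from M adjacent, dead-time-free fractional-frequency
    # averages (a minimal sketch, not a production implementation)
    M = len(y_bars)
    s = sum((y_bars[i + 1] - y_bars[i]) ** 2 for i in range(M - 1))
    return math.sqrt(s / (2.0 * (M - 1)))

# Toy data: alternating fractional-frequency offsets of +/- 1e-12
y = [1e-12 if i % 2 == 0 else -1e-12 for i in range(8)]
sigma_y = adev(y)
```

For this alternating sequence every adjacent difference has magnitude $2\times10^{-12}$, so the estimator returns $2\times10^{-12}/\sqrt{2}$.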
$\sigma_y(\tau)$ can also be calculated from $S_\phi(f)$ using

$$\sigma_y(\tau) = \frac{\sqrt{2}}{\pi \nu_o \tau} \left[ \int_0^\infty H_o(f)\, S_\phi(f) \sin^4(\pi f \tau)\, df \right]^{1/2}, \quad (12)$$

where $H_o(f)$ is the transfer function of the system used for measuring $\sigma_y(\tau)$ or $\delta t$ below [2]. $H_o(f)$ must
---PAGE_BREAK---

have a low-pass characteristic for $\sigma_y(\tau)$ to converge in the presence of white PM or flicker PM noise. In practice the measurement system always has a finite bandwidth, but if this bandwidth is not controlled or known, the results for $\sigma_y(\tau)$ will have little meaning [2]; see Table 1. If $H_o(f)$ has a low-pass characteristic with a very sharp roll-off at a maximum frequency $f_h$, it can be replaced by 1 and the integration terminated at $f_h$. Practical examples usually require the exact shape of $H_o(f)$. Programs exist that numerically compute $\sigma_y(\tau)$ for an arbitrary combination of the five common noise types [5]. Most sources contain at least three of them, plus long-term drift or aging.

Figure 1. Placement of the $\bar{y}_i$ used in the computation of $\sigma_y(\tau)$ and $\delta t = \tau \sigma_y(\tau)$.
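Eq. (12) can be checked against the closed-form white-FM entry of Table 1, $\sigma_y(\tau) = [h_0/(2\tau)]^{1/2}$. The sketch below integrates Eq. (12) numerically, idealizing $H_o(f)$ as 1 below $f_h$ and 0 above; all coefficient values are illustrative assumptions.

```python
import math

# Numerical check of Eq. (12) against the white-FM row of Table 1.
nu_o = 1.0e6   # carrier frequency, Hz (assumed)
h0 = 1.0e-20   # white-FM coefficient: S_phi(f) = nu_o**2 * h0 / f**2
tau = 1.0      # averaging time, s
f_h = 1.0e3    # measurement-system bandwidth, Hz

# Midpoint-rule evaluation of the integral in Eq. (12)
n = 200_000
df = f_h / n
integral = 0.0
for i in range(n):
    f = (i + 0.5) * df
    S_phi = nu_o**2 * h0 / f**2
    integral += S_phi * math.sin(math.pi * f * tau) ** 4 * df

sigma_numeric = (math.sqrt(2.0) / (math.pi * nu_o * tau)) * math.sqrt(integral)
sigma_analytic = math.sqrt(h0 / (2.0 * tau))
```

The two values agree to a fraction of a percent; the small residual comes from truncating the integral at $f_h$.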
### 2.3 Effects of Frequency Multiplication, Division, and Translation

Frequency multiplication by a factor N is the same as phase amplification by a factor N; for example, 2π radians is amplified to 2πN radians. Since PM noise is the mean squared phase fluctuation, the PM noise must increase by N². Thus

$$S_{\phi}(N\nu_o, f) = N^2 S_{\phi}(\nu_o, f) + \text{Multiplication PM}, \quad (13)$$

where Multiplication PM is the noise added by the multiplication process.

We see from Eqs. (8), (9), and (13) that the power in the carrier decreases exponentially as $e^{-N^2 \phi_c^2}$. After a sufficiently large multiplication factor N, the carrier power density is less than the PM noise power. This is often referred to as carrier collapse [4]. Ideal frequency translation results in the addition of the PM noise of the two sources [2]. The half-power bandwidth of the signal also changes with frequency multiplication.
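The carrier-collapse scaling can be illustrated directly: by Eq. (13) the integrated phase noise grows to $N^2\phi_c^2$, so the carrier term of Eq. (8) falls as $e^{-N^2\phi_c^2}$. The value of `phi_c2` below is an illustrative assumption.

```python
import math

# Carrier power fraction exp(-N^2 * phi_c^2) after ideal multiplication by N,
# following Eqs. (8), (9), and (13); phi_c2 is an assumed illustrative value.
phi_c2 = 1.0e-4  # integrated PM noise at nu_o, rad^2

def carrier_power_fraction(N):
    return math.exp(-(N ** 2) * phi_c2)

fractions = {N: carrier_power_fraction(N) for N in (1, 10, 100, 300)}
```

With this assumed $\phi_c^2$, the carrier is essentially intact at N = 10 but has fallen to $e^{-1}$ at N = 100 and to about $10^{-4}$ of its power at N = 300.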
Frequency division can be considered as frequency multiplication by a factor 1/N. The effect is to reduce the PM noise by a factor 1/N². The only difference is that there can be aliasing of the broadband PM noise at the input, which can significantly increase the output PM noise above that calculated for a perfect divider [6]. This effect can be avoided by using a narrow-band filter at the input or intermediate stages. Ideal frequency multiplication or division does not change $\sigma_y(\tau)$.

---PAGE_BREAK---
Frequency translation has the effect of adding the PM noise of the input signal $\nu_1$ and the reference signal $\nu_o$ to the PM noise of the nonlinear device providing the translation:

$$S_{\phi}(\nu_2, f) = S_{\phi}(\nu_o, f) + S_{\phi}(\nu_1, f) + \text{Translation PM}. \quad (14)$$

Thus dividing a high-frequency signal, rather than mixing two high-frequency signals, generally produces a low-frequency reference signal with less residual noise.
## 3. Effect of Multiplicative Noise

Multiplicative noise is noise modulation power that remains proportional to the signal level. For example, consider the case where the gain is modulated by some process with an index $\beta$ as

$$\text{Gain} = G_o(1 + \beta \cos\Omega t). \quad (15)$$

If we assume an input signal given by

$$V_{in} = V_o \cos[2\pi \nu_o t + \phi(t)], \quad (16)$$

then the output voltage will have the form

$$V_{out} = V_o G_o (1 + \beta \cos\Omega t) \cos[2\pi \nu_o t + \phi(t)]. \quad (17)$$

The amplitude fluctuation is seen to be proportional to the input signal. Using Eqs. (1) and (7) we can compute the AM noise to be

$$\frac{1}{2} S_a(f) = \frac{\beta^2}{4}. \quad (18)$$

Similarly, if the phase is modulated as

$$\phi(t) = \beta \cos(\Omega t), \quad (19)$$

the output voltage will be of the form

$$V_{out} = V_o \cos[2\pi \nu_o t + \beta \cos(\Omega t)]. \quad (20)$$

The phase fluctuation is proportional to the input signal, and the PM noise is calculated using Eqs. (1) and (3) to be

$$\frac{1}{2} S_{\phi}(f) = \frac{\beta^2}{4}. \quad (21)$$
---PAGE_BREAK---

## 4. Effect of Additive Noise

The addition of a noise signal $V_n(t)$ to the signal $V_o(t)$ yields a total signal

$$V(t) = V_o(t) + V_n(t). \quad (22)$$

Since the noise term $V_n(t)$ is uncorrelated with $V_o(t)$, half the noise power contributes to AM noise and half contributes to PM noise:

$$V_{AM}(t) = \frac{V_n(t)}{\sqrt{2}}, \qquad V_{PM}(t) = \frac{V_n(t)}{\sqrt{2}}, \quad (23)$$

$$L(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW}, \quad (24)$$

where BW is the bandwidth in Hz. We see that the AM and PM noise are inversely proportional to the carrier power. These results can be applied to amplifier and detection circuits as follows. The input noise power to the amplifier is kTBW. The gain of the amplifier from a matched source into a matched load is $G_o$, so the noise power delivered to the load is $kT\,BW\,G_o F$, where $F$ is the noise figure. The output power to the load is $P_o$. Using Eq. (24) we obtain

$$L(f) = \frac{S_{\phi}(f)}{2} = \frac{S_{a}(f)}{2} = \frac{V_{n}^{2}(f)}{4V_{o}^{2}} \frac{1}{BW} = \frac{2kT\,BW\,F G_{o}}{4P_{o}\,BW} = \frac{kTFG_{o}}{2P_{o}} = -177\ \text{dBc/Hz} \quad (25)$$

for $T = 300\,\text{K}$, $F = 1$, and $P_o/G_o = P_{in} = 0$ dBm.
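The final number in Eq. (25) follows from the physical constants alone; the short check below evaluates $kTF/(2P_{in})$ in dBc/Hz for the stated conditions.

```python
import math

# Eq. (25): thermal noise floor of an amplifier, referred to the input.
k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # temperature, K
F = 1.0           # noise figure (linear)
P_in = 1.0e-3     # input carrier power, W (0 dBm), so P_o / G_o = P_in

L_f = 10.0 * math.log10(k * T * F / (2.0 * P_in))  # dBc/Hz
```

The result is about -176.8 dBc/Hz, which rounds to the -177 dBc/Hz quoted in Eq. (25).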
## 5. Phase Jitter

The phase jitter $\delta\phi$ is computed from the PM noise spectrum using

$$\delta\phi = \left[ \int_{0}^{\infty} S_{\phi}(f)\, H(f)\, df \right]^{1/2}. \quad (26)$$

Generally $H(f)$ must have the shape of a high-pass filter, or a minimum cutoff frequency $f_{min}$, to exclude low-frequency contributions from the integration; otherwise $\delta\phi$ diverges due to random walk FM, flicker FM, or white FM noise processes. Usually $H(f)$ also has a low-pass characteristic at high frequencies to limit the effects of flicker PM and white PM [2]. See Table 1.
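Eq. (26) can be checked against the flicker-PM entry of Table 1, $\delta\phi = \nu[h_1 \ln(f_h/f_{min})]^{1/2}$. The sketch below integrates an assumed flicker-PM spectrum $S_\phi(f) = \nu_o^2 h_1/f$ with an ideal band-pass $H(f) = 1$ on $[f_{min}, f_h]$; all coefficients are illustrative.

```python
import math

# Eq. (26) for an assumed flicker-PM spectrum, compared with the
# closed-form flicker-PM row of Table 1. Values are illustrative.
nu_o = 1.0e7
h1 = 1.0e-26
f_min, f_h = 10.0, 1.0e5

n = 100_000
df = (f_h - f_min) / n
integral = 0.0
for i in range(n):
    f = f_min + (i + 0.5) * df
    integral += (nu_o**2 * h1 / f) * df

delta_phi = math.sqrt(integral)  # radians
closed_form = nu_o * math.sqrt(h1 * math.log(f_h / f_min))
```

The midpoint-rule result matches the closed form to well under 0.1%.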
## 6. Timing Jitter

Recall that $\sigma_y(\tau)$ is the fractional frequency stability of adjacent samples, each of length $\tau$; see Fig. 1. The timing jitter $\delta t$ is the timing error that accumulates after a period $\tau$. $\delta t$ is related to $\sigma_y(\tau)$ by

$$\frac{\delta t}{\tau} = \frac{\delta \nu}{\nu} = \sigma_y(\tau), \qquad \delta t = \tau\, \sigma_y(\tau). \quad (27)$$
---PAGE_BREAK---

Table 1 shows the asymptotic forms of $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ for the 5 common noise types at frequency $\nu_o$ and $N\nu_o$, under the assumption that $2\pi f_h \tau > 1$. It is interesting to note that for white phase noise, all three measures are dominated by $f_h$ [5]. For random walk frequency modulation (FM) and flicker FM, $\sigma_y(\tau)$ is independent of $f_h$ and instead is dominated by $S_\phi(1/\tau)$ or $S_\phi(f_{\text{min}})$. Also, the timing jitter is independent of $N$ as long as we can still identify zero crossings, while the phase jitter, which is proportional to frequency, is multiplied by a factor $N$. Typical sources usually contain at least 3 of these noise types.

Table 1. $\sigma_y(\tau)$, $\delta t$, and $\delta\phi$ as a function of $\tau$, $f_{\text{min}}$, and $f_h$ at carrier frequency $\nu_o$ and $N\nu_o$

| Noise type | $S_\phi(f)$ | $\sigma_y(\tau)$ | $\delta t$ at $\nu_o$ or $N\nu_o$ | $\delta\phi$ at $\nu_o$ | $\delta\phi$ at $N\nu_o$ |
|---|---|---|---|---|---|
| Random walk FM | $[\nu^2/f^4]\,h_{-2}$ | $\pi[(2/3)h_{-2}\tau]^{1/2}$ | $\tau\pi[(2/3)h_{-2}\tau]^{1/2}$ | $\nu[h_{-2}/(3f_{\text{min}}^3)]^{1/2}$ | $N\nu[h_{-2}/(3f_{\text{min}}^3)]^{1/2}$ |
| Flicker FM | $[\nu^2/f^3]\,h_{-1}$ | $[2\ln(2)\,h_{-1}]^{1/2}$ | $\tau[2\ln(2)\,h_{-1}]^{1/2}$ | $\nu[h_{-1}/(2f_{\text{min}}^2)]^{1/2}$ | $N\nu[h_{-1}/(2f_{\text{min}}^2)]^{1/2}$ |
| White FM | $[\nu^2/f^2]\,h_0$ | $[h_0/(2\tau)]^{1/2}$ | $[(\tau/2)h_0]^{1/2}$ | $\nu[(1/f_{\text{min}} - 1/f_h)h_0]^{1/2}$ | $N\nu[(1/f_{\text{min}} - 1/f_h)h_0]^{1/2}$ |
| Flicker PM | $[\nu^2/f]\,h_1$ | $\frac{1}{2\pi\tau}[(1.038 + 3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $\frac{1}{2\pi}[(1.038 + 3\ln(2\pi f_h\tau))h_1]^{1/2}$ | $\nu[\ln(f_h/f_{\text{min}})h_1]^{1/2}$ | $N\nu[\ln(f_h/f_{\text{min}})h_1]^{1/2}$ |
| White PM | $[\nu^2]\,h_2$ | $\frac{1}{2\pi\tau}[3f_h h_2]^{1/2}$ | $\frac{1}{2\pi}[3f_h h_2]^{1/2}$ | $\nu[f_h h_2]^{1/2}$ | $N\nu[f_h h_2]^{1/2}$ |
## 7. Discussion

We have explored the effects of phase modulation (PM), amplitude modulation (AM), and additive noise on the rf spectrum, phase jitter, timing jitter, and frequency stability of precision sources. Under ideal frequency multiplication or division by a factor $N$, the PM noise and phase jitter of a source are changed by a factor of $N^2$. After a sufficiently large $N$, the carrier power density is less than the PM noise power. This condition is often referred to as carrier collapse. Noise added to a precision signal results in equal amounts of PM and AM noise. The upper and lower PM (or AM) sidebands are exactly equal and 100% correlated, independent of whether the PM (or AM) originates from random or coherent processes.

## 8. Acknowledgements

We gratefully acknowledge helpful discussions with David A. Howe, A. Sen Gupta, and Jeff Vollin.

## References

[1] F.L. Walls, "Correlation Between Upper and Lower Sidebands," IEEE Trans. Ultrason., Ferroelect., and Freq. Cont., 47, 407-410, 2000.

[2] D.B. Sullivan, D.W. Allan, D.A. Howe, and F.L. Walls, "Characterization of Clocks and Oscillators," NIST Tech. Note 1337, 1-342, 1990.

[3] F.L. Walls, D.B. Percival, and W.R. Irelan, "Biases and Variances of Several FFT Spectral Estimators as a Function of Noise Type and Number of Samples," Proc. 43rd Ann. Symp. on Freq. Control, Denver, CO, May 31-June 2, 336-341, 1989. Also found in [2].

---PAGE_BREAK---

[4] F.L. Walls and A. DeMarchi, "RF Spectrum of a Signal After Frequency Multiplication: Measurement and Comparison with a Simple Calculation," IEEE Trans. Instrum. Meas., 24, 210-217, 1975.

[5] F.L. Walls, J. Gary, A. O'Gallagher, R. Sweet, and L. Sweet, "Time Domain Frequency Stability Calculated from the Frequency Domain Description: Use of the SIGINT Software Package to Calculate Time Domain Frequency Stability from the Frequency Domain," NISTIR 89-3916 (revised), 1-31, 1991.

[6] A. Sen Gupta and F.L. Walls, "Effect of Aliasing on Spurs and PM Noise in Frequency Dividers," Proc. Intl. IEEE Freq. Cont. Symp., Kansas City, MO, June 6-9, 2000.
samples_new/texts_merged/4994833.md ADDED

---PAGE_BREAK---
Sampling variance update method in Monte Carlo Model Predictive Control*

Shintaro Nakatani* Hisashi Date**

* Graduate School of Systems and Information Engineering, University of Tsukuba, Ibaraki, Japan (e-mail: nakatani-s@roboken.iit.tsukuba.ac.jp).

** Faculty of Engineering, Information and Systems, University of Tsukuba, Ibaraki, Japan (e-mail: hdate@iit.tsukuba.ac.jp)

**Abstract:** This study describes the influence of user parameters on control performance in Monte Carlo model predictive control (MCMPC). MCMPC, being based on Monte Carlo sampling, depends significantly on the characteristics of the sampling distribution. We quantify the effect of user-determined parameters on control performance using the relationship between the MCMPC algorithm and convergence to the optimal solution. In particular, we investigate the limitation associated with the variance of the sampling distribution, which causes a trade-off between convergence speed and estimation accuracy. To overcome this limitation, we propose two variance update methods and a new MCMPC algorithm, and verify their effectiveness in numerical simulation.

**Keywords:** Optimal control theory, Monte-Carlo methods, Randomized methods, Model predictive and optimization-based control
# 1. INTRODUCTION

In recent years, model predictive control (MPC) has attracted considerable attention in various fields owing to its ability to explicitly handle the required constraints (Carlos E. Garcia and Morari (1989); Ohtsuka (2004)). In MPC, the optimal control inputs are determined by repeatedly solving a constrained optimization problem over a finite future horizon. From the viewpoint of implementation, MPC can be separated into two categories, i.e., gradient-based and sample-based MPC.

The former is currently being studied for application to various real-world systems. C/GMRES, proposed by Ohtsuka (2004), is a very efficient gradient-based MPC method. It is known to be an efficient algorithm for nonlinear systems (Cairano and Kolmanovsky (2019)) and has been considered for applications such as smart grid systems (Toru (2012)) and vehicle collision avoidance control (Masashi Nanno (2010)).

In gradient-based MPC, the optimal input is determined by solving the optimal control problem using the gradient information of the cost function. Therefore, if the optimal control problem is simple, the optimal solution can be derived quickly and accurately. On the other hand, the target system is limited to systems with a differentiable cost function.

In the other category, sample-based MPC, the optimal input is determined using a Monte Carlo approximation. In general, the Monte Carlo method requires significant computational resources; therefore, real-time implementation of sample-based MPC is difficult. However, it has been reported (Williams et al. (2016); Ohyama and Date (2017)) that an efficient approach is to take advantage of the parallel nature of sampling and implement it in real time on a graphics processing unit. In addition, as sample-based MPC does not require gradient information of the cost function, it has many significant advantages. Nakatani and Date (2019) describe the features of Monte Carlo model predictive control (MCMPC), a type of sample-based MPC, and explain its capability of handling discontinuous events based on collision experiments with a pendulum on a cart.

From a theoretical point of view, the most successful method is the path integral optimal control framework (Kappen (2007); Satoh et al. (2017)). The key idea in this framework is that the solution of the optimal control problem is transformed into an expectation over all possible trajectories and their corresponding trajectory costs. This transformation allows stochastic optimal control problems to be solved by using a Monte Carlo approximation with guaranteed convergence. However, these studies did not consider the effect of the variance of the sampling distribution on convergence. Williams et al. (2015) mention this problem and propose a framework that allows users to freely determine the variance of the sampling distribution. These previous studies have in common that the theory of path integrals is applied to stochastic optimal control problems.

Alternatively, the MCMPC investigated herein addresses the optimal control problem for deterministic systems. Therefore, we discuss the convergence of MCMPC by considering the optimal control problem for discrete-time linear systems, wherein the unique optimal solution can be derived analytically.

* This work was not supported by any organization

---PAGE_BREAK---
This study mainly describes the trade-off between the variance of the sampling distribution and convergence: if we choose a large sampling variance, convergence can be accelerated, but a large noise remains on the solution. This means the variance must be properly controlled for the sub-optimal input to match the optimal solution, i.e., the sampling variance must be adjusted to achieve fast convergence and high precision at the same time. Two types of variance update methods are proposed: one is inspired by the cooling principle of the simulated annealing method, and the other is based on the most recent sample variance. These methods are compared in simulation of a linear system. Besides the variance update methods, we also introduce two types of optimization among the Monte Carlo samples: Top-1 sample and weighted mean. Taking the best sample among all samples tends to achieve fast convergence but suffers from large estimation noise compared with the weighted mean. These are also compared in simulation.

Based on these results, we show that the newly proposed method is an effective approach to the problem discussed in this paper.

## 2. FINITE-TIME OPTIMAL CONTROL PROBLEM FOR DISCRETE-TIME LINEAR SYSTEMS
We consider a finite-time optimal control problem for discrete-time linear systems on the $k$-th control cycle with an $N$-step prediction horizon, the steps being indexed $\{k|0\}, \dots, \{k|i\}, \dots, \{k|N\}$. Consider a class of linear discrete-time systems described by the following equation:

$$x_{\{k|i+1\}} = Ax_{\{k|i\}} + Bu_{\{k|i\}}, \quad (1)$$

where the state is denoted by $x_{\{k|i\}} \in \mathbb{R}^n$, the control input by $u_{\{k|i\}} \in \mathbb{R}^1$, and the system matrices by $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times 1}$. In addition, it is assumed that the initial state $x_{\{k|0\}}$ at each control cycle $k$ is known and that, for simplicity, there are no constraints on the input or state. For the system (1), the cost function of the finite-time optimal control problem from the current control cycle to $N$ steps in the future is

$$J(x_k, u_k, k) = \frac{1}{2} \sum_{i=0}^{N-1} \left( x_{\{k|i+1\}}^T Q x_{\{k|i+1\}} + u_{\{k|i\}}^T R u_{\{k|i\}} \right), \quad (2)$$

where $Q \in \mathbb{R}^{n \times n}$ is a positive definite weight on the state and $R \in \mathbb{R}^1$ is a positive definite weight on the input. In the rest of this study, we use $J$ as the cost value unless otherwise noted. The solution of this optimal control problem is defined as

$$u_{\{k|i\}}^* = \arg \min_{u_{\{k|i\}}} J(x_k, u_k, k). \quad (3)$$
Using the fact that the time evolution of the system (1) can be expressed using only the initial state $x_{\{k|0\}}$ and the input sequence $u_{\{k|0\}}, \dots, u_{\{k|N-1\}}$, we can rewrite equation (2) as

$$J(x_k, u_k, k) = \frac{1}{2} \hat{\mathbf{u}}^T \hat{Q} \hat{\mathbf{u}} + x_{\{k|0\}}^T \hat{B} \hat{\mathbf{u}} + \frac{1}{2} x_{\{k|0\}}^T \hat{A} x_{\{k|0\}}, \quad (4)$$

where the matrices $\hat{A} \in \mathbb{R}^{n \times n}$, $\hat{B} \in \mathbb{R}^{n \times N}$, and $\hat{Q} \in \mathbb{R}^{N \times N}$ and the vector $\hat{\mathbf{u}} \in \mathbb{R}^N$ are given in (5) to (8):

$$\hat{A} = A^T QA + (A^2)^T QA^2 + \cdots + (A^N)^T QA^N, \quad (5)$$

$$\hat{B} = \left[ \sum_{k=1}^{N} (A^k)^T QA^{k-1} B, \dots, \sum_{k=j}^{N} (A^k)^T QA^{k-j} B, \dots, (A^N)^T QB \right], \quad (6)$$

$$\hat{Q} = \begin{bmatrix} \hat{q}_{11} & \cdots & \hat{q}_{1j} & \cdots & \hat{q}_{1N} \\ \vdots & \ddots & & & \vdots \\ \hat{q}_{i1} & & \hat{q}_{ij} & & \hat{q}_{iN} \\ \vdots & & & \ddots & \vdots \\ \hat{q}_{N1} & \cdots & \hat{q}_{Nj} & \cdots & \hat{q}_{NN} \end{bmatrix}, \quad (7)$$

$$\hat{\mathbf{u}} = [u_{\{k|0\}}, \dots, u_{\{k|N-1\}}]^T. \quad (8)$$

The matrix $\hat{Q}$ is symmetric, and its element in the $i$-th row and $j$-th column of the upper triangle is given by

$$\hat{q}_{ij} =
\begin{cases}
\displaystyle \sum_{k=0}^{N-i} B^T (A^k)^T Q A^k B + R, & (i=j) \\
\displaystyle \sum_{k=j-i}^{N-i} B^T (A^k)^T Q A^{k+i-j} B. & (i<j)
\end{cases}
\quad (9)$$

If the matrix $\hat{Q}$ is a positive definite symmetric matrix, the unique solution $\mathbf{u}^*$ can be obtained as

$$\mathbf{u}^* = -\hat{Q}^{-1}\hat{B}^T x_{\{k|0\}}. \quad (10)$$

The discussion so far is the general theory of the finite-time optimal control problem with cost function (2) for the discrete-time linear system (1). In the next section, we discuss the relationship between convergence and the MCMPC algorithm, which takes the expectation over all possible trajectories as the sub-optimal input. We also propose an alternative method: the Top-1 sample algorithm for MCMPC.
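The closed-form solution (10) can be sanity-checked numerically. The sketch below builds $\hat{Q}$ and $\hat{B}$ from stacked prediction matrices (algebraically equivalent to the element formulas (5)-(9)) for a small assumed system, then confirms that $\mathbf{u}^*$ beats perturbed input sequences. The system matrices are illustrative assumptions, not from the paper.

```python
import numpy as np

# Assumed small system and weights for illustration
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
N = 5
x0 = np.array([[1.0], [0.0]])
n = A.shape[0]

# Stack predictions: X = F x0 + G u, with X = [x_{k|1}; ...; x_{k|N}]
F = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
G = np.zeros((n * N, N))
for i in range(1, N + 1):
    for j in range(i):
        G[(i - 1) * n:i * n, j:j + 1] = np.linalg.matrix_power(A, i - 1 - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
Q_hat = G.T @ Qbar @ G + Rbar   # plays the role of \hat{Q} in Eq. (4)
B_hat = F.T @ Qbar @ G          # plays the role of \hat{B} in Eq. (4)

u_star = -np.linalg.solve(Q_hat, B_hat.T @ x0)  # Eq. (10)

def cost(u):
    # Eq. (2), evaluated by forward simulation of Eq. (1)
    J, x = 0.0, x0
    for i in range(N):
        x = A @ x + B * u[i]
        J += 0.5 * ((x.T @ Q @ x).item() + R[0, 0] * u[i] ** 2)
    return J

J_star = cost(u_star.flatten())
```

Since $\hat{Q}$ is positive definite here, `u_star` is the unique minimizer of (2), and any perturbation of it strictly increases the simulated cost.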
## 3. ALGORITHMS OF TWO TYPES OF MCMPC

In this section, we describe two different MCMPC algorithms. First, we describe the relationship between convergence and the normal-type MCMPC algorithm, which uses the expectation over all possible trajectories as the sub-optimal input. Next, we describe the Top-1 sample MCMPC algorithm, which uses the best trajectory among all sample trajectories as the sub-optimal input.
|
| 130 |
+
|
| 131 |
+
### 3.1 Relation between algorithm of normal type MCMPC and convergence
|
| 132 |
+
|
| 133 |
+
Normal type MCMPC consists of three main phases.
|
| 134 |
+
|
| 135 |
+
**Phase 1**
|
| 136 |
+
Generating input sequenses
|
| 137 |
+
|
| 138 |
+
**Phase 2**
|
| 139 |
+
Running forward simulation in parallel
|
| 140 |
+
---PAGE_BREAK---
|
| 141 |
+
|
| 142 |
+
**Phase 3**
Estimating the sub-optimal input sequences $\tilde{\mathbf{u}}$
In Phase 1, input sequences are generated by random sampling from a normal distribution:

$$
\hat{\mathbf{u}} \sim \mathcal{N}(\bar{\mathbf{u}}, \Sigma), \quad (11)
$$
where the mean value $\bar{\mathbf{u}}$ is initialized and updated by the following equation:

$$
\bar{\mathbf{u}} = \begin{cases} \mathbf{0}, & (k=0) \\ [\tilde{u}_{\{k|0\}}, \dots, \tilde{u}_{\{k|I-1\}}]^T, & (k \neq 0) \end{cases} \tag{12}
$$
where $\tilde{u}$ denotes the sub-optimal input estimated in the previous estimation. $\Sigma \in \mathbb{R}^{I \times I}$ is the variance-covariance matrix and satisfies the following two assumptions.

Assumption 1. The standard deviation $\sigma$ used in all prediction steps is constant.
Assumption 2. For all $u_{\{k|i\}} \in \mathbb{R}^1$, the elements are mutually independent:

$$
E(u_{\{k|i\}} u_{\{k|j\}}) = 0, \quad (i \neq j) \quad (13)
$$

where $E(\cdot)$ denotes the expected value.
Then, using these two assumptions, $\Sigma$ can be described as the following equation (14):

$$
\Sigma = \begin{bmatrix} \sigma^2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma^2 \end{bmatrix} \tag{14}
$$
Therefore, $\hat{\mathbf{u}}$ can be regarded as a random variable with the probability density function (PDF) shown in the following equation:

$$
f(\hat{\mathbf{u}}) = \frac{1}{\sqrt{2\pi}\sigma^2} \exp \left( -\frac{1}{2}(\hat{\mathbf{u}} - \bar{\mathbf{u}})^T \Sigma^{-1} (\hat{\mathbf{u}} - \bar{\mathbf{u}}) \right) \\
= \frac{1}{\sqrt{2\pi}\sigma^2} \exp \left( -\frac{1}{2\sigma^2} (\hat{\mathbf{u}} - \bar{\mathbf{u}})^T (\hat{\mathbf{u}} - \bar{\mathbf{u}}) \right). \tag{15}
$$
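Under Assumptions 1 and 2, drawing from (11) reduces to adding i.i.d. Gaussian noise to the mean sequence, since $\Sigma = \sigma^2 I$ by (14). A minimal numpy sketch of this sampling step (the function name is ours):

```python
import numpy as np

def sample_inputs(u_bar, sigma, M, rng):
    """Draw M candidate input sequences u_hat ~ N(u_bar, sigma^2 I), eq. (11)."""
    return u_bar + sigma * rng.standard_normal((M, u_bar.shape[0]))

rng = np.random.default_rng(0)
u_bar = np.zeros(15)  # first control cycle: u_bar = 0 by (12)
candidates = sample_inputs(u_bar, sigma=1.0, M=5000, rng=rng)
```

Each row is one candidate sequence; the empirical mean, standard deviation, and near-zero cross-correlations reproduce (14) and Assumption 2.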
In Phase 2, the system state of each of the samples used for prediction and estimation is updated using the system model (1) and the input sequences randomly sampled as shown in (11). The updated system states and the randomly sampled inputs are also used to calculate the cost values $J(x_k, u_k, k)$.
In Phase 3, the sub-optimal input sequence $\tilde{\mathbf{u}}$ is derived as the weighted sample mean over the $M$ randomly sampled input sequences $\hat{\mathbf{u}}$ with weights $w(\hat{\mathbf{u}})$:

$$
\tilde{\mathbf{u}} = \frac{\sum_{m=1}^{M} w(\hat{\mathbf{u}})\hat{\mathbf{u}}}{\sum_{m=1}^{M} w(\hat{\mathbf{u}})}, \quad (16)
$$
where $w(\hat{\mathbf{u}})$ can be derived as the following equation if $\hat{Q}$ is positive definite:

$$
\begin{align*}
w(\hat{\mathbf{u}}) &= \exp\left(-\frac{J}{\lambda^2}\right) \\
&= \exp\left(-\frac{1}{2\lambda^2}\hat{\mathbf{u}}^T\hat{Q}\hat{\mathbf{u}} - \frac{1}{\lambda^2}x_{\{k|0\}}^T\hat{B}\hat{\mathbf{u}} - \frac{1}{2\lambda^2}x_{\{k|0\}}^T\hat{A}x_{\{k|0\}}\right) \\
&= \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}} - \mathbf{u}^*)^T\hat{Q}(\hat{\mathbf{u}} - \mathbf{u}^*) + \text{const}\right),
\end{align*}
\tag{17}
$$
where $\lambda$ is a positive constant. Then $E(\tilde{\mathbf{u}})$, the expected value of the sample mean (16), can be described by the following equation:
$$
E(\tilde{\mathbf{u}}) = \int w(\hat{\mathbf{u}}) \, d\hat{\mathbf{u}}. \quad (18)
$$
Note that we are interested in the expected value of the function (17), approximated by using a random variable $\hat{\mathbf{u}}$ with the PDF (15). Then, equation (18) can be rewritten as the following equation from the definition of the expectation of a function of random variables:

$$
E(\tilde{\mathbf{u}}) = \int w(\hat{\mathbf{u}}) f(\hat{\mathbf{u}}) \, d\hat{\mathbf{u}}
= (\sigma^2 \hat{Q} + \lambda^2 I)^{-1} (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}), \quad (19)
$$
where $I \in \mathbb{R}^{N \times N}$ is the identity matrix. The derivation of (19) is shown in Appendix A. Then, the variance of the sample mean $\Sigma_S$ can be expressed by the following equation:

$$
\Sigma_S = \frac{\sigma^2 \lambda^2}{M} (\sigma^2 \hat{Q} + \lambda^2 I)^{-1}, \quad (20)
$$

where $M$ is the total number of samples used for the prediction and estimation (see Appendix A for the derivation).
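To make the weighted estimate (16)-(17) concrete, the following toy sketch runs one normal-type estimation step on a simple quadratic cost (our own stand-in for $J$; all names and values are illustrative). Consistent with (19), the weighted mean lands between the sampling mean $\bar{\mathbf{u}}$ and the minimizer rather than exactly on the minimizer.

```python
import numpy as np

def estimate_step(u_bar, sigma, lam, M, cost, rng):
    """One normal-type estimate: weighted sample mean (16) with w = exp(-J/lambda^2)."""
    samples = u_bar + sigma * rng.standard_normal((M, u_bar.shape[0]))
    J = np.array([cost(u) for u in samples])
    w = np.exp(-(J - J.min()) / lam**2)  # shifting by J.min() only rescales w
    return (w[:, None] * samples).sum(axis=0) / w.sum()

rng = np.random.default_rng(1)
u_star = np.array([1.5, -0.5])                     # known minimizer of the toy cost
cost = lambda u: float(np.sum((u - u_star) ** 2))  # J(u) = |u - u*|^2
u_tilde = estimate_step(np.zeros(2), sigma=1.0, lam=1.0, M=20000, cost=cost, rng=rng)
```

With $\sigma = \lambda = 1$ and this cost (which corresponds to $\hat{Q} = 2I$ in (17)), equation (19) predicts the estimate $\tfrac{2}{3}\mathbf{u}^*$: a contraction toward the optimum rather than an exact hit.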
Next, we consider the relationship between the iteration of prediction and estimation and the convergence of the sub-optimal input sequences $\bar{\mathbf{u}}$. Updating the expected value in (11) by repeating the estimation shown in (18), and writing $\bar{\mathbf{u}}_d$ for the sub-optimal input value given by the $d$-th estimation, $\bar{\mathbf{u}}_{d+1}$ can be described as

$$
\bar{\mathbf{u}}_{d+1} = E(\tilde{\mathbf{u}}) = (\sigma^2 \hat{Q} + \lambda^2 I)^{-1} (\sigma^2 \hat{Q} \mathbf{u}^* + \lambda^2 \bar{\mathbf{u}}_d). \quad (21)
$$
If we define the error between the optimal input sequences $\mathbf{u}^*$ and the sub-optimal input $\bar{\mathbf{u}}_d$ estimated by the $d$-th estimation as $\boldsymbol{e}_d = \bar{\mathbf{u}}_d - \mathbf{u}^*$, we can describe the $(d+1)$-th estimation error as

$$
\boldsymbol{e}_{d+1} = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \boldsymbol{e}_d.
\quad (22)
$$
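The contraction (22) is easy to verify numerically. The sketch below iterates the error recursion for an arbitrary positive definite $\hat{Q}$ (random test data, not a matrix from the paper) and watches the error norm shrink.

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 4))
Q_hat = G @ G.T + 4.0 * np.eye(4)   # random real positive definite symmetric matrix
sigma, lam = 1.0, 2.0

# Omega from (23): the coefficient matrix of the error recursion (22)
Omega = np.linalg.inv((sigma**2 / lam**2) * Q_hat + np.eye(4))

e = np.ones(4)                      # arbitrary initial error e_0
norms = []
for d in range(20):
    e = Omega @ e                   # e_{d+1} = Omega e_d, eq. (22)
    norms.append(float(np.linalg.norm(e)))
```

All eigenvalues of `Omega` lie in $(0, 1)$, so `norms` decreases monotonically toward 0, anticipating Theorem 1.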
As a result of the above considerations, we obtain the following theorem on the relationship between convergence and the parameters specific to MCMPC.

Theorem 1. In (4), it is assumed that the matrix $\hat{Q}$ is a real positive definite symmetric matrix and that the unique optimal input sequence exists as shown in (10). Then, the sub-optimal input $\bar{\mathbf{u}}_d$ converges to $\mathbf{u}^*$ as $d \to \infty$.

**Proof.** The necessary and sufficient condition for the error $\boldsymbol{e}_d$ to asymptotically converge to 0 is that the absolute values of all eigenvalues of the matrix $\Omega$ shown in (23) are less than 1.
$$ \Omega = \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \qquad (23) $$

Note that for any real positive definite symmetric matrices $M_A, M_B$, the following inequality holds:

$$ \lambda_i(M_A + M_B) > \lambda_i(M_A), \qquad (24) $$

where $\lambda_i(Z)$ denotes the $i$-th eigenvalue of a matrix $Z$ (proof omitted). Based on the assumption that $\hat{Q}$ is a real positive definite symmetric matrix, the following inequality holds:

$$ \lambda_i \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right) > \lambda_i(I) = 1. \qquad (25) $$
Since $\lambda_i(Z^{-1}) = \frac{1}{\lambda_i(Z)}$ holds for any non-singular matrix, the following inequality holds:

$$ \lambda_i(\Omega) = \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) < \lambda_i(I). \qquad (26) $$

As the eigenvalues of any real positive definite symmetric matrix are positive real numbers, the absolute values of all eigenvalues of the matrix $\Omega$ are less than 1. Then, the error $e_d$ satisfies the following equation:

$$ \lim_{d \to \infty} e_d = 0. \qquad (27) $$

This means:

$$ \lim_{d \to \infty} (\bar{u}_d - u^*) = 0. \qquad (28) $$

Thus, the sub-optimal input sequence $\bar{u}_d$ converges asymptotically to $u^*$ as $d \to \infty$. $\square$
**Corollary 1.** When $\sigma \to \infty$, Eq. (26) satisfies the following equation:

$$ \lim_{\sigma \to \infty} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 0, \quad \forall i. \qquad (29) $$

Eq. (29) shows that if $\sigma \to \infty$, the first estimation result $\bar{u}_1$ satisfies $\bar{u}_1 = u^*$. Therefore, the larger $\sigma$ is, the faster the sub-optimal input sequence $\bar{u}_d$ converges to the optimal values.

Then, the variance-covariance matrix of the sample mean $\Sigma_S$ shown in Eq. (20) can be described as the following equation:

$$ \lim_{\sigma \to \infty} \Sigma_S = \frac{\lambda^2 \hat{Q}^{-1}}{M}. \qquad (30) $$

Eq. (30) means that if $\lambda$ is sufficiently small, the variance of the sub-optimal input sequence $\bar{u}_d$ is small. This observation is consistent with the results of path integral analysis; it also means that there is a trade-off between convergence and variance. Moreover, equation (30) shows that if the sample number $M$ is large, the error of the expected value $E(\bar{u})$ under the Monte-Carlo approximation is $O(1/\sqrt{M})$.
**Corollary 2.** When $\sigma \to 0$, equation (20) satisfies the following equation:

$$ \lim_{\sigma \to 0} \Sigma_S = 0. \qquad (31) $$

However, the eigenvalues of the coefficient matrix $\Omega$ in equation (22) behave as shown below:

$$ \lim_{\sigma \to 0} \lambda_i \left( \left( \frac{\sigma^2}{\lambda^2} \hat{Q} + I \right)^{-1} \right) = 1, \quad \forall i. \qquad (32) $$

These equations show that there is a trade-off between the convergence and the variance of the sample mean $\Sigma_S$. Equations (31) and (32) show that if the user chooses the variance $\sigma^2$ as small as possible to eliminate the variance of the sample mean $\Sigma_S$, the error $e_d$ from the previous estimation will remain. Moreover, if $\sigma$ is too small, the sub-optimal input sequence $\bar{u}_d$ converges slowly to the optimal values.

From Corollary 1 and Corollary 2, it is understood that the variance needs to be controlled appropriately to improve both the estimation accuracy and the convergence speed.
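Corollaries 1 and 2 can be illustrated numerically: for a fixed positive definite $\hat{Q}$ (an arbitrary diagonal test matrix here, not from the paper), the eigenvalues of $\Omega$ tend to 0 as $\sigma$ grows and to 1 as $\sigma$ shrinks.

```python
import numpy as np

def omega_eigs(Q_hat, sigma, lam):
    """Eigenvalues of Omega = ((sigma^2/lambda^2) Q_hat + I)^{-1}, eq. (23)."""
    n = Q_hat.shape[0]
    Omega = np.linalg.inv((sigma**2 / lam**2) * Q_hat + np.eye(n))
    return np.linalg.eigvalsh(Omega)

Q_hat = np.diag([1.0, 3.0, 10.0])   # arbitrary positive definite test matrix
lam = 1.0
eigs_small_sigma = omega_eigs(Q_hat, sigma=1e-3, lam=lam)  # Corollary 2 regime
eigs_large_sigma = omega_eigs(Q_hat, sigma=1e3, lam=lam)   # Corollary 1 regime
```

Eigenvalues near 1 mean the error recursion (22) barely contracts (slow convergence but small sample-mean variance by (31)); eigenvalues near 0 mean near-immediate convergence at the price of larger variance.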
### 3.2 Algorithm of TOP1 sample MCMPC

In TOP1 sample MCMPC, the optimization problem is solved by iterating the following three phases within the same control cycle.
**Phase 1**
Generating input sequences

**Phase 2**
Running forward simulation in parallel

**Phase 3**
Estimating the sub-optimal input sequences $\tilde{u}$ and updating the standard deviation $\sigma$
Phase 1 and Phase 2 are the same as in the MCMPC algorithm described above.

In Phase 3, the sub-optimal input sequence $\tilde{u}$ is described by the following equation:

$$ \tilde{u} = \arg\min_{\hat{\mathbf{u}} \in U} J(x_k, u_k, k), \qquad (33) $$

where $U$ denotes the set of all input sequences $\hat{u}$ randomly sampled in Phase 1. In addition, the standard deviation $\sigma$ is updated as described in Section 4.
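Phase 3 of TOP1 sample MCMPC replaces the weighted mean (16) with the plain argmin (33). A minimal sketch on a toy quadratic cost (all names and values are illustrative):

```python
import numpy as np

def top1_estimate(samples, cost):
    """Pick the sampled sequence with the lowest cost, eq. (33)."""
    J = np.array([cost(u) for u in samples])
    return samples[int(np.argmin(J))]

rng = np.random.default_rng(3)
u_star = np.array([0.7, -1.2])                     # minimizer of the toy cost
cost = lambda u: float(np.sum((u - u_star) ** 2))
samples = rng.standard_normal((5000, 2))           # Phase 1 candidates around u_bar = 0
u_tilde = top1_estimate(samples, cost)
```

Unlike the weighted mean, the TOP1 estimate is always an actual sampled sequence, so it inherits whatever constraints the samples satisfy.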
### 3.3 Model predictive control algorithm

So far, we have described how to repeat the prediction within one control cycle. In the model predictive control we propose, the prediction is repeated every control cycle, and the sub-optimal input predicted in the previous control cycle is re-optimized. Hence, the sub-optimal input in the $k$-th control cycle corresponds to the result of $k \times d$ iterations of prediction.

## 4. SAMPLING VARIANCE UPDATE METHODS

In this section, we describe two types of variance update methods that are applied at each iteration of the prediction. The first variance update method used in this study can be described as the following equation:
$$ \sigma_d = \gamma^d \sigma_0, \qquad (34) $$

where $\gamma$ is a positive constant, $d$ is the number of iterations, and $\sigma_0$ is a parameter representing the initial standard deviation, which should be designed by the user. Equation (34) is inspired by the cooling schedule used in the simulated annealing (SA) method. In SA, it is guaranteed that the estimated value reaches the optimal solution when the schedule is chosen appropriately and the system is cooled enough times; for example, with the logarithmic schedule $\sigma_d = \sigma_0/\log(1+d)$, the estimated value reliably converges to the optimal value. However, this cooling schedule is too slow, so in practice the cooling rate $\gamma \in [0.8, 1.0)$ is generally used (Rosen and Nakano, 1994).
The second method can be described by the following equation:

$$\sigma_d = \sqrt{\frac{1}{\sum_{m=1}^{M} w_{d-1}(\hat{\mathbf{u}})}}. \quad (35)$$

Equation (35) corresponds to the error variance of equation (16), which can be calculated based on the law of error propagation. Note that equation (35) is a variance update method that reflects the quality of the estimation results. In the rest of this study, we refer to the former method as the geometric cooling method and to the latter as the latest sample variance method.
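Both update rules are one-liners; the sketch below implements (34) and (35) with arbitrary test weights (the weight values are illustrative, not from the simulations).

```python
import numpy as np

def geometric_cooling(sigma0, gamma, d):
    """Geometric cooling schedule, eq. (34)."""
    return gamma**d * sigma0

def latest_sample_variance(weights):
    """Std-dev update from the previous iteration's weights, eq. (35)."""
    return float(np.sqrt(1.0 / np.sum(weights)))

schedule = [geometric_cooling(1.0, 0.9, d) for d in range(5)]
sigma_next = latest_sample_variance(np.array([0.5, 2.0, 1.5]))
```

The geometric schedule shrinks regardless of the data, whereas (35) shrinks only when the accumulated weights are large, i.e., when many samples achieved low cost.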
## 5. NUMERICAL SIMULATION

In this section, we first show the models used in two different numerical simulations. Next, we show the simulation results obtained using normal type MCMPC, which show the effect of the variance $\sigma$ on convergence. Furthermore, we show the results of applying the two variance update methods described in Section 4 to normal type MCMPC and TOP1 sample MCMPC. Finally, we show the results of applying the methods to the swing-up stabilization of a double inverted pendulum, which is a type of nonlinear system.
### 5.1 Simulation models

**Example 1.** As the first example, we consider the optimal control problem when MCMPC is applied to a three-dimensional unstable discrete-time linear system that can be described by the following equation:

$$ \begin{aligned} x_{k+1} &= Ax_k + Bu_k, \\ x_k &\in \mathbb{R}^3, \quad u_k \in \mathbb{R}^1, \end{aligned} \quad (36) $$

where the coefficient matrices $A$ and $B$ are as shown in the following equations:

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1.1364 & 0.273 \\ 0 & -0.1339 & -0.1071 \end{bmatrix} \quad (37)$$

$$B = \begin{bmatrix} 0 \\ 0 \\ 0.0893 \end{bmatrix}. \quad (38)$$

The eigenvalues of $A$ are $\Lambda = [0, -1.1059, -0.1376]^T$. Since one of the eigenvalues of $A$ lies outside the unit circle, system (36) is unstable. We then consider an optimal control problem for system (36) with prediction horizon $N = 15$, initial state $x_0 = [2.98, 0.7, 0.0]^T$, state weight matrix $Q$, and input weight $R$ as follows:

$$Q = \operatorname{diag}(2.0, 1.0, 0.1), \quad R = 1. \quad (39)$$

The optimal input sequence $\mathbf{u}^*$ can then be easily calculated using equation (3). In this study, we show only the analytical solution $u_0^* = -2.69$ used in the following discussion.
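The instability claim can be checked directly from (37): the spectral radius of $A$ exceeds 1.

```python
import numpy as np

# Coefficient matrix A of Example 1, eq. (37)
A = np.array([[0.0,  1.0,     0.0],
              [0.0, -1.1364,  0.273],
              [0.0, -0.1339, -0.1071]])
spectral_radius = float(np.max(np.abs(np.linalg.eigvals(A))))
```

One eigenvalue has magnitude slightly above 1, so trajectories of system (36) diverge without control.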
**Example 2.** As the second example, we consider the swing-up stabilization of an arm type double inverted pendulum.

Table 1. Parameters of arm type double pendulum

<table><thead><tr><td>Name</td><td>Symbol (·)</td><td>Value</td></tr></thead><tbody><tr><td>Angle of the first link</td><td>$\theta_1$ (rad)</td><td>Variable</td></tr><tr><td>Angle of the second link</td><td>$\theta_2$ (rad)</td><td>Variable</td></tr><tr><td>First link drive torque</td><td>$\tau_1$ (N·m)</td><td>Variable</td></tr><tr><td>Mass of first link</td><td>$m_1$ (kg)</td><td>-</td></tr><tr><td>Mass of second link</td><td>$m_2$ (kg)</td><td>$9.60 \times 10^{-2}$</td></tr><tr><td>Coefficient of friction</td><td>$\mu_2$ (kg·m²s⁻¹)</td><td>$1.26 \times 10^{-4}$</td></tr><tr><td>Gravity acceleration</td><td>$g$ (ms⁻²)</td><td>9.81</td></tr><tr><td>Length of first link</td><td>$L_1$ (m)</td><td>$2.27 \times 10^{-1}$</td></tr><tr><td>Length of second link</td><td>$l_2$ (m)</td><td>$1.95 \times 10^{-1}$</td></tr><tr><td>Moment of inertia</td><td>$J_2$ (kg·m²)</td><td>$1.10 \times 10^{-3}$</td></tr><tr><td>Positive constant</td><td>$a_1$</td><td>6.29</td></tr><tr><td>Positive constant</td><td>$b_1$</td><td>$1.64 \times 10^1$</td></tr></tbody></table>
Fig. 1. Model of arm type double pendulum

The state equations of the arm type double inverted pendulum shown in Fig. 1 can be described by the following two equations:

$$\ddot{\theta}_1(t) = -a_1\dot{\theta}_1(t) + b_1u(t) \quad (40)$$

$$\alpha_1 \cos \theta_{12}(t) \cdot \ddot{\theta}_1(t) + \alpha_2 \ddot{\theta}_2(t) = \alpha_1 \dot{\theta}_1^2(t) \sin \theta_{12}(t) + \alpha_3 \sin \theta_2(t) + \mu_2 \dot{\theta}_1(t) - \mu_2 \dot{\theta}_2(t) \quad (41)$$

The time-invariant parameters $\alpha_1$, $\alpha_2$, and $\alpha_3$ and the variable $\theta_{12}$ in equations (40) and (41) are as follows:

$$\begin{align} \alpha_1 &= m_2 L_1 l_2, & \alpha_2 &= J_2 + m_2 l_2^2, \\ \alpha_3 &= m_2 l_2 g, & \theta_{12}(t) &= \theta_1(t) - \theta_2(t). \end{align} \quad (42)$$
The parameters of equations (40) to (42) and Fig. 1 are listed in Table 1. We then consider an optimal control problem for this example with prediction horizon $N = 80$, the initial state shown in equation (43), and the state weight matrix $Q$ and input weight $R$ shown in equation (44).

$$[\theta_1(0), \dot{\theta}_1(0), \theta_2(0), \dot{\theta}_2(0)] = [\pi, 0, \pi, 0]. \quad (43)$$

$$Q = \operatorname{diag}(5.0, 0.01, 5.0, 0.01), \quad R = 1. \quad (44)$$
### 5.2 Trade-off between precision and convergence

In this subsection, we consider the relationship between the variance $\sigma$ of the sampling distribution and convergence using the result of applying normal type MCMPC to Example 1. Fig. 2 shows the average and the $3\sigma$ range of the simulation results of 30 independent trials under each condition.
Table 2. Parameters (for Example 1)

<table><thead><tr><th>Name</th><th>Symbol</th><th>Value</th></tr></thead><tbody><tr><td>Num of predictive steps</td><td>N</td><td>15 step</td></tr><tr><td>Num of samples</td><td>M</td><td>5,000</td></tr><tr><td>Num of iterations</td><td>d</td><td>100</td></tr><tr><td>Variance</td><td>σ<sup>2</sup></td><td>Variable value</td></tr><tr><td>Positive constant</td><td>λ</td><td>6.3</td></tr></tbody></table>

Fig. 2. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1
Table 2 lists the specific parameters of MCMPC used in this simulation to confirm the relationship between the variance $\sigma$ and convergence. In Fig. 2, we compare the results as $\sigma$ is gradually increased through 0.5, 1.0, 2.0, and 4.0. As $\sigma$ increases, the error $e_0$ converges to 0 with fewer iterations. However, it can also be confirmed that the variation in the error $e_0$ grows as the variance $\sigma$ increases. This result is a good example showing that the variance $\sigma$ of the sampling distribution produces a trade-off between the speed of convergence and the accuracy of the estimated sub-optimal inputs at the time of convergence.

From the results shown in Fig. 2, it is necessary to update the variance $\sigma$ appropriately to obtain the optimal inputs faster and more accurately.
### 5.3 Comparison of sampling variance update methods

Fig. 3 shows the results obtained using the geometric cooling method shown in (34). We plotted the average of 30 independent trials and the $3\sigma$ range in Fig. 3. The upper figure shows the result obtained using normal type MCMPC, whereas the lower figure shows the result obtained using TOP1 sample MCMPC. We determined $\gamma$ in equation (34) using the following equation:

$$ \gamma = \exp \left( \frac{1}{D} \log \left( \frac{\delta}{\sigma_0} \right) \right), \quad (45) $$
where $D$ is the number of iterations, $\sigma_0$ is the initial standard deviation of the sampling distribution, and $\delta$ is the standard deviation of the sampling distribution used in the $D$-th iteration. In this simulation, the conditions $D = 100$ and $\delta = 10^{-5}$ were fixed, and the value of $\sigma_0$ was changed from 0.5 to 4.0. In the upper figure in Fig. 3, it can be confirmed that the error $e_0$ may or may not converge to 0 depending on the initial value $\sigma_0$. On the contrary, in the lower figure in Fig. 3, the error $e_0$ converges to 0 for any initial value. In either case, the variation in the estimated sub-optimal input can be reduced. When normal type MCMPC was applied, the error $e_0$ did not converge to 0 when the initial value $\sigma_0$ was set considerably small, because $\sigma_d$ converged earlier than the error $e_0$.

Fig. 3. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the geometric cooling method. (This figure shows the mean and $3\sigma$ range of 30 trials.)
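Equation (45) simply fixes $\gamma$ so that the geometric schedule (34) reaches $\delta$ after exactly $D$ iterations; a short check under the values used here (with an illustrative $\sigma_0$ from the tested range):

```python
import numpy as np

def cooling_rate(sigma0, delta, D):
    """gamma from eq. (45): cool from sigma0 down to delta in D iterations."""
    return float(np.exp(np.log(delta / sigma0) / D))

D, delta, sigma0 = 100, 1e-5, 2.0
gamma = cooling_rate(sigma0, delta, D)
sigma_D = gamma**D * sigma0          # eq. (34) evaluated at d = D
```

By construction $\sigma_D = \delta$, and the resulting $\gamma$ falls inside the practical range $[0.8, 1.0)$ mentioned in Section 4.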
Fig. 4 shows the results obtained by applying the latest sample variance method shown in (35). In the upper figure, which shows the result of applying normal type MCMPC, it can be confirmed that the error $e_0$ did not converge because $\sigma$ converged earlier than the error $e_0$. Alternatively, when TOP1 sample MCMPC is applied, as shown in the lower figure in Fig. 4, both the error $e_0$ and its variation converged near 0.

The results shown in Fig. 3 and Fig. 4 indicate that the two variance update methods proposed in this study cannot improve the trade-off between convergence speed and estimation accuracy when normal type MCMPC is applied. However, when the update method shown in (34) is applied, choosing an appropriate (i.e., sufficiently large) initial value can improve the trade-off. On the other hand, in the case of TOP1 sample MCMPC, either updating method reliably converges to the optimal solution if sufficient iterations are taken. This means that TOP1 sample MCMPC has a high affinity with any distribution update method.
### 5.4 Application to a nonlinear system

In this section, we show the results of applying the analysis developed so far to a nonlinear system. The discussion of convergence for the linear system can be applied to a nonlinear system that can be linearly approximated around the optimal solution. The system model and cost function are shown in Example 2. The parameters of the controller used for this simulation are shown in Table 3. We set the initial standard deviation to the lower bound given by

$$ \sigma_0 \geq \frac{u_{max} - u_{min}}{6}. \quad (46) $$

The method of determining $\sigma_0$ as in equation (46) is also used in Nakatani and Date (2019).

Fig. 4. Effect of $\sigma$ on estimation error $e_0 = \tilde{u}_0 - u_0^*$ in Example 1 when using the latest sample variance method. (This figure shows the mean and $3\sigma$ range of 30 trials.)

Fig. 5 shows the time responses of $\theta_1, \theta_2, \dot{\theta}_1, \dot{\theta}_2$, plotting the average value of 30 trials and the $3\sigma$ range. In the figure, (a) corresponds to the result of applying TOP1 sample MCMPC, and (b) to the result of applying normal type MCMPC. When the variance update methods considered in this study were applied to normal type MCMPC, neither method achieved swing-up stabilization. For this reason, the result shown in Fig. 5 is that of applying normal type MCMPC without variance updating. Moreover, the result of TOP1 sample MCMPC was obtained using the variance update method shown in equation (34). In addition, the variance used in this simulation was the one with the best performance among five simulations using $\sigma_0^2 = 0.5, 1.0, 2.0, 3.0, 4.0$ in normal type MCMPC. Both controllers stabilized the swing-up in approximately 2.0 s after the start of control.
The upper figures in Fig. 6 and Fig. 7 show the input sequences. Immediately after the start of control, TOP1 sample MCMPC selects the smallest input that satisfies the input constraints, whereas normal type MCMPC selects a conservative input. The lower figures in Fig. 6 and Fig. 7 show the value of the cost function calculated from the input sequences predicted in each control cycle. The smaller the value shown in Fig. 6 in each control cycle, the better the control performance. According to these results, TOP1 sample MCMPC demonstrates superior control performance. Moreover, this result was unchanged when the initial standard deviation $\sigma_0$ and the variance update method were changed.

In normal type MCMPC, when the variance $\sigma$ or the variance update method was changed, the control performance deteriorated or the swing-up could not be stabilized due to the trade-off described in subsection 3.1.
Table 3. Parameters (for Example 2)

<table><thead><tr><td>Name</td><td>Value</td></tr></thead><tbody><tr><td>Simulation time</td><td>5.0 (s)</td></tr><tr><td>Control cycle</td><td>100 (Hz)</td></tr><tr><td>Prediction horizon</td><td>0.8 (s)</td></tr><tr><td>Num of predictive steps</td><td>80 step</td></tr><tr><td>Num of samples</td><td>5,000</td></tr><tr><td>Num of iterations</td><td>100</td></tr><tr><td>σ<sup>2</sup><sub>0</sub> or σ<sup>2</sup></td><td>1.0</td></tr><tr><td>λ<sup>2</sup></td><td>40</td></tr><tr><td>γ</td><td>0.9</td></tr><tr><td>Input constraint</td><td>-3.0 ≤ u(t) ≤ 3.0 (V)</td></tr></tbody></table>

Fig. 5. Simulation result ((a) TOP1 sample MCMPC vs (b) normal type MCMPC). Top left: time response of $\theta_1$. Top right: time response of $\theta_2$. Bottom left: time response of $\dot{\theta}_1$. Bottom right: time response of $\dot{\theta}_2$.

Fig. 6. Top: simulation result of input sequences. Bottom: cost value calculated in each control cycle. (This figure shows the mean and $3\sigma$ range of 30 trials.)

## 6. CONCLUSION

Herein, we examined the relationship between the convergence of MCMPC and user-determinable parameters. It was verified analytically that the variance $\sigma$ of the sampling distribution involves a trade-off between the convergence speed and the accuracy of estimation. Next, we proposed two types of variance update methods and TOP1 sample MCMPC to overcome this trade-off problem. Finally, we performed numerical simulations and discussed the effects of applying the variance update methods and TOP1 sample MCMPC. We also showed an example of a numerical simulation applied to a nonlinear system and examined the applicability of the proposed analysis to the control of nonlinear systems.
Fig. 7. Top: simulation result of input sequences. Bottom: cost value calculated in each control cycle. (This figure shows the result of one trial out of 30 trials.)

REFERENCES
Cairano, S.D. and Kolmanovsky, I.V. (2019). Automotive applications of model predictive control. In *Handbook of Model Predictive Control*, 493–527. Springer International Publishing, Cham.

Garcia, C.E., Prett, D.M., and Morari, M. (1989). Model predictive control: Theory and practice—a survey. *Automatica*, **25**, 335–348.

Kappen, H.J. (2007). An introduction to stochastic control theory, path integrals and reinforcement learning. *Proc. 9th Granada Seminar on Computational Physics: Cooperative Behavior in Neural Systems*, 149–181.

Nanno, M. and Ohtsuka, T. (2010). Nonlinear model predictive control for vehicle collision avoidance using C/GMRES algorithm. Presented at the 2010 IEEE International Conference on Control Applications, Yokohama, Japan, September 8–10.

Nakatani, S. and Date, H. (2019). Swing up control of inverted pendulum on a cart with collision by Monte Carlo model predictive control. *2019 58th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)*, 1050–1055.

Namekawa, T. (2012). Distributed and predictive control for smart grids. *Journal of the Society of Instrument and Control Engineers*, **51**, 62–68.

Ohtsuka, T. (2004). A continuation/GMRES method for fast computation of nonlinear receding horizon control. *Automatica*, **40**, 563–574.

Ohyama, S. and Date, H. (2017). Parallelized nonlinear model predictive control on GPU. *2017 11th Asian Control Conference (ASCC)*, Gold Coast, QLD, 1620–1625.

Rosen, E.B. and Nakano, R. (1994). Simulated annealing: Basics and recent topics on simulated annealing [in Japanese]. *Journal of Japanese Society for Artificial Intelligence*, 365–372.

Satoh, S., Kappen, H.J., and Saeki, M. (2017). An iterative method for nonlinear stochastic optimal control based on path integrals. *IEEE Transactions on Automatic Control*, **62**, 262–276.

Williams, G., Aldrich, A., and Theodorou, E. (2015). Model predictive path integral control using covariance variable importance sampling. arXiv preprint arXiv:1509.01149.

Williams, G., Drews, P., Goldfain, B., Rehg, J.M., and Theodorou, E.A. (2016). Aggressive driving with model predictive path integral control. *IEEE International Conference on Robotics and Automation (ICRA)*, Stockholm, Sweden, 1433–1440.
Appendix A. DERIVATION OF THE EXPECTATION AND VARIANCE OF THE SAMPLE MEAN

In this appendix, we derive the analytical solution (21) from Eq. (18). Substituting the results of Eq. (15) and Eq. (17) into Eq. (18), the expectation can be transformed as:

$$
\begin{align*}
E(\tilde{\mathbf{u}}) &= \bar{C}_1 \int \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}}-\mathbf{u}^*)^T \hat{\mathcal{Q}} (\hat{\mathbf{u}}-\mathbf{u}^*) - \frac{1}{2}(\hat{\mathbf{u}}-\mathbf{u}^-)^T \Sigma^{-1} (\hat{\mathbf{u}}-\mathbf{u}^-)\right) d\hat{\mathbf{u}} \\
&= \bar{C}_1 \int \exp\left(-\frac{1}{2\lambda^2}(\hat{\mathbf{u}}-\mathbf{u}^*)^T \hat{\mathcal{Q}} (\hat{\mathbf{u}}-\mathbf{u}^*) - \frac{1}{2\sigma^2}(\hat{\mathbf{u}}-\mathbf{u}^-)^T (\hat{\mathbf{u}}-\mathbf{u}^-)\right) d\hat{\mathbf{u}} \\
&= \bar{C}_2 \int \exp\left(-\hat{\mathbf{u}}^T \left(\frac{1}{2\lambda^2}\hat{\mathcal{Q}} + \frac{1}{2\sigma^2}I\right)\hat{\mathbf{u}} + \frac{1}{\lambda^2}(\mathbf{u}^*)^T \hat{\mathcal{Q}} \hat{\mathbf{u}} + \frac{1}{\sigma^2}(\mathbf{u}^-)^T \hat{\mathbf{u}}\right) d\hat{\mathbf{u}} \\
&= \bar{C}_3 \int \exp\left(-\frac{1}{2\lambda^2\sigma^2}\hat{\mathbf{u}}^T \left(\sigma^2\hat{\mathcal{Q}} + \lambda^2 I\right)\hat{\mathbf{u}} + \frac{1}{\lambda^2\sigma^2}\left(\sigma^2(\mathbf{u}^*)^T \hat{\mathcal{Q}} + \lambda^2(\mathbf{u}^-)^T\right)\hat{\mathbf{u}}\right) d\hat{\mathbf{u}} \\
&= \bar{C}_4 \int \exp\left(-\frac{1}{2\lambda^2\sigma^2}(\hat{\mathbf{u}}-\bar{\mathbf{u}})^T \left(\sigma^2\hat{\mathcal{Q}} + \lambda^2 I\right)(\hat{\mathbf{u}}-\bar{\mathbf{u}})\right) d\hat{\mathbf{u}}
\end{align*}
(A.1)
$$

where $\bar{C}_1$, $\bar{C}_2$, $\bar{C}_3$ and $\bar{C}_4$ are constants collecting the terms independent of $\hat{\mathbf{u}}$ that arise while rearranging the exponent into quadratic form, and $\Sigma = \sigma^2 I$ is used in the second line. We define the exponent on the fourth line of Eq. (A.1) as $g$, and obtain its stationary point $\bar{\mathbf{u}}$ by partial differentiation of $g$ with respect to $\hat{\mathbf{u}}$:

$$
\left. \frac{\partial g}{\partial \hat{\mathbf{u}}} \right|_{\hat{\mathbf{u}}=\bar{\mathbf{u}}} = \left(\sigma^2 \hat{\mathcal{Q}} + \lambda^2 I\right) \bar{\mathbf{u}} - \left(\sigma^2 \hat{\mathcal{Q}} \mathbf{u}^* + \lambda^2 \mathbf{u}^-\right) = 0. \quad (\text{A.2})
$$

Solving Eq. (A.2) for $\bar{\mathbf{u}}$ yields $\bar{\mathbf{u}} = (\sigma^2\hat{\mathcal{Q}} + \lambda^2 I)^{-1}(\sigma^2\hat{\mathcal{Q}}\mathbf{u}^* + \lambda^2\mathbf{u}^-)$, which agrees with the result of Eq. (21).

Next, we find the variance of the sample mean using Eq. (A.1). Let $\hat{\mathbf{u}}$ be a random variable that follows a multivariate normal distribution with expected value $\bar{\mathbf{u}}$ and covariance $\Sigma_S$. Comparing the coefficients of the PDF of this distribution with the integrand on the fifth line of Eq. (A.1) gives:

$$
\frac{1}{2\lambda^2\sigma^2} \left(\sigma^2\hat{\mathcal{Q}} + \lambda^2 I\right) = \frac{1}{2}\Sigma_S^{-1}, \qquad (\text{A.3})
$$

so that $\Sigma_S = \lambda^2\sigma^2\left(\sigma^2\hat{\mathcal{Q}} + \lambda^2 I\right)^{-1}$.
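The completed square in Eq. (A.1) determines both the mean in Eq. (A.2) and the covariance in Eq. (A.3), and the algebra can be sanity-checked numerically: the exponent before and after completing the square may differ only by a constant independent of the integration variable. A minimal NumPy sketch (all parameter values and the matrix below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

# Numerical sanity check of the completed square in Eq. (A.1): the exponent on
# the second line and the completed-square exponent on the fifth line may
# differ only by a constant independent of u-hat.
rng = np.random.default_rng(0)
n = 3
lam, sigma = 0.7, 1.3                       # illustrative lambda, sigma
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                 # a symmetric positive-definite Q-hat
u_star = rng.standard_normal(n)             # u*
u_prev = rng.standard_normal(n)             # u^-

# Mean (A.2) and covariance (A.3) obtained by completing the square
M = sigma**2 * Q + lam**2 * np.eye(n)
u_bar = np.linalg.solve(M, sigma**2 * Q @ u_star + lam**2 * u_prev)
Sigma_S = lam**2 * sigma**2 * np.linalg.inv(M)

def g(u):
    # exponent on the second line of Eq. (A.1)
    return (-(u - u_star) @ Q @ (u - u_star) / (2 * lam**2)
            - (u - u_prev) @ (u - u_prev) / (2 * sigma**2))

def g_sq(u):
    # completed-square exponent on the fifth line of Eq. (A.1)
    return -0.5 * (u - u_bar) @ np.linalg.inv(Sigma_S) @ (u - u_bar)

# The difference must be the same constant at every test point
diffs = [g(u) - g_sq(u) for u in rng.standard_normal((5, n))]
print(diffs)
```

The constant difference is exactly the term absorbed into $\bar{C}_4$.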
samples_new/texts_merged/503850.md
---PAGE_BREAK---
QUESTION 1

$A$ = the complement of $\angle B$ degrees

$B$ = the supplement of $\angle C$ degrees

$C$ = the supplement of the complement of $\angle D$ degrees

$D$ = the central angle of a circle with radius 4 with corresponding arc length of $\pi$

$$\text{Find } A + B + C + D$$

---PAGE_BREAK---

QUESTION 2

A = the number of diagonals of an icosagon (20-sided polygon)

B = the area of an isosceles trapezoid with base lengths 4 and 28 and a height of 5

C = the height of a rectangular prism with a length of 20, a width of 9, and a space diagonal of 25

D = the volume of a hemisphere with radius 6

Find $A+B+\frac{D}{C}$

---PAGE_BREAK---

QUESTION 3

Puneet lives in a box with dimensions $20ft \times 15ft \times 10ft$. There is a door with dimensions $7ft \times 4ft$. Each can of paint can cover $100 ft^2$.

A = the number of paint cans needed to paint the door

B = the number of paint cans needed to paint Puneet's house given that he paints the entire surface area of the house

C = the length of the longest sandwich Puneet can fit into his box

D = the ratio of the volume of the box to the surface area of the box

Find $AC + BD$

---PAGE_BREAK---

QUESTION 4

A semicircle is inscribed in an equilateral triangle so that the diameter rests on one side of the triangle and is tangent to the other two sides. Let A be the radius of the semicircle when the side length of the triangle equals 24.

Two poles of height 6 ft. and 8 ft. are located 12 ft. away from each other. Jenny attaches two cables that connect the top of one pole to the bottom of the other. Let B be the height of the intersection of the two cables from the ground.

Jenny likes pie and $\pi$. She buys herself a two-dimensional pie with radius 14 in. Let C be the area of her pie in $in^2$.

Find $A + B + C$.

---PAGE_BREAK---

QUESTION 5

A = the length of the inradius of a triangle with side lengths 7, 8, and 9

B = the length of the circumradius of a triangle with side lengths 10, 10, and 14

C = the area of a triangle with side lengths 14, 60, and 66

D = the area of a triangle with side lengths 12 and 15 and included angle of 60°

$$
\text{Hint: Area} = \frac{1}{2} ab \sin C \text{ where } C \text{ is the angle between } a \text{ and } b
$$

Find $A\sqrt{5} + B\sqrt{51} - \frac{C}{\sqrt{2}} + \frac{D}{\sqrt{3}}$

---PAGE_BREAK---

QUESTION 6

A = the sum of the coordinates of the centroid of a triangle with vertices (5, 7), (-1, 5), and (8, 0)

B = the slope of the median from vertex B of a triangle with vertices A(31, 7), B(19, 21), C(25, 12)

C = the measure of ∠D in degrees in △DOG if the opposite side length is $4\sqrt{2}$, ∠G equals 45° and DO equals 8

Find A+B+C.

---PAGE_BREAK---

QUESTION 7

(Figure not drawn to scale. A quadrilateral is drawn over two parallel lines.)

What is the sum of $\angle B$ and $\angle F$ if $\angle A = 42^\circ$, $\angle C = 79^\circ$, $\angle E = 135^\circ$, and $\angle D = 51^\circ$?

---PAGE_BREAK---

QUESTION 8

Two spheres are inscribed in a rectangular box so that each sphere is tangent to five sides of the box and to the other sphere. If the radius of each of the spheres is 4 in, then the volume of the box is A in³.

If a frustum of a cone has radii 6 in and 8 in and a height of 4 in, then the lateral surface area is Bπ in².

An ant is sitting on the center of the top face of a right, cylindrical can of soup with radius 4 in and height 6π in. The ant wants to get down to the ground, so it takes the shortest path to the edge of the face and climbs down the side of the can. The ant spirals down the can, rotating around once and arriving at the point directly underneath its position on the top edge. The length of the path the ant took from its original position to the ground is C in.

Find A+B+C.

---PAGE_BREAK---

QUESTION 9

Add the values in the parentheses to $x$ if they are true. Subtract them from $x$ if they are false. Begin with $x = 0$.

(5) The incenter of a triangle is the center of its inscribed circle

(-3) The circumcenter of a triangle is equidistant from the sides of the triangle

(-2) The orthocenter is the intersection of the altitudes of a triangle

(7) The centroid is the intersection of the medians of a triangle

(10) Euler's line is made up of the orthocenter, circumcenter, and the incenter

After performing these operations, what is $x$?

---PAGE_BREAK---

QUESTION 10

A cylinder with radius 3 and height $\frac{9}{4}$ is inscribed in a cone with radius 8.

$A$ = the volume of the cylinder

$B$ = the height of the cone

$C$ = the volume of the cone

Find $\frac{AC}{B}$.

---PAGE_BREAK---

QUESTION 11

Siddarth is obsessed with the song Bang by Griana Arande. Jeewoo, unfortunately, has bad music taste and likes All the Single Men by Jeyonce. Both songs are 3 minutes long. If Siddarth starts listening to Bang at a random time between 12:00 p.m. and 12:30 p.m., and Jeewoo starts listening to All the Single Men at a random time between 12:00 p.m. and 12:30 p.m., what is the probability that their songs are both playing at some time between 12:00 and 12:30 p.m.?

---PAGE_BREAK---

QUESTION 12

A = the number of sides of an undecagon

B = the number of faces of a hexahedron

C = the number of vertices of a figure with 12 edges and 8 faces

D = the number of space diagonals in a dodecahedron

Find (A+D) - (B+C)

---PAGE_BREAK---

QUESTION 13

A = sin 60°

B = sin 30°

C = cos 45°

D = tan 60°

Find ABCD.

---PAGE_BREAK---

QUESTION 14

(The figure is not drawn to scale.)

The lengths of *a* and *b* are 6 and 4, respectively. How many possible combinations of (*c*, *d*) exist if *c* and *d* are integer lengths?
samples_new/texts_merged/5396754.md
---PAGE_BREAK---
Monte Carlo Sampling in Path Space: Calculating Time Correlation Functions by Transforming Ensembles of Trajectories

Cite as: AIP Conference Proceedings 690, 192 (2003); https://doi.org/10.1063/1.1632129

Published Online: 06 November 2003

Christoph Dellago and Phillip L. Geissler
---PAGE_BREAK---
Monte Carlo Sampling in Path Space: Calculating Time Correlation Functions by Transforming Ensembles of Trajectories

Christoph Dellago\* and Phillip L. Geissler†

\*Institute for Experimental Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna, Austria

†Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA 02139
**Abstract.** Computational studies of processes in complex systems with metastable states are often complicated by a wide separation of time scales. Such processes can be studied with transition path sampling, a computational methodology based on an importance sampling of reactive trajectories capable of bridging this time scale gap. Within this perspective, ensembles of trajectories are sampled and manipulated in close analogy to standard techniques of statistical mechanics. In particular, the population time correlation functions appearing in the expressions for transition rate constants can be written in terms of free energy differences between ensembles of trajectories. Here we calculate such free energy differences with thermodynamic integration, which, in effect, corresponds to reversibly changing between ensembles of trajectories.
INTRODUCTION

Transition path sampling is a computational technique developed by us and others to study rare events in complex systems [1, 2, 3]. Although rare, such events are crucially important in many condensed matter systems. Nucleation of first order phase transitions, transport in solids, chemical reactions in solution, and protein folding all occur on time scales which are long compared to basic molecular motions. Transition path sampling, which is based on an importance sampling in trajectory space, can provide insights into the mechanism and kinetics of processes involving dynamical bottlenecks. In the following we will give a brief overview of this methodology, focusing on the calculation of reaction rate constants. In this framework reaction rates are related to the reversible work required to manipulate ensembles of trajectories. As a consequence, rate constants can be calculated using free energy estimation methods familiar from equilibrium statistical mechanics, such as umbrella sampling and thermodynamic integration. For an in-depth treatment of all aspects of transition path sampling we refer the reader to the review articles [2] and [3].

In the path sampling approach dynamical pathways of length $t$ are represented by ordered sequences of $L = t/\Delta t + 1$ states, $x(t) \equiv \{x_0, x_{\Delta t}, x_{2\Delta t}, \dots, x_t\}$. Consecutive states are separated by a time increment $\Delta t$. Such dynamical pathways can be deterministic trajectories as generated by Newtonian dynamics or stochastic trajectories as constructed from Langevin dynamics or from Monte Carlo simulations. For Markovian single step transition probabilities $p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t})$ the statistical weight $\mathcal{P}[x(t)]$ of a particular
---PAGE_BREAK---
trajectory $x(t)$ is

$$ \mathcal{P}[x(t)] = \rho(x_0) \prod_{i=0}^{L-1} p(x_{i\Delta t} \rightarrow x_{(i+1)\Delta t}), \quad (1) $$

where $\rho(x_0)$ is the distribution of initial states $x_0$. In many applications, $\rho(x_0)$ will be an equilibrium distribution such as the canonical distribution, but non-equilibrium distributions of initial conditions are possible as well.
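Equ. (1) can be made concrete for a simple stochastic dynamics. The sketch below (an illustrative construction, not code from the paper) evaluates the logarithm of the statistical weight of a discretized overdamped random walk, with Gaussian single-step transition probabilities and standard-normal initial conditions; all parameter values are arbitrary:

```python
import numpy as np

# Log statistical weight of a stochastic trajectory per Equ. (1):
# ln P[x(t)] = ln rho(x_0) + sum_i ln p(x_i -> x_{i+1}),
# here with p(x -> x') = N(x'; x, 2*D*dt) and rho = N(0, 1).
D, dt = 0.5, 0.01          # illustrative diffusion constant and time step

def log_path_weight(traj):
    x = np.asarray(traj, dtype=float)
    log_rho0 = -0.5 * x[0]**2 - 0.5 * np.log(2.0 * np.pi)   # ln rho(x_0)
    var = 2.0 * D * dt                                      # step variance
    steps = np.diff(x)                                      # x_{i+1} - x_i
    log_p = -steps**2 / (2.0 * var) - 0.5 * np.log(2.0 * np.pi * var)
    return log_rho0 + log_p.sum()                           # ln P[x(t)]

rng = np.random.default_rng(1)
path = np.empty(201)
path[0] = rng.standard_normal()
path[1:] = path[0] + np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D * dt), 200))
print(log_path_weight(path))
```

In practice only ratios of such weights enter the Monte Carlo acceptance probabilities discussed below, so overall normalization constants cancel.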
In applying transition path sampling one is usually interested in finding dynamical pathways connecting stable (or metastable) states, which we name *A* and *B*. Then, the probability of a *reactive* pathway, i.e., of a pathway starting in *A* and ending in *B*, is

$$ \mathcal{P}_{AB}[x(t)] = h_A(x_0) \mathcal{P}[x(t)] h_B(x_t) / Z_{AB}(t), \quad (2) $$

where $h_A(x)$ and $h_B(x)$ are the population functions for regions *A* and *B*. That is, $h_A(x)$ is 1 if $x$ is in *A* and 0 otherwise, and $h_B(x)$ is defined analogously. The factor $Z_{AB}$,

$$ Z_{AB}(t) = \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t), \quad (3) $$

normalizes the reactive path probability, and the notation $\int \mathcal{D}x(t)$ indicates an integration over all time slices of the pathway. The quantity $Z_{AB}(t)$ can be viewed as a partition function characterizing the ensemble of all reactive pathways. This analogy between conventional equilibrium statistical mechanics and the statistics of trajectories will be important in the discussion of reaction kinetics in the next section. The distribution $\mathcal{P}_{AB}[x(t)]$, which weights trajectories in the *transition path ensemble*, is a statistical description of all dynamical pathways connecting regions *A* and *B*.

To sample the transition path ensemble we have developed several Monte Carlo simulation techniques [4, 5]. In these algorithms, which are importance sampling procedures in trajectory space, one proceeds by generating trial pathways from existing trajectories via what we call the shooting and shifting method [4]. Newly generated trial pathways are then accepted with a probability obeying the detailed balance condition. This condition guarantees that pathways are sampled according to their weight in the transition path ensemble. The detailed balance condition can be satisfied by choosing an acceptance probability according to the celebrated Metropolis rule [6]. Using such an acceptance probability in conjunction with the shooting and shifting algorithms one can efficiently explore trajectory space and harvest reactive pathways with their proper weight. Statistical analysis of the harvested pathways can then provide information on the kinetics of the transition. The basis for this type of analysis will be discussed in the following section.
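A minimal illustration of such a path-space Metropolis scheme (our own toy construction; the paper's shooting and shifting moves for molecular dynamics are more elaborate) samples reactive paths of a three-state Markov chain with A = {0} and B = {2}. Trial paths regenerate the trajectory forward from a randomly chosen time slice and are accepted iff the new endpoint lies in B, which is the Metropolis acceptance ratio for this move; the midpoint histogram is compared against exhaustive enumeration of all reactive paths:

```python
import numpy as np
from itertools import product

# Path-space Metropolis sampling for a 3-state Markov chain (toy example).
# Paths have L steps, start in A = {0} and must end in B = {2}.
P = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.2, 0.8]])
L, n_sweeps = 4, 100_000
rng = np.random.default_rng(2)

def propagate(x, k):
    # generate k forward steps of the dynamics starting from state x
    out = [x]
    for _ in range(k):
        out.append(rng.choice(3, p=P[out[-1]]))
    return out

path = None
while path is None or path[-1] != 2:
    path = propagate(0, L)              # find an initial reactive path
counts = np.zeros(3)
for _ in range(n_sweeps):
    i = rng.integers(0, L)              # shooting slice
    trial = path[:i + 1] + propagate(path[i], L - i)[1:]
    if trial[-1] == 2:                  # accept with probability h_B(trial endpoint)
        path = trial
    counts[path[2]] += 1                # histogram of the midpoint state x_2

# Brute-force reference: enumerate all reactive paths starting from 0
ref = np.zeros(3)
for p in product(range(3), repeat=L):
    if p[-1] == 2:
        w = np.prod([P[a][b] for a, b in zip((0,) + p, p)])
        ref[p[1]] += w
print(counts / counts.sum(), ref / ref.sum())
```

Forward regeneration from the true dynamics makes the generation probability of the new segment equal to its statistical weight, so the Metropolis ratio reduces to the endpoint indicator; a production scheme would also include backward shooting and shifting moves.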
REACTION RATES

The time correlation function of state populations

$$ C(t) = \frac{\langle h_A(x_0) h_B(x_t) \rangle}{\langle h_A(x_0) \rangle} \quad (4) $$
---PAGE_BREAK---
provides a link between the microscopic dynamics of the system and the phenomenological description of the kinetics in terms of the forward and backward reaction rate constants $k_{AB}$ and $k_{BA}$, respectively [7]. If the reaction time $\tau_{\text{rxn}} = (k_{AB} + k_{BA})^{-1}$ is significantly larger than the time $\tau_{\text{mol}}$ necessary to cross the barrier top, $C(t)$ approaches its long time value exponentially after the short molecular transient time $\tau_{\text{mol}}$:

$$
C(t) \approx \langle h_B \rangle (1 - \exp\{-t/\tau_{\text{rxn}}\}). \quad (5)
$$

For $\tau_{\text{mol}} < t \ll \tau_{\text{rxn}}$ the population correlation function $C(t)$ grows linearly:

$$
C(t) \approx k_{AB}t. \tag{6}
$$

Thus, the forward reaction rate constant can be determined from the slope of $C(t)$ in this time regime.
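Equs. (5) and (6) are mutually consistent: assuming the equilibrium population $\langle h_B \rangle = k_{AB}/(k_{AB} + k_{BA})$, the initial slope of the exponential form is exactly $k_{AB}$. A small numerical check (the rate values are arbitrary illustrative choices):

```python
import numpy as np

# Consistency of Equs. (5) and (6): with <h_B> = k_AB/(k_AB + k_BA) and
# tau_rxn = 1/(k_AB + k_BA), the short-time slope of C(t) equals k_AB.
k_AB, k_BA = 0.02, 0.08
tau_rxn = 1.0 / (k_AB + k_BA)              # = 10
h_B = k_AB / (k_AB + k_BA)                 # = 0.2

t = np.linspace(0.0, 0.5, 501)             # t << tau_rxn
C = h_B * (1.0 - np.exp(-t / tau_rxn))     # Equ. (5)
slope = np.polyfit(t, C, 1)[0]             # linear fit, Equ. (6)
print(slope)                               # close to k_AB
```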
To evaluate $C(t)$ in the transition path sampling framework we rewrite it in terms of sums over trajectories:

$$
C(t) = \frac{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] h_B(x_t)}{\int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)]} = \frac{Z_{AB}(t)}{Z_A}. \quad (7)
$$

The above expression can be viewed as the ratio between the “partition functions” for two different path ensembles: one, $Z_A$, in which pathways start in A and end anywhere, and one, $Z_{AB}(t)$, in which pathways start in A and end in B. This perspective suggests that we determine the correlation function $C(t)$ via calculation of $\Delta F(t) \equiv F_{AB}(t) - F_A = -\ln Z_{AB}(t) + \ln Z_A$, in effect a difference of free energies. From the free energy difference one can then immediately determine the time correlation function, $C(t) = \exp[-\Delta F(t)]$. The free energy difference $\Delta F(t)$ can be viewed as the work necessary to reversibly change from a path ensemble with free final points $x_t$ to a path ensemble in which the final points $x_t$ are required to reside in region B.
In principle, one can determine the reaction rate constant $k_{AB}$ by calculating the time correlation function $C(t)$ at various times and by taking a numerical derivative with respect to $t$. This procedure is, however, numerically costly since it requires repeated free energy calculations. Fortunately, the reversible work $\Delta F(t')$ for a given time $t'$ can be written as a sum of the reversible work $\Delta F(t)$ for a different time $t$ and the reversible work $F(t',t)$ necessary to change $t$ to $t'$ [2]:

$$
\Delta F(t') = \Delta F(t) + F(t', t). \tag{8}
$$

This reversible work $F(t',t)$ can then be calculated for all times between 0 and $t'$ in a single transition path sampling simulation, as described in detail in Ref. [2]. In the following sections we will focus on ways to determine the reversible work $\Delta F(t)$ for a single time $t$.
MODEL

To illustrate the numerical methods presented in this paper we have used them to calculate the time correlation function $C(t)$ for isomerizations occurring in a simple
---PAGE_BREAK---
diatomic molecule immersed in a bath of purely repulsive particles, schematically shown in the left hand side panel of Fig. 1. A very similar model has been studied by Straub, Borkovec, and Berne [8]. This two dimensional model consists of *N* point particles of unit mass interacting via the Weeks-Chandler-Andersen potential [9],

$$V_{\text{WCA}}(r) = \begin{cases} 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right] + \epsilon & \text{for } r \le r_{\text{WCA}} \equiv 2^{1/6}\sigma, \\ 0 & \text{for } r > r_{\text{WCA}}. \end{cases} \quad (9)$$

Here, *r* is the interparticle distance, and *ε* and *σ* specify the strength and the interaction radius of the potential, respectively. In addition, two of the *N* particles are bound to each other by a double well potential

$$V_{\text{dw}}(r) = h \left[ 1 - \frac{(r - r_{\text{WCA}} - w)^2}{w^2} \right]^2, \quad (10)$$

where *h* denotes the height of the potential energy barrier separating the potential energy wells located at $r_{\text{WCA}} = 2^{1/6}\sigma$ and $r_{\text{WCA}} + w$.

**FIGURE 1.** (a) Schematic representation of the diatomic molecule (dark grey disks) held together by a spring immersed in the WCA fluid (light grey disks). (b) Intramolecular (solid line) and intermolecular (dashed line) potential energy. The parameters determining the height and width of the double well potential are $h = 6\epsilon$ and $w = 0.5\sigma$. The thin lines denote the "drawbridge" constraining potential used in the thermodynamic integration and are labelled from $\lambda = 10$ to $\lambda = 100$ according to their slopes. The limits $r_A$ and $r_B$ for states A and B, respectively, are shown as vertical dotted lines.

The diatomic molecule held together by the potential shown in Fig. 1 can reside in two states. In the *contracted* state the interatomic distance *r* fluctuates around $r_{\text{WCA}}$, while in the *expanded* state *r* is close to $r_{\text{WCA}} + w$. Due to interactions with the solvent particles, transitions between the two states can occur provided the total energy of the system is sufficiently high. Collisions with solvent particles provide the energy for activation as well as the dissipation necessary to stabilize the molecule in one of the wells after a barrier crossing has occurred. For high barriers, transitions between the extended and the contracted state are rare. In all calculations the system is defined to be in state A if the interatomic distance $r < r_A = 1.35\sigma$ and in state B if $r > r_B = 1.45\sigma$. These limiting values are denoted by vertical dotted lines in the right hand side panel of Fig. 1. The
---PAGE_BREAK---
Newtonian equations of motion are integrated with the velocity Verlet algorithm [10] using a time step of $\Delta t = 0.002(m\sigma^2/\epsilon)^{1/2}$.

THERMODYNAMIC INTEGRATION

In Ref. [4] we determined the time correlation function $C(t)$ with an umbrella sampling approach. Here we show how the time correlation function $C(t)$ from Equ. (7) can be calculated with a strategy analogous to thermodynamic integration, a method used to estimate the free energy difference between ensembles [11, 12]. In a conventional thermodynamic integration, one introduces a coupling parameter $\lambda$ which transforms one ensemble into the other as it is changed from $\lambda_i$ to $\lambda_f$. Derivatives of the free energy with respect to $\lambda$, calculated at intermediate values of $\lambda$, can then be used to compute the free energy difference by numerical integration from $\lambda_i$ to $\lambda_f$.

Thermodynamic integration can also be used to calculate free energy differences between path ensembles. Such a strategy has in effect been used by S. Sun [13] to efficiently estimate free energy differences in the fast switching method recently proposed by Jarzynski [14, 15, 16, 17, 18]. For our purpose we introduce a function $\Theta(x, \lambda)$ depending on the configuration $x$ and on a parameter $\lambda$. The dependence on $\lambda$ is chosen such that $\Theta(x, \lambda_i) = 1$ and $\Theta(x, \lambda_f) = h_B(x)$. Using this function $\Theta$ one can then continuously transform an ensemble of paths starting in A and ending anywhere into an ensemble of pathways beginning in A and ending in B.

Introducing the partition function

$$Z(t, \lambda) \equiv \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) \quad (11)$$

we generalize the time correlation function $C(t)$ from Equ. (7) as the ratio between partition functions for $\lambda$ and $\lambda_i$:

$$C(t, \lambda) = Z(t, \lambda) / Z(t, \lambda_i). \qquad (12)$$

For $\lambda = \lambda_f$ this function is just the correlation function $C(t) = \exp(-\Delta F)$ we wish to determine. We calculate the reversible work $F(t, \lambda) = -\ln Z(t, \lambda)$ by first taking its derivative with respect to $\lambda$:

$$\frac{\partial F(t, \lambda)}{\partial \lambda} = -\frac{\partial \ln Z(t, \lambda)}{\partial \lambda} = -\frac{1}{Z(t, \lambda)} \frac{\partial}{\partial \lambda} Z(t, \lambda). \quad (13)$$

Using the definition of $Z$ we obtain:

$$\frac{\partial F(t, \lambda)}{\partial \lambda} = - \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \frac{\partial \Theta(x_t, \lambda)}{\partial \lambda} \Big/ Z(t, \lambda). \quad (14)$$

To bring this expression into a form amenable to a path sampling simulation we define an “energy” $U(x, \lambda)$ related to the function $\Theta$ by:

$$U(x, \lambda) = -\ln \Theta(x, \lambda). \quad (15)$$
---PAGE_BREAK---
|
| 168 |
+
|
| 169 |
+
Inserting the above expression into Eq. (14) we finally obtain:
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
\frac{\partial F(t, \lambda)}{\partial \lambda} = \frac{1}{Z(t, \lambda)} \int \mathcal{D}x(t) h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) \frac{\partial U(x_t, \lambda)}{\partial \lambda} = \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. \quad (16)
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
Here, 〈· · · 〉$_{λ}$ denotes a path average carried out in the ensemble described by
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
\mathcal{P}[x(t), \lambda] \equiv h_A(x_0) \mathcal{P}[x(t)] \Theta(x_t, \lambda) / Z(t, \lambda). \quad (17)
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
This is the ensemble of all pathways starting in region A with a bias Θ(xᵢ, λ) acting on xᵢ, the last time slice of the pathway. The biasing function Θ(x, λ) is designed to pull the path endpoints gradually towards region B as λ is increased and to finally confine them to region B for λ = λ_f. From derivatives ∂F(t, λ)/∂λ computed for several values of λ in the range between λ_i and λ_f one then can calculate the reversible work ΔF(t) = F(t, λ_f) - F(t, λ_i) by integration:

$$\Delta F(t) = \int_{\lambda_i}^{\lambda_f} d\lambda \left\langle \frac{\partial U(x_t, \lambda)}{\partial \lambda} \right\rangle_{\lambda}. \quad (18)$$

The correlation function we originally set out to compute is then simply given by $C(t) = \exp[-\Delta F(t)]$.
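The thermodynamic integration of Eq. (18) reduces to a one-dimensional quadrature once $\langle \partial U(x_t, \lambda)/\partial \lambda \rangle_{\lambda}$ has been estimated on a grid of $\lambda$ values. A minimal sketch of this post-processing step (the grid and the synthetic derivative curve below are illustrative placeholders, not the simulation data of Fig. 2):

```python
import numpy as np

def reversible_work(lambdas, dF_dlambda):
    """Integrate <dU/dlambda>_lambda over the coupling parameter (Eq. 18)
    with the trapezoidal rule; returns F(t, lambda) on the whole grid."""
    increments = np.diff(lambdas) * 0.5 * (dF_dlambda[1:] + dF_dlambda[:-1])
    return np.concatenate(([0.0], np.cumsum(increments)))

# Illustrative stand-in for the measured derivatives of a path-sampling run.
lam = np.linspace(0.0, 100.0, 101)
deriv = 0.5 * np.exp(-lam / 10.0)

F = reversible_work(lam, deriv)
dF = F[-1]           # reversible work Delta F(t)
C = np.exp(-dF)      # correlation function C(t) = exp[-Delta F(t)]
```

In the actual calculation each grid point would carry the path-ensemble average measured in a separate biased simulation; the integration step itself is unchanged.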
To study transitions of our solvated diatomic molecule, we introduce a “drawbridge” potential anchored at $r_B$:

$$U(x, \lambda) \equiv \lambda \, [r_B - r(x)] \, \theta[r_B - r(x)]. \quad (19)$$

Here, $r_B$ is the lower limit of $r$ in region $B$ and $\theta$ is the Heaviside theta function. By lifting the drawbridge from $\lambda = 0$ to $\lambda = \infty$ one can continuously confine the initially free endpoints of the pathways to the final region $B$. For this drawbridge biasing potential the derivative of the reversible work $F(t, \lambda)$ is given by

$$\frac{\partial F(t, \lambda)}{\partial \lambda} = \left\langle [r_B - r(x_t)] \, \theta[r_B - r(x_t)] \right\rangle_{\lambda}. \quad (20)$$

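The drawbridge bias of Eq. (19) and the integrand of Eq. (20) are straightforward to evaluate; a minimal sketch (the function names are ours):

```python
import numpy as np

def drawbridge_energy(r, r_B, lam):
    """Drawbridge bias U(x, lambda) of Eq. (19): a linear penalty that the
    Heaviside step switches on only when the endpoint lies below r_B."""
    return lam * np.maximum(r_B - r, 0.0)   # (r_B - r) * theta(r_B - r)

def drawbridge_dlambda(r, r_B):
    """dU/dlambda entering Eq. (20); it no longer depends on lambda itself."""
    return np.maximum(r_B - r, 0.0)
```

Averaging `drawbridge_dlambda` over the biased path ensemble at each value of $\lambda$ yields the derivative curves of the thermodynamic integration.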
We have used Eq. (20) to calculate $\partial F(t, \lambda)/\partial \lambda$ for $t = 0.8(m\sigma^2/\epsilon)^{1/2}$ at 100 equidistant values of $\lambda$ in the range from $\lambda = 0$ to $\lambda = 100$. Each single path sampling simulation consisted of $2 \times 10^6$ attempted path moves. In this sequence of path sampling simulations starting at $\lambda = 0$ and ending at $\lambda = 100$, corresponding to a *compression* of pathways, the final path of simulation *n* was used as the initial path for simulation *n* + 1. Results of these simulations are plotted in Fig. 2. Derivatives of the reversible work with respect to $\lambda$ are shown on the left hand side. The right panel contains the reversible work $F(t, \lambda)$ as a function of $\lambda$ as obtained by numerical integration. The plateau value of $F(t, \lambda) = 9.85$ reached at $\lambda \sim 40$ is the reversible work $\Delta F(t)$ necessary to confine the final points of the pathways to region *B*. To investigate whether these results are affected by hysteresis, we have carried out a sequence of path sampling simulations corresponding to an *expansion* of the path ensemble. In this sequence of simulations we started with pathways constrained to end in region *B* and then subsequently lowered $\lambda$ from an initial value of 100

---PAGE_BREAK---

FIGURE 2. Results of path ensemble thermodynamic integration simulations. Left hand side: derivatives of the reversible work $F(t, \lambda)$ with respect to the coupling parameter $\lambda$ calculated in a path compression simulation (solid line) and in a path expansion simulation (dashed line). In both cases $\partial F/\partial \lambda$ was calculated at 101 equidistant values of $\lambda$ in the range from 0 to 100. Right hand side: Reversible work $F(t, \lambda)$ as a function of $\lambda$ obtained by numerical integration of the curves shown on the left hand side. Again, the solid line denotes results of a path ensemble compression while the dashed line refers to a path ensemble expansion. The free energy difference obtained from these simulations is $\Delta F(t) = 9.85$ corresponding to a correlation function value of $C(t) = 5.27 \times 10^{-5}$.
to a final value of 0. The reversible work and its derivative obtained by path expansions are shown as dashed lines in Fig. 2. Path compression and path expansion yield almost identical results.
In this work we have borrowed many familiar ideas and techniques from statistical thermodynamics (e.g., reversible work, thermodynamic integration) in order to compute intrinsically dynamical quantities (e.g., rate constants). Thermodynamic concepts become directly useful for this purpose once the dynamical problem has been reduced to characterizing the statistical consequences of imposing constraints (of reactivity) on stationary distributions (of dynamical pathways). This task, in the context of phase space ensembles, is the central challenge of classical statistical mechanics. Remarkably, such a thermodynamic interpretation extends even to the nonequilibrium realm. Recent results concerning *irreversible* transformations between equilibrium states [14, 15, 16, 17, 18] have analogous meaning for finite-time switching between ensembles of trajectories, opening new routes for rate constant calculations. We are working to develop transition path sampling methods exploiting this analogy.
## ACKNOWLEDGMENTS
P.L.G. is an MIT Science Fellow. The calculations were performed on the Schrödinger II Linux cluster of the Vienna University Computer Center.

---PAGE_BREAK---

## REFERENCES

1. C. Dellago, P. G. Bolhuis, F. S. Csajka, and D. Chandler, *J. Chem. Phys.* **108**, 1964 (1998).
2. C. Dellago, P. G. Bolhuis, and P. L. Geissler, *Adv. Chem. Phys.* **123**, 1 (2002).
3. P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, *Annu. Rev. Phys. Chem.* **53**, 291 (2002).
4. C. Dellago, P. G. Bolhuis, and D. Chandler, *J. Chem. Phys.* **108**, 9263 (1998).
5. P. G. Bolhuis, C. Dellago, and D. Chandler, *Faraday Discuss.* **110**, 421 (1998).
6. N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, *J. Chem. Phys.* **21**, 1087 (1953).
7. D. Chandler, *Introduction to Modern Statistical Mechanics*, Oxford University Press (1987).
8. J. E. Straub, M. Borkovec, and B. J. Berne, *J. Chem. Phys.* **89**, 4833 (1988).
9. J. D. Weeks, D. Chandler, and H. C. Andersen, *J. Chem. Phys.* **54**, 5237 (1971).
10. M. P. Allen and D. J. Tildesley, *Computer Simulations of Liquids*, Oxford University Press, Oxford (1987).
11. J. G. Kirkwood, *J. Chem. Phys.* **3**, 300 (1935).
12. D. Frenkel and B. Smit, *Understanding Molecular Simulation*, 2nd edition, Academic Press (2002).
13. S. X. Sun, *J. Chem. Phys.* **118**, 5769 (2003).
14. C. Jarzynski, *Phys. Rev. Lett.* **78**, 2690 (1997).
15. C. Jarzynski, *Phys. Rev. E* **56**, 5018 (1997).
16. G. E. Crooks, *J. Stat. Phys.* **90**, 1481 (1998).
17. G. E. Crooks, *Phys. Rev. E* **60**, 2721 (1999).
18. G. E. Crooks, *Phys. Rev. E* **61**, 2361 (2000).

samples_new/texts_merged/5647681.md
---PAGE_BREAK---
A note on sufficiency in binary panel models
Koen Jochmans, Thierry Magnac

To cite this version: Koen Jochmans, Thierry Magnac. A note on sufficiency in binary panel models. 2015. hal-01248065

HAL Id: hal-01248065
https://hal-sciencespo.archives-ouvertes.fr/hal-01248065

Preprint submitted on 23 Dec 2015

---PAGE_BREAK---
# A NOTE ON SUFFICIENCY IN BINARY PANEL MODELS
Koen Jochmans
Thierry Magnac

---PAGE_BREAK---

December 4, 2015
Consider estimating the slope coefficients of a fixed-effect binary-choice model from two-period panel data. Two approaches to semiparametric estimation at the regular parametric rate have been proposed. One is based on a sufficient statistic, the other is based on a conditional-median restriction. We show that, under standard assumptions, both approaches are equivalent.
KEYWORDS: binary choice, fixed effects, panel data, regular estimation, sufficiency.
## INTRODUCTION

A classic problem in panel data analysis is the estimation of the vector of slope coefficients, $\beta$, in fixed-effect linear models from binary response data on $n$ observations.
In seminal work, Rasch (1960) constructed a conditional maximum-likelihood estimator for the fixed-effect logit model by building on a sufficiency argument. Chamberlain (2010) and Magnac (2004) have shown that sufficiency is necessary for estimation at the $n^{-1/2}$ rate to be possible in general.
Manski (1987) proposed a maximum-score estimator of $\beta$. His estimator relies on a conditional median restriction and does not require sufficiency. However, it converges at the slow rate $n^{-1/3}$. Horowitz (1992) suggested smoothing the maximum-score criterion function and showed that, by doing so, the convergence rate can be improved, although the $n^{-1/2}$-rate remains unattainable.
Lee (1999) has given an alternative conditional-median restriction and derived an $n^{-1/2}$-consistent maximum rank-correlation estimator of $\beta$. He provided sufficient conditions for this restriction to hold that constrain the distribution of the fixed effects and the covariates. It can be shown that these restrictions involve the unknown parameter $\beta$ through index-sufficiency requirements on the distribution of the covariates, and that they can severely restrict the values that $\beta$ is allowed to take.

In this note we reconsider the conditional-median restriction of Lee (1999) under standard assumptions and look for conditions that imply it to hold for any $\beta$. We find that imposing the conditional-median restriction is equivalent to requiring sufficiency.

Department of Economics, Sciences Po, 28 rue des Saints Pères, 75007 Paris, France. koen.jochmans@sciencespo.fr.

GREMAQ and IDEI, Toulouse School of Economics, 21 Allée de Brienne, 31000 Toulouse, France. thierry.magnac@tse-fr.eu.

---PAGE_BREAK---

## 1. MODEL AND ASSUMPTIONS

Suppose that binary outcomes $y_i = (y_{i1}, y_{i2})$ relate to a set of observable covariates $x_i = (x_{i1}, x_{i2})$ through the threshold-crossing model
$$y_{i1} = 1\{x_{i1}\beta + \alpha_i \geq u_{i1}\}, \quad y_{i2} = 1\{x_{i2}\beta + \alpha_i \geq u_{i2}\},$$
where $u_i = (u_{i1}, u_{i2})$ are latent disturbances, $\alpha_i$ is an unobserved effect, and $\beta$ is a parameter vector of conformable dimension, say $k$. The challenge is to construct an estimator of $\beta$ from a random sample $\{(y_i, x_i) : i = 1, \dots, n\}$ that converges at the regular $n^{-1/2}$ rate.

Let $\Delta y_i = y_{i2} - y_{i1}$ and $\Delta x_i = x_{i2} - x_{i1}$. The following assumption will be maintained throughout.

**ASSUMPTION 1 (Identification and regularity)**

(a) $u_i$ is independent of $(x_i, \alpha_i)$.
(b) $\Delta x_i$ is not contained in a proper linear subspace of $\mathbb{R}^k$.

(c) The first component of $\Delta x_i$ varies continuously over $\mathbb{R}$ (for almost all values of the other components) and the first component of $\beta$ is not equal to zero.

(d) $\alpha_i$ varies continuously over $\mathbb{R}$ (for almost all values of $x_i$).

(e) The distribution of $u_i$ admits a strictly positive, continuous, and bounded density function with respect to Lebesgue measure.
Parts (a)-(c) collect sufficient conditions that ensure that $\beta$ is identified while Parts (d)-(e) are conventional regularity conditions (see Magnac 2004). From here on out we omit the 'almost surely' qualifier from all conditional statements.
Assumption 1 does not parametrize the distribution of $u_i$ nor does it restrict the dependence between $\alpha_i$ and $x_i$ beyond the complete-variation requirement of Assumption 1(d). As such, our approach is semiparametric and we treat the $\alpha_i$ as fixed effects.
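As a concrete illustration of this setup, the two-period threshold-crossing model is easy to simulate; the distributional choices below (standard-normal covariates, logistic disturbances, and fixed effects built from the covariates so that $\alpha_i$ and $x_i$ are dependent) are ours, for illustration only:

```python
import numpy as np

def simulate_panel(n, beta, rng):
    """Draw (y_i1, y_i2, x_i1, x_i2) from y_it = 1{x_it beta + alpha_i >= u_it}."""
    k = beta.shape[0]
    x1 = rng.standard_normal((n, k))
    x2 = rng.standard_normal((n, k))
    # Fixed effects: allowed to depend on the covariates (no random-effects model).
    alpha = 0.5 * (x1[:, 0] + x2[:, 0]) + rng.standard_normal(n)
    # Disturbances independent of (x_i, alpha_i), as in Assumption 1(a).
    u1 = rng.logistic(size=n)
    u2 = rng.logistic(size=n)
    y1 = (x1 @ beta + alpha >= u1).astype(int)
    y2 = (x2 @ beta + alpha >= u2).astype(int)
    return y1, y2, x1, x2

rng = np.random.default_rng(42)
y1, y2, x1, x2 = simulate_panel(5000, np.array([1.0, -0.5]), rng)
dy = y2 - y1   # Delta y_i takes values in {-1, 0, 1}
```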

## 2. CONDITIONS FOR REGULAR ESTIMATION

Magnac (2004, Theorem 1) has shown that, under Assumption 1, the semiparametric efficiency bound for $\beta$ is zero unless $y_{i1} + y_{i2}$ is a sufficient statistic for $\alpha_i$. Sufficiency can be stated as follows.

---PAGE_BREAK---

**CONDITION 1 (Sufficiency)** There exists a real function G, independent of $\alpha_i$, such that
$$ \mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = G(\Delta x_i \beta) $$
for all $\alpha_i \in \mathbb{R}$.
Condition 1 states that data in first-differences follow a single-indexed binary-choice model. This yields a variety of estimators of $\beta$, such as semiparametric maximum likelihood (Klein and Spady 1993), that are $n^{-1/2}$-consistent under standard assumptions.
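The canonical case in which Condition 1 holds is the logit model, where $G$ is the logistic cdf evaluated at $\Delta x_i \beta$ (Rasch's sufficiency result). This can be verified by direct computation; a small numerical check (the function names are ours):

```python
import math

def logistic(v):
    """Logistic cdf."""
    return 1.0 / (1.0 + math.exp(-v))

def cond_prob(x1b, x2b, alpha):
    """Pr(Delta y = 1 | x, Delta y != 0, alpha) under logistic disturbances."""
    p_up = (1.0 - logistic(x1b + alpha)) * logistic(x2b + alpha)    # (y1, y2) = (0, 1)
    p_down = logistic(x1b + alpha) * (1.0 - logistic(x2b + alpha))  # (y1, y2) = (1, 0)
    return p_up / (p_up + p_down)

# Two index configurations with the same difference x2b - x1b = 0.8 and very
# different fixed effects yield the same conditional probability, G(0.8).
p_a = cond_prob(0.2, 1.0, alpha=-3.0)
p_b = cond_prob(-1.3, -0.5, alpha=4.0)
```

The invariance in $\alpha$ is exactly the sufficiency of $y_{i1} + y_{i2}$ in the logit case; for non-logistic disturbances the same calculation generally produces a dependence on $\alpha_i$.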

Magnac (2004, Theorem 3) derived conditions on the distributions of $u_i$ and $\Delta u_i$ that imply that Condition 1 holds.

On the other hand, Lee (1999) considered estimation of $\beta$ based on a sign restriction. We write $\mathrm{med}(x)$ for the median of random variable $x$ and let $\operatorname{sgn}(x) = 1\{x > 0\} - 1\{x < 0\}$.

**CONDITION 2 (Median restriction)** For any two observations i and j,
$$ \mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \mid x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \mathrm{sgn}(\Delta x_i \beta - \Delta x_j \beta) $$
holds.

Condition 2 suggests a rank estimator for $\beta$. Conditions for this estimator to be $n^{-1/2}$-consistent are stated in Sherman (1993).

Lee (1999, Assumption 1) restricted the joint distribution of $\alpha_i$, $x_i$, and $x_{i1}\beta$, $x_{i2}\beta$ to ensure that Condition 2 holds. Aside from these restrictions going against the fixed-effect approach, they do not hold uniformly in $\beta$, in general. The Appendix contains additional discussion and an example.

## 3. EQUIVALENCE

The main result of this paper is the equivalence of Conditions 1 and 2 as requirements for $n^{-1/2}$-consistent estimation of any $\beta$.

**THEOREM 1 (Equivalence)** *Under Assumption 1 Condition 2 holds for any $\beta$ if and only if Condition 1 holds.*
PROOF: We start with two lemmas that are instrumental in showing Theorem 1.

---PAGE_BREAK---

**LEMMA 1 (Sufficiency)** Condition 1 is equivalent to the existence of a continuously differentiable, strictly decreasing function $c$, independent of $\alpha_i$, such that

$$\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = c(\Delta x_i \beta)$$

for all $\alpha_i \in \mathbb{R}$.

PROOF: Conditional on $\Delta y_i \neq 0$ and on $\alpha_i, x_i$, the variable $\Delta y_i$ is Bernoulli with success probability

$$\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \frac{1}{1 + \frac{\mathrm{Pr}(\Delta y_i = -1 | x_i, \alpha_i)}{\mathrm{Pr}(\Delta y_i = 1 | x_i, \alpha_i)}}.$$

Re-arranging this expression and enforcing Condition 1 shows that

$$\frac{\Pr(\Delta y_i = -1|x_i, \alpha_i)}{\Pr(\Delta y_i = 1|x_i, \alpha_i)} = \frac{1 - G(\Delta x_i \beta)}{G(\Delta x_i \beta)},$$

which is a function of $\Delta x_i \beta$ only. Monotonicity of this function follows easily, as in Magnac (2004, Proof of Theorem 2). This completes the proof of Lemma 1. Q.E.D.
**LEMMA 2 (Median restriction)** Let

$$\tilde{c}(x_i) = \frac{\Pr(\Delta y_i = -1|x_i)}{\Pr(\Delta y_i = 1|x_i)}.$$

Condition 2 is equivalent to the sign restriction

$$\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta)$$

holding for any two observations *i* and *j*.
PROOF: Conditional on $\Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j$ (and the covariates),

$$\frac{\Delta y_i - \Delta y_j}{2} = \begin{cases} 1 & \text{if } \Delta y_i = 1 \text{ and } \Delta y_j = -1, \\ -1 & \text{if } \Delta y_j = 1 \text{ and } \Delta y_i = -1. \end{cases}$$

Therefore, it is Bernoulli with success probability

$$\mathrm{Pr}(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j) = \frac{1}{1 + r(x_i, x_j)},$$

where

$$r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j)}.$$

---PAGE_BREAK---

Note that

$$\mathrm{med} \left( \frac{\Delta y_i - \Delta y_j}{2} \,\middle|\, x_i, x_j, \Delta y_i \neq 0, \Delta y_j \neq 0, \Delta y_i \neq \Delta y_j \right) = \mathrm{sgn} \left( \frac{1}{1+r(x_i, x_j)} - \frac{r(x_i, x_j)}{1+r(x_i, x_j)} \right).$$

By the Bernoulli nature of the outcomes in the first step and random sampling of the observations in the second step, we have that

$$r(x_i, x_j) = \frac{\Pr(\Delta y_i = -1, \Delta y_j = 1 | x_i, x_j)}{\Pr(\Delta y_i = 1, \Delta y_j = -1 | x_i, x_j)} = \frac{\Pr(\Delta y_i = -1 | x_i) \Pr(\Delta y_j = 1 | x_j)}{\Pr(\Delta y_i = 1 | x_i) \Pr(\Delta y_j = -1 | x_j)} = \frac{\tilde{c}(x_i)}{\tilde{c}(x_j)}.$$
Therefore, Condition 2 can be written as

$$\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).$$

This completes the proof of Lemma 2.
Q.E.D.
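The sign algebra in the proof of Lemma 2 can be checked numerically: form $r = \tilde{c}(x_i)/\tilde{c}(x_j)$ from any two positive odds ratios and compare the sign of $1/(1+r) - r/(1+r)$ with that of $\tilde{c}(x_j) - \tilde{c}(x_i)$. A quick illustrative check (function names are ours):

```python
def sgn(v):
    """sgn(v) = 1{v > 0} - 1{v < 0}."""
    return (v > 0) - (v < 0)

def median_sign(c_i, c_j):
    """Sign of med((dy_i - dy_j)/2 | ...) computed through r = c_i / c_j."""
    r = c_i / c_j
    return sgn(1.0 / (1.0 + r) - r / (1.0 + r))

# For any positive odds ratios the median sign matches sgn(c_j - c_i).
pairs = [(0.3, 2.0), (2.0, 0.3), (1.5, 1.5), (0.01, 5.0)]
ok = all(median_sign(ci, cj) == sgn(cj - ci) for ci, cj in pairs)
```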

We first establish that Condition 1 implies Condition 2. Armed with Lemmas 1 and 2 this is a simple task. First note that, because the function $c$ is strictly decreasing by Lemma 1, Condition 1 implies that

$$\operatorname{sgn}(c(\Delta x_j \beta) - c(\Delta x_i \beta)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).$$

Under Condition 1 we also have that

$$c(\Delta x_i \beta) = \frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(\Delta y_i = -1 | x_i)}{\Pr(\Delta y_i = 1 | x_i)} = \tilde{c}(x_i).$$

Therefore,

$$\operatorname{sgn}(\tilde{c}(x_j) - \tilde{c}(x_i)) = \operatorname{sgn}(\Delta x_i \beta - \Delta x_j \beta).$$

By Lemma 2, this is Condition 2.
To see that Condition 2 implies Condition 1, first note that
$$\frac{\Pr(\Delta y_i = -1 | x_i, \alpha_i)}{\Pr(\Delta y_i = 1 | x_i, \alpha_i)} = \frac{\Pr(u_{i1} \le \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta,\; u_{i2} > \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)}{\Pr(u_{i1} > \tilde{\alpha}_i - \frac{1}{2}\Delta x_i \beta,\; u_{i2} \le \tilde{\alpha}_i + \frac{1}{2}\Delta x_i \beta)}$$

where we let $\tilde{\alpha}_i = \alpha_i + \frac{1}{2}(x_{i1} + x_{i2})\beta$. Therefore,

$$\mathrm{Pr}(\Delta y_i = 1|x_i, \Delta y_i \neq 0, \alpha_i) = \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)$$

for some function $\tilde{G}$, and

$$\mathrm{Pr}(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = \int \tilde{G}(\Delta x_i \beta, \tilde{\alpha})\, P(d\tilde{\alpha} | x_i, \Delta y_i \neq 0),$$

---PAGE_BREAK---

where $P(\tilde{\alpha}_i|x_i, \Delta y_i \neq 0)$ denotes the distribution of $\tilde{\alpha}_i$ given $x_i$ and $\Delta y_i \neq 0$. Next, by Lemma 2, Condition 2 implies that
$$ \Delta x_i \beta = \Delta x_j \beta \iff \tilde{c}(x_i) = \tilde{c}(x_j) \iff E[\tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i)|x_i, \Delta y_i \neq 0] = E[\tilde{G}(\Delta x_j \beta, \tilde{\alpha}_j)|x_j, \Delta y_j \neq 0]. $$
Hence, it must hold that
$$ \int_{-\infty}^{+\infty} \tilde{G}(v, \tilde{\alpha}) \{ P(d\tilde{\alpha}|x_i, \Delta y_i \neq 0) - P(d\tilde{\alpha}|x_j, \Delta y_j \neq 0) \} = 0 $$
for all values $v \in \mathbb{R}$ and all $(x_i, x_j)$. Because the distribution of $\alpha_i$ given $x_i$ and $\Delta y_i \neq 0$ is unrestricted, this condition holds if and only if the function $\tilde{G}$ does not depend on $\tilde{\alpha}_i$, and so not on $\alpha_i$. Moreover, we must have that
$$ \tilde{G}(\Delta x_i \beta, \tilde{\alpha}_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0, \alpha_i) = \Pr(\Delta y_i = 1 | x_i, \Delta y_i \neq 0) = G(\Delta x_i \beta) $$
for some function $G$. This is Condition 1. This completes the proof of Theorem 1. Q.E.D.
## APPENDIX (NOT FOR PUBLICATION)
The notation in Lee (1999) decomposes $x$ into its continuously varying single component, whose coefficient is equal to 1, and the remaining variables. We shall denote by $a$ the first component and by $z$ the remaining variables, so that $x = (a, z)$. We denote by $\theta$ the coefficient of $z$ in $x\beta$, so that $\beta = (1, \theta)$, and omit the subscript $i$ throughout.
Assumptions (g) and (h) of Lee (1999) can be written as
$$ (g) \quad \alpha \perp \Delta z | \Delta a + \theta \Delta z, $$
$$ (h) \quad a_1 + \theta z_1 \perp \Delta z | \Delta a + \theta \Delta z, \alpha $$
in which, e.g., $\Delta z = z_2 - z_1$.
We first prove that these conditions imply an index sufficiency requirement on the distribution function of regressors. Second, we provide an example in which these conditions restrict the parameter of interest to only two possible values, except in non-generic cases.
### Index sufficiency
Denote by $f$ the density with respect to some dominating measure and rewrite (h) as
$$ f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z, \alpha). $$
As Condition (g) can be written as
$$ f(\Delta z | \Delta a + \theta \Delta z, \alpha) = f(\Delta z | \Delta a + \theta \Delta z), $$

---PAGE_BREAK---

we therefore have that
$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z, \alpha) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z, \alpha) f(\Delta z | \Delta a + \theta \Delta z),$$
which we can multiply by $f(\alpha | \Delta a + \theta \Delta z)$ and integrate with respect to $\alpha$ to get
$$f(a_1 + \theta z_1, \Delta z | \Delta a + \theta \Delta z) = f(a_1 + \theta z_1 | \Delta a + \theta \Delta z) f(\Delta z | \Delta a + \theta \Delta z).$$
As this expression can be rewritten as
$$f(\Delta z | \Delta a + \theta \Delta z, a_1 + z_1 \theta) = f(\Delta z | \Delta a + \theta \Delta z),$$
Conditions (g) and (h) of Lee (1999) demand that
$$f(\Delta z | a_1 + z_1\theta, a_2 + z_2\theta) = f(\Delta z | \Delta a + \theta\Delta z, a_1 + z_1\theta) = f(\Delta z | \Delta a + \theta\Delta z),$$
or in terms of the original variables, that
$$f(\Delta z | x_1\beta, x_2\beta) = f(\Delta z | \Delta x\beta).$$

This is an index sufficiency requirement on the data generating process of the regressors $x$ that is driven by the parameter of interest, $\beta$.

### Example

To illustrate, suppose that $z$ is a single-dimensional regressor and that the regressors are jointly normal with a restricted covariance matrix allowing for contemporaneous correlation only. Moreover,

$$\begin{pmatrix} a_1 \\ a_2 \\ z_1 \\ z_2 \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_{a_1} \\ \mu_{a_2} \\ \mu_{z_1} \\ \mu_{z_2} \end{pmatrix}, \begin{pmatrix} \sigma_{a_1}^2 & 0 & \sigma_{a_1 z_1} & 0 \\ 0 & \sigma_{a_2}^2 & 0 & \sigma_{a_2 z_2} \\ \sigma_{a_1 z_1} & 0 & \sigma_{z_1}^2 & 0 \\ 0 & \sigma_{a_2 z_2} & 0 & \sigma_{z_2}^2 \end{pmatrix} \right).$$
Then
$$\begin{pmatrix} \Delta z \\ x_1\beta \\ x_2\beta \end{pmatrix} \sim N \left( \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}, \begin{pmatrix} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{pmatrix} \right)$$
for

---PAGE_BREAK---

$$
\begin{align*}
\mu_1 &= \mu_{z_2} - \mu_{z_1}, \\
\mu_2 &= \mu_{a_1} + \mu_{z_1} \theta, \\
\mu_3 &= \mu_{a_2} + \mu_{z_2} \theta,
\end{align*}
$$
and
$$
\begin{align*}
\Sigma_{11} &= \operatorname{var}(\Delta z) = \operatorname{var}(z_1) + \operatorname{var}(z_2), \\
\Sigma_{12} &= \operatorname{cov}(\Delta z, x_1 \beta) = -\operatorname{cov}(z_1, a_1 + z_1 \theta) = -\operatorname{cov}(a_1, z_1) - \theta \operatorname{var}(z_1) = -\sigma_{a_1 z_1} - \theta \sigma_{z_1}^2, \\
\Sigma_{13} &= \operatorname{cov}(\Delta z, x_2 \beta) = \operatorname{cov}(z_2, a_2 + z_2 \theta) = \operatorname{cov}(a_2, z_2) + \theta \operatorname{var}(z_2) = \sigma_{a_2 z_2} + \theta \sigma_{z_2}^2, \\
\Sigma_{22} &= \operatorname{var}(x_1 \beta) = \operatorname{var}(a_1 + z_1 \theta) = \sigma_{a_1}^2 + 2\theta \sigma_{a_1 z_1} + \theta^2 \sigma_{z_1}^2, \\
\Sigma_{33} &= \operatorname{var}(x_2 \beta) = \operatorname{var}(a_2 + z_2 \theta) = \sigma_{a_2}^2 + 2\theta \sigma_{a_2 z_2} + \theta^2 \sigma_{z_2}^2, \\
\Sigma_{23} &= \operatorname{cov}(x_1 \beta, x_2 \beta) = 0.
\end{align*}
$$
From standard results on the multivariate normal distribution we have that
|
| 368 |
+
|
| 369 |
+
$$
|
| 370 |
+
\Delta z | x_1 \beta, x_2 \beta
|
| 371 |
+
$$
|
| 372 |
+
|
| 373 |
+
is normal with constant variance and conditional mean function
|
| 374 |
+
|
| 375 |
+
$$
|
| 376 |
+
m(x_1\beta, x_2\beta) = \mu_1 + \frac{(\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23})(x_2\beta - \mu_3) - (\Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33})(x_1\beta - \mu_2)}{\Sigma_{22}\Sigma_{33} - \Sigma_{23}^2}.
|
| 377 |
+
$$
|
| 378 |
+
|
| 379 |
+
To satisfy the condition of index sufficiency we need that
|
| 380 |
+
|
| 381 |
+
$$
|
| 382 |
+
(\Sigma_{13}\Sigma_{22} - \Sigma_{12}\Sigma_{23}) = (\Sigma_{13}\Sigma_{23} - \Sigma_{12}\Sigma_{33}).
|
| 383 |
+
$$
|
| 384 |
+
|
| 385 |
+
Plugging-in the expressions from above, this becomes
|
| 386 |
+
|
| 387 |
+
$$(\sigma_{a_2 z_2} + \theta \sigma_{z_2}^2)(\sigma_{a_1}^2 + 2\theta\sigma_{a_1 z_1} + \theta^2\sigma_{z_1}^2) = (\sigma_{a_1 z_1} + \theta\sigma_{z_1}^2)(\sigma_{a_2}^2 + 2\theta\sigma_{a_2 z_2} + \theta^2\sigma_{z_2}^2).$$
|
| 388 |
+
---PAGE_BREAK---
|
| 389 |
+
|
| 390 |
+
We can write this condition as the third-order polynomial equation (in $\theta$)
|
| 391 |
+
|
| 392 |
+
$$C + B\theta + A\theta^2 + D\theta^3 = 0$$
|
| 393 |
+
|
| 394 |
+
with coefficients
|
| 395 |
+
|
| 396 |
+
$$
|
| 397 |
+
\begin{align*}
|
| 398 |
+
C &= \sigma_{a_1}^2 \sigma_{a_2 z_2} - \sigma_{a_2}^2 \sigma_{a_1 z_1} \\
|
| 399 |
+
B &= \sigma_{a_1}^2 \sigma_{z_2}^2 + 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} - \sigma_{a_2}^2 \sigma_{z_1}^2 - 2\sigma_{a_2 z_2} \sigma_{a_1 z_1} \\
|
| 400 |
+
&= \sigma_{a_1}^2 \sigma_{z_2}^2 - \sigma_{a_2}^2 \sigma_{z_1}^2 \\
|
| 401 |
+
A &= \sigma_{a_1 z_1} \sigma_{z_2}^2 - \sigma_{a_2 z_2} \sigma_{z_1}^2 \\
|
| 402 |
+
D &= 0.
|
| 403 |
+
\end{align*}
|
| 404 |
+
$$
|
| 405 |
+
|
| 406 |
+
For $t = 1, 2$, let
|
| 407 |
+
|
| 408 |
+
$$\rho_t = \frac{\sigma_{a_t z_t}}{\sigma_{a_t} \sigma_{z_t}}, r_t = \frac{\sigma_{a_t}}{\sigma_{z_t}}.$$
|
| 409 |
+
|
| 410 |
+
Then
|
| 411 |
+
|
| 412 |
+
$$
|
| 413 |
+
\begin{align*}
|
| 414 |
+
\frac{C}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \rho_2 r_1 - \rho_1 r_2 \\
|
| 415 |
+
\frac{B}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{r_1}{r_2} - \frac{r_2}{r_1} \\
|
| 416 |
+
\frac{A}{\sigma_{a_1}\sigma_{a_2}\sigma_{z_1}\sigma_{z_2}} &= \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}.
|
| 417 |
+
\end{align*}
|
| 418 |
+
$$
|
| 419 |
+
|
| 420 |
+
The polynomial condition therefore is
|
| 421 |
+
|
| 422 |
+
$$(\rho_2 r_1 - \rho_1 r_2) + \left( \frac{r_1}{r_2} - \frac{r_2}{r_1} \right) \theta + \left( \frac{\rho_1}{r_2} - \frac{\rho_2}{r_1} \right) \theta^2 = 0.$$
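As a quick numerical illustration, the quadratic condition in $\theta$ can be solved directly. The parameter values below are hypothetical, chosen only for demonstration; the sketch also checks that the discriminant $B^2 - 4AC$ matches the closed form analyzed later in the text.

```python
import numpy as np

# Hypothetical parameter values (for illustration only).
rho1, rho2 = 0.3, 0.5   # correlations rho_t = sigma_{a_t z_t} / (sigma_{a_t} sigma_{z_t})
r1, r2 = 1.5, 0.8       # ratios r_t = sigma_{a_t} / sigma_{z_t}

# Coefficients of C + B*theta + A*theta^2 = 0 (after scaling by the sigma products).
C = rho2 * r1 - rho1 * r2
B = r1 / r2 - r2 / r1
A = rho1 / r2 - rho2 / r1

roots = np.roots([A, B, C])   # the values of theta satisfying the condition
print(roots)

# Discriminant, and the closed form Delta(x) with x = r1/r2 used in the text.
disc = B**2 - 4 * A * C
x = r1 / r2
disc_closed = x**2 + 1 / x**2 - 2 - 4 * (rho1 * rho2 * (x + 1 / x) - (rho1**2 + rho2**2))
print(disc, disc_closed)
```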

Note that the leading polynomial coefficient is equal to zero if and only if $\rho_1 r_1 = \rho_2 r_2$. This leads to three mutually exclusive cases:

(i) The data are stationary, that is, $\rho_1 = \rho_2$ and $r_1 = r_2$. Then all polynomial coefficients are zero, so that all values of $\theta$ satisfy Lee's restriction.

(ii) We have $\rho_1 r_1 = \rho_2 r_2$ but $r_1 \neq r_2$. Then the resulting linear equation admits one and only one solution in $\theta$.

(iii) The leading polynomial coefficient is non-zero, so $\rho_1 r_1 \neq \rho_2 r_2$. In this case the discriminant

---PAGE_BREAK---

of the second-order polynomial equals

$$
\begin{align*}
\Delta &= \left(\frac{r_1}{r_2} - \frac{r_2}{r_1}\right)^2 - 4 \left(\frac{\rho_1}{r_2} - \frac{\rho_2}{r_1}\right) (\rho_2 r_1 - \rho_1 r_2) \\
&= \left(\frac{r_1}{r_2}\right)^2 + \left(\frac{r_2}{r_1}\right)^2 - 2 - 4 \left( \rho_1 \rho_2 \left\{ \frac{r_1}{r_2} + \frac{r_2}{r_1} \right\} - (\rho_1^2 + \rho_2^2) \right).
\end{align*}
$$

Set $x = \frac{r_1}{r_2} > 0$ and write

$$
\Delta(x) = x^2 + \frac{1}{x^2} - 2 - 4\left(\rho_1\rho_2\left(x + \frac{1}{x}\right) - (\rho_1^2 + \rho_2^2)\right),
$$

which is smooth for $x > 0$. The derivative of $\Delta$ with respect to $x$ equals

$$
\begin{align*}
\Delta'(x) &= 2x - \frac{2}{x^3} - 4\rho_1\rho_2\left(1 - \frac{1}{x^2}\right) \\
&= \frac{2}{x^3}\left(x^4 - 1\right) - \frac{4\rho_1\rho_2}{x^2}\left(x^2 - 1\right) \\
&= \frac{2}{x^3}\left(x^2 - 1\right)\left(x^2 + 1 - 2\rho_1\rho_2 x\right).
\end{align*}
$$

Note that the Cauchy-Schwarz inequality implies that $x^2 + 1 - 2\rho_1\rho_2 x \ge 0$, so that, for $x > 0$,

$$
\operatorname{sgn}(\Delta'(x)) = \operatorname{sgn}(x - 1).
$$

Further, $\Delta(1) = 4(\rho_1 - \rho_2)^2 \ge 0$. Therefore, $\Delta(x)$ is always non-negative. Hence, in this case, the polynomial condition generically has two solutions in $\theta$.
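The non-negativity of the discriminant can also be checked numerically. The sketch below (illustrative only) evaluates $\Delta(x)$ on a grid of $x > 0$ for randomly drawn admissible correlations and verifies both the sign claim and the value at $x = 1$.

```python
import numpy as np

# Delta(x) as derived in the text, with x = r1/r2.
def Delta(x, rho1, rho2):
    return x**2 + 1 / x**2 - 2 - 4 * (rho1 * rho2 * (x + 1 / x) - (rho1**2 + rho2**2))

rng = np.random.default_rng(0)
xs = np.linspace(0.05, 20.0, 2001)          # grid over x > 0
for _ in range(200):
    rho1, rho2 = rng.uniform(-1, 1, size=2)  # admissible correlations
    vals = Delta(xs, rho1, rho2)
    assert vals.min() >= -1e-12              # Delta(x) is non-negative
    # minimum value at x = 1 equals 4 * (rho1 - rho2)^2
    assert abs(Delta(1.0, rho1, rho2) - 4 * (rho1 - rho2)**2) < 1e-12
print("checks passed")
```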

Conclusion

Conditions (g) and (h) of Lee (1999) imply an index-sufficiency condition on the distribution of the regressors. In generic cases of a standard example, this condition is restrictive: it is satisfied not by every possible value of the parameter of interest, $\theta$, but by only two.

REFERENCES

Chamberlain, G. (2010), “Binary Response Models for Panel Data: Identification and Information,” *Econometrica*, 78, 159–168.

Horowitz, J. L. (1992), “A Smoothed Maximum Score Estimator for the Binary Response Model,” *Econometrica*, 60, 505–531.

Klein, R. W., and Spady, R. H. (1993), “An Efficient Semiparametric Estimator for Binary Choice Models,” *Econometrica*, 61, 387–421.

Lee, M.-J. (1999), “A Root-N Consistent Semiparametric Estimator for Related-Effects Binary Response Panel Data,” *Econometrica*, 67, 427–433.

---PAGE_BREAK---

Magnac, T. (2004), “Panel Binary Variables and Sufficiency: Generalizing Conditional Logit,” *Econometrica*, 72, 1859–1876.

Manski, C. F. (1987), “Semiparametric Analysis of Random Effects Linear Models from Binary Panel Data,” *Econometrica*, 55, 357–362.

Rasch, G. (1960), “Probabilistic Models for Some Intelligence and Attainment Tests,” Unpublished report, The Danish Institute of Educational Research, Copenhagen.

Sherman, R. P. (1993), “The Limiting Distribution of the Maximum Rank Correlation Estimator,” *Econometrica*, 61, 123–137.
samples_new/texts_merged/565481.md

---PAGE_BREAK---

Homework Handout II

A. For the following, $\mathcal{V}$ is a three-dimensional space of traditional vectors with standard basis

$$S = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}.$$

(If you prefer, use $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ or $\{\mathbf{x}, \mathbf{y}, \mathbf{z}\}$.)

Also, let

$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$$

where

$$\mathbf{b}_1 = \mathbf{i} + 2\mathbf{j}, \quad \mathbf{b}_2 = 3\mathbf{j} - \mathbf{k} \quad \text{and} \quad \mathbf{b}_3 = 2\mathbf{i} - 3\mathbf{j}.$$

1. Solve for $\mathbf{i}, \mathbf{j}$ and $\mathbf{k}$ in terms of $\mathbf{b}_1, \mathbf{b}_2$ and $\mathbf{b}_3$.

2. Is $\mathcal{B}$ a basis for $\mathcal{V}$? Give a reason for your answer.

3. Let $\mathbf{v} = 2\mathbf{i} + 3\mathbf{j} + 4\mathbf{k}$. What is $\mathbf{v}$ in terms of $\mathcal{B}$?

4. Find the following (with $\mathbf{v}$ as above):

$$
\begin{align*}
&|\mathbf{i}\rangle_S, |\mathbf{j}\rangle_S, |\mathbf{k}\rangle_S, |\mathbf{b}_1\rangle_S, |\mathbf{b}_2\rangle_S, |\mathbf{b}_3\rangle_S, |\mathbf{v}\rangle_S, \\
&|\mathbf{i}\rangle_B, |\mathbf{j}\rangle_B, |\mathbf{k}\rangle_B, |\mathbf{b}_1\rangle_B, |\mathbf{b}_2\rangle_B, |\mathbf{b}_3\rangle_B \text{ and } |\mathbf{v}\rangle_B.
\end{align*}
$$

5. Compute $\langle \mathbf{b}_i | \mathbf{b}_j \rangle$ (i.e., $\mathbf{b}_i \cdot \mathbf{b}_j$) for all possible $i$'s and $j$'s.

6. Let $\mathbf{v} = v_1 \mathbf{b}_1 + v_2 \mathbf{b}_2 + v_3 \mathbf{b}_3$ and $\mathbf{w} = w_1 \mathbf{b}_1 + w_2 \mathbf{b}_2 + w_3 \mathbf{b}_3$. Find the corresponding component formulas for $\langle \mathbf{v} | \mathbf{w} \rangle$ and $\|\mathbf{v}\|$.
(Note: $\langle \mathbf{v} | \mathbf{w} \rangle \neq v_1 w_1 + v_2 w_2 + v_3 w_3$ and $\|\mathbf{v}\| \neq \sqrt{(v_1)^2 + (v_2)^2 + (v_3)^2}$!)

7. (optional) Suppose $\mathbf{c}$ is any vector in $\mathcal{V}$ and let the components of $\mathbf{c}$ with respect to our two bases be denoted by

$$|\mathbf{c}\rangle_S = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} \quad \text{and} \quad |\mathbf{c}\rangle_B = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix}.$$

Find the formulas for computing the $\alpha_k$'s from the $\beta_k$'s, and for computing the $\beta_k$'s from the $\alpha_k$'s.
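Hand computations for problems 1-3 above can be spot-checked numerically. The NumPy sketch below (an illustration, not the intended pencil-and-paper solution) encodes the $\mathbf{b}$-vectors as the columns of a matrix $M$ of their $S$-components.

```python
import numpy as np

# Columns of M are the S-components of b1, b2, b3.
M = np.array([[1.0, 0.0, 2.0],    # i-components
              [2.0, 3.0, -3.0],   # j-components
              [0.0, -1.0, 0.0]])  # k-components

# Problem 2: B is a basis iff M is invertible (nonzero determinant).
print(np.linalg.det(M))           # nonzero, so B is a basis

# Problem 1: the columns of M^{-1} give the B-components of i, j, k.
print(np.linalg.inv(M))

# Problem 3: components of v = 2i + 3j + 4k with respect to B solve M beta = v.
v = np.array([2.0, 3.0, 4.0])
beta = np.linalg.solve(M, v)
print(beta)                       # v = (36/7) b1 - 4 b2 - (11/7) b3
```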

---PAGE_BREAK---

B. Consider (but don't bother solving yet) the differential equation

$$y'' + y = 0 .$$

1. Suppose $y_1$ and $y_2$ are two solutions to this differential equation. Verify that any linear combination of these two solutions is also a solution.

2. Let $S$ be the set of all solutions to this differential equation. Is $S$ a vector space? Explain.

3. What is the general solution to this differential equation? What does it tell you about a possible basis for $S$ and the dimension of $S$?

C. Let $S$ be the set of all solutions to some given homogeneous linear differential equation

$$ay'' + by' + cy = 0$$

where $a$, $b$, and $c$ are known functions. Show that $S$ is a vector space. (If you recall enough from your old differential equations class, you can even state the dimension of $S$.)

D. Compute $\langle \mathbf{v} | \mathbf{w} \rangle$, $\langle \mathbf{w} | \mathbf{v} \rangle$ and $\|\mathbf{v}\|$ when the vector space is $\mathbb{C}^2$, $\mathbf{v} = (3i, 2+3i)$ and $\mathbf{w} = (4, 5+2i)$.

E. Compute the “energy norm” inner product of two functions $f$ and $g$ on the interval $[0, 1]$,

$$\langle f | g \rangle = \int_{0}^{1} f^{*}(x) g(x)\, dx,$$

for the following choices of $f$ and $g$ (simplify your answers as much as practical):

1. $f(x) = 3 + (2 + 3i)x$ and $g(x) = 5x - 2ix^2$

2. $f(x) = 3 + (2 + 3i)e^{i2\pi x}$ and $g(x) = e^{i\pi x}$

3. $f(x) = e^{i2\pi x}$ and $g(x) = 2 + x$
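A hand answer to these integrals can be spot-checked numerically. The sketch below (illustrative, using a simple midpoint rule) evaluates choice 1.

```python
import numpy as np

# Midpoint-rule approximation of <f|g> = \int_0^1 conj(f(x)) g(x) dx
# for choice 1: f(x) = 3 + (2+3i)x, g(x) = 5x - 2i x^2.
N = 200000
x = (np.arange(N) + 0.5) / N          # midpoints of N equal subintervals
f = 3 + (2 + 3j) * x
g = 5 * x - 2j * x**2
inner = np.mean(np.conj(f) * g)       # mean of midpoint values = integral estimate
print(inner)                          # close to 28/3 - 8i
```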

F. Let $\mathbf{a}$ and $\mathbf{v}$ be two (nonzero) vectors from a vector space $\mathcal{V}$ with an inner product $\langle \cdot | \cdot \rangle$. Define the “generalized projection of vector $\mathbf{v}$ onto vector $\mathbf{a}$” by

$$\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}) = \frac{\langle \mathbf{a} | \mathbf{v} \rangle}{\|\mathbf{a}\|^2} \mathbf{a} ,$$

and define the corresponding “generalized projection of $\mathbf{v}$ orthogonal to $\mathbf{a}$” by

$$\vec{\mathrm{or}}_{\mathbf{a}}(\mathbf{v}) = \mathbf{v} - \vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}).$$

---PAGE_BREAK---

Note that we automatically have that $\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v})$ is “parallel” to $\mathbf{a}$, and that

$$ \mathbf{v} = \vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}) + \vec{\mathrm{or}}_{\mathbf{a}}(\mathbf{v}) . $$

Now confirm that the set $\{\vec{\mathrm{pr}}_{\mathbf{a}}(\mathbf{v}), \vec{\mathrm{or}}_{\mathbf{a}}(\mathbf{v})\}$ is orthogonal.

G. Let $V$ be the linear space of all functions of the form

$$ f(x) = \alpha_{-2}e^{-i4\pi x} + \alpha_{-1}e^{-i2\pi x} + \alpha_0 + \alpha_1 e^{i2\pi x} + \alpha_2 e^{i4\pi x} $$

where $\alpha_{-2}, \alpha_{-1}, \alpha_0, \alpha_1$ and $\alpha_2$ are constants.

1. Using the inner product

$$ \langle f | g \rangle = \int_{0}^{1} f^*(x) g(x)\, dx , $$

verify that both

$$ B_E = \{e^{-i4\pi x}, e^{-i2\pi x}, 1, e^{i2\pi x}, e^{i4\pi x}\} $$

and

$$ B_T = \{1, \cos(2\pi x), \sin(2\pi x), \cos(4\pi x), \sin(4\pi x)\} $$

are orthogonal bases for $V$.

2. What is $|e^{i2\pi x}\rangle_{B_T}$? $|\sin(4\pi x)\rangle_{B_E}$? (That is, find the components of each function with respect to the indicated basis.)

3. Construct the orthonormal basis corresponding to $B_E$ and the orthonormal basis corresponding to $B_T$.

H. Let $V$ be a three-dimensional space of traditional vectors with a “standard” basis

$$ S = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\} . $$

(If you prefer, use $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ or $\{\mathbf{x}, \mathbf{y}, \mathbf{z}\}$.)

Using the Gram-Schmidt procedure, construct an orthonormal basis for $V$ from

$$ B = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\} $$

where

$$ \mathbf{b}_1 = \mathbf{i} + 2\mathbf{j} , \quad \mathbf{b}_2 = 3\mathbf{j} - \mathbf{k} \quad \text{and} \quad \mathbf{b}_3 = 2\mathbf{i} - 3\mathbf{j} . $$

---PAGE_BREAK---

I. You should have already convinced yourself that the space $P$ of all polynomials has the basis

$$ \{1, x, x^2, x^3, x^4, x^5, \ldots\} . $$

However, this basis is not orthonormal or even orthogonal with respect to the inner product

$$ \langle f | g \rangle = \int_{0}^{1} f^{*}(x) g(x)\, dx . $$

Let

$$ \Phi = \{ \phi_0(x), \phi_1(x), \phi_2(x), \phi_3(x), \phi_4(x), \phi_5(x), \ldots \} $$

be the corresponding orthonormal basis generated from the above basis by the Gram-Schmidt procedure.

1. Find the formulas for $\phi_0(x)$, $\phi_1(x)$ and $\phi_2(x)$.

2. Find the components of $f(x) = 2 + 3x^2$ with respect to the basis $\Phi$.
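A discretized Gram-Schmidt run can be used to check answers to problem I. The sketch below (a numerical illustration, not the symbolic answer) orthonormalizes $\{1, x, x^2\}$ under the stated inner product, approximated by a midpoint rule.

```python
import numpy as np

# Discretize [0, 1] and approximate <f|g> = \int_0^1 f(x) g(x) dx.
N = 200000
x = (np.arange(N) + 0.5) / N

def inner(f, g):
    return np.mean(f * g)   # midpoint-rule approximation of the integral

# Gram-Schmidt on {1, x, x^2}: subtract projections, then normalize.
basis = [np.ones_like(x), x, x**2]
phi = []
for p in basis:
    q = p.copy()
    for e in phi:
        q = q - inner(e, q) * e
    phi.append(q / np.sqrt(inner(q, q)))

# Known closed forms (shifted Legendre polynomials, normalized on [0, 1]):
# phi0 = 1, phi1 = sqrt(3)(2x - 1), phi2 = sqrt(5)(6x^2 - 6x + 1).
print(np.allclose(phi[1], np.sqrt(3) * (2 * x - 1), atol=1e-3))

# Components of f(x) = 2 + 3x^2 with respect to Phi (problem I.2).
fvals = 2 + 3 * x**2
print([inner(p, fvals) for p in phi])   # approximately [3, sqrt(3)/2, sqrt(5)/10]
```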
samples_new/texts_merged/5893423.md
samples_new/texts_merged/6026555.md

---PAGE_BREAK---

# Thermodynamics of Efflux Process of Liquids and Gases

E. A. Mikaelian¹, Saif A. Mouhammad²*

¹Gubkin Russian State University of Oil and Gas, Moscow, Russia

²Physics Department, Faculty of Science, Taif University, Taif, Kingdom of Saudi Arabia

Email: saifnet70@hotmail.com

Received 29 March 2015; accepted 11 May 2015; published 14 May 2015

Copyright © 2015 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Open Access

## Abstract

The main objective of this work is to obtain the calculated ratios of the efflux processes for liquids, vapors and gases on the basis of the developed mathematical model, which makes it possible to determine the characteristics of the channel profiles of nozzles and diffusers and to solve a number of subsequent applied problems of mode analysis. The calculated ratios are based on the equations of the first law of thermodynamics for the flow of liquids and gases. The obtained calculated ratios are extended to the case of the efflux of compressible liquids, vapors and gases and, as a special case, to incompressible liquids. The characteristics of the critical efflux regime of liquids are obtained, which make it possible to determine the linear and mass efflux rates of the critical regime and the calculated characteristics of the channel profiles of nozzles and diffusers, including Laval nozzles, for different modes of operation.

## Keywords

Thermodynamics, Efflux, Compressible, Incompressible, Liquids, Diffusers, Nozzles

## 1. Introduction

The efflux processes are quite common in various technological processes performed with the power technology equipment in the gas and oil industry: in heat engines, pumps, compressor machines, mass-and-heat exchange units, pipelines, and in separate elements of machines and devices such as nozzles, diffusers, convergent nozzles, mud guns, fittings, locking devices, gate valves, valves, various calibration holes, etc. It is worth emphasising the special role of studying the processes of gas and liquid efflux through various sorts of leaks and gaps [1] [2].

The efflux process can be considered as a special case of the occurrence and distribution of potential work. Effective work in the efflux process is distributed into the work directly transmitted to the bodies of the external system

*Corresponding author.

---PAGE_BREAK---

(in our case, in the efflux process, this work is absent: $\delta W_{ez}^* = 0$) and into a change in the energy of external position of the working medium itself ($de_{ez}$). The last term, in turn, consists of the kinetic energy $d(c^2/2)$ and the potential energy ($g\,dz$).

Thus, the initial equation of the theoretical efflux process has the following form:

$$ \delta W = -VdP = d(c^2/2) + gdz. \quad (1) $$

Switching to the real efflux processes is then carried out by introducing correction factors: the velocity factor ($\varphi$) and the flow-rate factor ($\varphi_*$). The integral of the initial equation of efflux, expressing the potential flow work from the initial section 1 to the final section 2 of a flow, has the following form:

$$ W_{12} = c_2^2/2 - c_1^2/2 + g(z_2 - z_1); \quad (2) $$

$$ W_{12} = \left[ 1 - \left( \frac{P_2}{P_1} \right)^{(n-1)/n} \right] \frac{P_1 V_1 n}{n-1}. \quad (3) $$

The rate of gas efflux in the initial section can be considered as the result of an efflux from a conditional initial state 0-0 with zero velocity $c_0 = 0$, level $z_0 = z_2$ and pressure $P_0$.

Then the calculated expressions for the potential work and the linear efflux velocity at the final section of the flow are determined by the following equations:

$$ W_{02} = W_{12} + W_{01} = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \frac{P_0 V_0 n}{n-1}; \quad (4) $$

$$ c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}. \quad (5) $$

The theoretical efflux process is regarded as an adiabatic one; then, based on the first law of thermodynamics for the flow, the potential flow work is determined as the specific heat drop of the flow, equal to the difference between its heat contents (enthalpies) [3] [4]:

$$ W_{12} = h_1 - h_2; \quad q_{12} = 0. \quad (6) $$

Further, the mass efflux rate is entered in the calculations:

$$ u = G/f = V\rho/f = \rho c, \quad (7) $$

where $f$ is the cross section of the flow under consideration; $G$ and $V$ are the mass flow rate and the volumetric flow rate; $\rho$ is the liquid density; $c$ is the linear velocity of the liquid in the direction of movement (the average velocity in the section $f$ in the direction of a normal to this section).

The concept of the mass flow rate is the most essential one in this research. The concept of linear velocity characterises only the kinetic energy of the flow; the averaging of such a velocity depends on the flow mode (laminar, transitional, turbulent) and is not identical with the mass flow rate.

The calculated expression for the theoretical mass efflux rate at the outlet section is obtained from the last equation, using the linear velocity from (4) and (5) and the equation of the efflux process:

$$ u_2 = \left\{ 2\,\frac{n}{n-1}\,\frac{P_0}{V_0} \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}} \right\}^{0.5}. \quad (8) $$

For transition to the real characteristics of a flow we introduce the correction factors into the calculations:

$$ c = \varphi c_2; \quad u = \varphi_* u_2; \quad G = uf = \varphi_* u_2 f = \varphi_* \rho_2 c_2 f. \quad (9) $$

In the formula (9) the velocity and flow-rate factors are determined as the ratios of actual to theoretical velocities:

$$ \varphi = c/c_2 = V/(fc_2); \quad \varphi_* = u/u_2 = G/(fu_2). \quad (10) $$

The work of irreversible energy losses associated with the real efflux process is:

$$ W^{**} = (c_2^2 - c^2)/2 = (1 - \varphi^2)c_2^2/2 = \xi c_2^2/2, \quad (11) $$

---PAGE_BREAK---

where $\xi$ is the factor of energy losses in the real process.

To calculate the velocity and flow-rate factors, as follows from formulas (10), it is necessary to arrange mass (volume) measurements of the liquid flow rates [5] [6].

## 2. Efflux of Incompressible Liquids

The initial condition is ($\rho_1 = \rho_2 = \rho = 1/\nu = \text{idem}$):

$$W_{12} = (P_1 - P_2)/\rho; \quad W_{02} = (P_0 - P_2)/\rho. \quad (12)$$

Further, by using the initial general ratios (5), (7), (9) and Equation (12), we obtain the calculated ratios for the particular case of the efflux of incompressible liquids:

$$c_2 = (2W_{02})^{0.5} = \left[ 2W_{12} + c_1^2 + 2g(z_1 - z_2) \right]^{0.5} = \left[ 2(P_0 - P_2)/\rho \right]^{0.5} \\ = \left[ 2(P_1 - P_2)/\rho + c_1^2 + 2g(z_1 - z_2) \right]^{0.5}; \quad (13)$$

$$u_2 = G/f = V\rho/f = \rho c_2 = \left[ 2(P_0 - P_2)\rho \right]^{0.5}; \quad (14)$$

$$G = \varphi_* u_2 f. \quad (15)$$
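As a worked illustration of Equations (13)-(15) (the numbers below are hypothetical, chosen only for demonstration): water at $\rho = 1000$ kg/m³ discharging from $P_0 = 3.0 \times 10^5$ Pa to $P_2 = 1.0 \times 10^5$ Pa through a small outlet.

```python
# Hypothetical example values for an incompressible-liquid efflux.
rho = 1000.0              # liquid density, kg/m^3
P0, P2 = 3.0e5, 1.0e5     # stagnation and outlet pressures, Pa

c2 = (2 * (P0 - P2) / rho) ** 0.5   # Eq. (13): linear velocity, m/s
u2 = (2 * (P0 - P2) * rho) ** 0.5   # Eq. (14): mass efflux rate, kg/(m^2 s)

f = 1.0e-4                # outlet cross section, m^2 (assumed)
phi_flow = 0.97           # flow-rate factor, an assumed value
G = phi_flow * u2 * f     # Eq. (15): mass flow rate, kg/s
print(c2, u2, G)          # c2 = 20 m/s, u2 = 2.0e4 kg/(m^2 s)
```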

The obtained ratios can also be applied to the efflux of compressible liquids (gases) under the condition of insignificant variations of the density. In this case, the average density value should be introduced in Formulas (12)-(15), for example as the arithmetic mean:

$$(\rho_1 + \rho_2)/2 = \rho_m.$$

## 3. Efflux of Compressible Liquids (Gases)

The general solution of problems concerning the efflux of compressible liquids is obtained by a corresponding development of the previously obtained initial relationships.

From a consideration of the original ratio (8) it follows that the mass efflux rate becomes zero for the following values of the pressure ratio: 1) $P_2/P_0 = 1$, which takes place at the beginning of the efflux, where $c = 0$ and $u = c\rho = 0$ due to the velocity; 2) $P_2/P_0 = 0$, in the efflux to vacuum, where at the outlet section $\rho = 0$ and $u = c\rho = 0$ due to the density. Within this range, the mass efflux rate passes through a maximum (Rolle's theorem). This means that the variable factor of the radicand in (8) passes through a maximum:

$$\Psi = \left[ 1 - \left( \frac{P_2}{P_0} \right)^{\frac{n-1}{n}} \right] \left( \frac{P_2}{P_0} \right)^{\frac{2}{n}}. \quad (16)$$

Let us introduce the following designations:

$$(P_2/P_0)^{(n-1)/n} = \tau; \quad (P_2/P_0)^{2/n} = \tau^{2/(n-1)}. \quad (17)$$

Using Rolle's theorem to locate the maximum of this function, we obtain the parameters of the critical efflux mode for compressible liquids:

$$\tau_{cr} = (P_2/P_0)_{cr}^{(n-1)/n} = 2/(n+1), \quad (18)$$

$$\beta = (P_2/P_0)_{cr} = \tau_{cr}^{n/(n-1)} = \left[ 2/(n+1) \right]^{n/(n-1)}, \quad (19)$$

$$\Psi_{cr} = (1 - \tau_{cr}) \tau_{cr}^{2/(n-1)}. \quad (20)$$

Depending on the parameters of the critical efflux mode, the linear and mass efflux rates of the critical mode are determined:

$$c_{cr} = \left[ n(PV)_{cr} \right]^{0.5}, \quad (21)$$

---PAGE_BREAK---

**Table 1. Characteristic values of the discharge critical mode.**

<table><thead><tr><th>n</th><th>1.1</th><th>1.2</th><th>1.3</th><th>1.4</th><th>average</th></tr></thead><tbody><tr><td>τ<sub>cr</sub> = 2/(n+1)</td><td>0.953</td><td>0.909</td><td>0.870</td><td>0.833</td><td></td></tr><tr><td>β = τ<sup>n/(n-1)</sup><sub>cr</sub></td><td>0.5847</td><td>0.5645</td><td>0.5457</td><td>0.5283</td><td>~0.55</td></tr><tr><td>Ψ<sub>cr</sub></td><td>1.9677</td><td>2.0309</td><td>2.0896</td><td>2.1443</td><td>~2.05</td></tr></tbody></table>

$$u_{cr} = \left[ 2P_0 \rho_0 \Psi_{cr} n / (n-1) \right]^{0.5}. \quad (22)$$

**Table 1** shows the values of the critical discharge characteristics depending on the exponent of the efflux process.
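The first two rows of Table 1 follow directly from Equations (18) and (19) and can be reproduced numerically; the computed values agree with the printed table up to rounding.

```python
# Reproduce tau_cr and beta from Eqs. (18)-(19) for the exponents of Table 1.
rows = {}
for n in (1.1, 1.2, 1.3, 1.4):
    tau_cr = 2 / (n + 1)               # Eq. (18)
    beta = tau_cr ** (n / (n - 1))     # Eq. (19)
    rows[n] = (tau_cr, beta)
    print(f"n={n}: tau_cr={tau_cr:.3f}  beta={beta:.4f}")
```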

## 4. Particular Cases of Efflux

The ideal gas ($PV = RT$):

$$c_{cr} = [nRT_{cr}]^{0.5}; \quad u_{cr} = P_0 \left[ 2\Psi_{cr} n / \left( (n-1)RT_{cr} \right) \right]^{0.5}. \quad (23)$$

The incompressible liquid ($V = \text{idem}$; $n = \infty$): $c_{cr} = \infty$.

This means that the critical mode is unattainable for incompressible liquids. The critical linear velocity of the adiabatic efflux ($n = k$) is the velocity of sound:

$$a^* = [k(PV)_{cr}]^{0.5},$$

and for the ideal gas:

$$a^* = [kRT_{cr}]^{0.5}. \quad (24)$$

## 5. Conclusion

According to the energy conservation law, the equation of distribution and occurrence of the potential work of arbitrary thermodynamic systems is obtained. Taking it as a basis for the theory of the efflux of gases and of compressible and incompressible liquids, the characteristic features of the critical mode of liquid efflux are obtained. The derived calculated ratios will further determine the calculated characteristics of the channel profiles of nozzles and diffusers, including Laval nozzles, for a range of operating modes.

## References

[1] Mikaelian, E.A. (2000) Maintenance of Energotechnological Equipment, Gas Turbine Gas Compressor Units of Gas Gathering and Transportation. Methodology, Research, Analysis and Practice. Fuel and Energy, Moscow, 304.
http://www.dobi.oglib.ru/bgl/5076.html

[2] Mikaelian, E.A. (2001) Improving the Quality, to Ensure Reliability and Safety of the Main Pipelines. In: Margulov, G.D., Ed., Series: Sustainable Energy and Society, Fuel and Energy, Moscow, 640.
http://www.dobi.oglib.ru/bgl/4625.html

[3] Vladimirov, A.I. and Kershenbaum, Y.V. (2008) Industrial Safety of Compressor Stations. Management of Safety and Reliability. Inter-Sector Foundation “National Institute of Oil and Gas”, Moscow, 640.
http://www.mdk-arbat.ru/bookcard?book_id=3304125

[4] Mikaelian, E.A. (2008) Diagnosis of Energotechnological Equipment GGPA Based on Various Diagnostic Features. Gas Industry, **4**, 59-63.

[5] Mikaelian, E.A. (2014) Determination of the Characteristic Features and Technical Condition of the Gas-Turbine and Gas-Compressor Units of Compressor Stations Based on a Simplified Thermodynamic Model. Quality Management in Oil and Gas Industry, **1**, 44-48.
http://instoilgas.ru/ukang

[6] Mikaelian, E.A. and Mouhammed, S.A. (2014) Survey of Equipment of Gas Transmission Systems. Quality Management in Oil and Gas Industry, **4**, 29-36.
http://instoilgas.ru/ukang
samples_new/texts_merged/6080891.md
ADDED
|
@@ -0,0 +1,760 @@
# A Hankel matrix acting on Hardy and Bergman spaces

by

PETROS GALANOPOULOS and JOSÉ ÁNGEL PELÁEZ (Málaga)

**Abstract.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$. Let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k}\, d\mu(t)$. The matrix $\mathcal{H}_\mu$ formally induces an operator on the space of all analytic functions in the unit disc by the formula

$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}, $$

where $f(z) = \sum_{n=0}^{\infty} a_n z^n$ is an analytic function in $\mathbb{D}$.

We characterize those positive Borel measures on $[0,1)$ for which $\mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz}\, d\mu(t)$ for all $f$ in the Hardy space $H^1$, and among them we describe those for which $\mathcal{H}_\mu$ is bounded and compact on $H^1$. We also study the analogous problem for the Bergman space $A^2$.

**1. Introduction.** We denote by $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$ the unit disc and by $\mathbb{T}$ the unit circle. Let $\operatorname{Hol}(\mathbb{D})$ be the space of analytic functions in $\mathbb{D}$ and let $H^p$ $(0 < p \le \infty)$ be the classical Hardy space of analytic functions in $\mathbb{D}$ (see [D]).

If $0 < p < \infty$, the Bergman space $A^p$ is the set of all $f \in \operatorname{Hol}(\mathbb{D})$ such that

$$ \|f\|_{A^p}^p := \int_{\mathbb{D}} |f(z)|^p\, dA(z) < \infty, $$

where $dA(z) = \pi^{-1}\,dx\,dy$ is the normalized Lebesgue area measure on $\mathbb{D}$. For the theory of these spaces we refer to [DS] and [Zh].

Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and let $\mathcal{H}_\mu = (\mu_{n,k})_{n,k \ge 0}$ be the Hankel matrix with entries $\mu_{n,k} = \int_{[0,1)} t^{n+k}\, d\mu(t)$. The matrix $\mathcal{H}_\mu$ formally induces an operator (which will also be denoted $\mathcal{H}_\mu$) on $\operatorname{Hol}(\mathbb{D})$ in the following sense. If $f(z) = \sum_{n \ge 0} a_n z^n \in \operatorname{Hol}(\mathbb{D})$, by multiplying the matrix with the sequence of Taylor coefficients of the function,

$$ \{a_n\}_{n \ge 0} \mapsto \left\{ \sum_{k \ge 0} \mu_{n,k} a_k \right\}_{n \ge 0}, $$

we can formally define

$$ (1.1) \qquad \mathcal{H}_\mu(f)(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{\infty} \mu_{n,k} a_k \right) z^n, \quad z \in \mathbb{D}. $$

2010 Mathematics Subject Classification: Primary 47B35; Secondary 30H10.
Key words and phrases: Hankel matrices, Hardy spaces, Bergman spaces.

If $\mu$ is the Lebesgue measure on $[0,1)$ we get the classical Hilbert matrix $H = \left(\frac{1}{n+k+1}\right)_{n,k \ge 0}$. This matrix induces, in the same way as above, a bounded operator on $H^p$, $p \in (1, \infty)$ (see [DiS]), and on $A^p$, $p \in (2, \infty)$ (see [Di]); estimates of the norms have also been obtained. Further progress in this direction has recently been achieved in [DJV].

In this paper we focus our attention on the limit cases $H^1$ and $A^2$; that is, we study the boundedness, compactness and other related properties of $\mathcal{H}_\mu$ on these spaces in terms of $\mu$. Similar investigations have previously been conducted by several authors in different spaces of analytic functions in $\mathbb{D}$ (see e.g. [W], [Po]).

The classical Hilbert matrix $H$ is well defined but not bounded on $H^1$ (see [DiS]). It is known that the operator induced by the Hilbert matrix is not even well defined on $A^2$. Indeed, $f(z) = \sum_{n=1}^{\infty} \frac{1}{\log(n+1)}z^n \in A^2$ but $Hf(0) = \sum_{n=1}^{\infty} \frac{1}{(n+1)\log(n+1)} = \infty$ (see [DJV]). Thus, it is natural to study under which conditions on the measure $\mu$ the matrix $\mathcal{H}_\mu$ induces a well defined and bounded operator on $H^1$ and on $A^2$.
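The two claims in this example can be illustrated numerically (this sketch is ours, not from the paper). Since $\|f\|_{A^2}^2 = \sum_{n\ge 0} |a_n|^2/(n+1)$ for the normalized area measure, membership of $f$ in $A^2$ amounts to convergence of $\sum 1/((n+1)\log^2(n+1))$, while $Hf(0)$ is the divergent series $\sum 1/((n+1)\log(n+1))$; the partial sums below show the first series nearly settled and the second still growing (like $\log\log N$, so the divergence is slow but genuine).

```python
import math

def partial_sums(N):
    """Partial sums up to n = N of:
    - sum 1/((n+1) log^2(n+1))  (= ||f||_{A^2}^2 for f(z) = Σ z^n/log(n+1)),
    - sum 1/((n+1) log(n+1))    (the series defining Hf(0))."""
    a2_norm_sq, hf0 = 0.0, 0.0
    for n in range(1, N + 1):
        L = math.log(n + 1)
        a2_norm_sq += 1.0 / ((n + 1) * L * L)
        hf0 += 1.0 / ((n + 1) * L)
    return a2_norm_sq, hf0

a2_lo, hf0_lo = partial_sums(10**3)
a2_hi, hf0_hi = partial_sums(10**5)

# The A^2-norm series has essentially converged between N=10^3 and N=10^5,
# while the Hf(0) series still grows by roughly log log(10^5) - log log(10^3).
assert a2_hi - a2_lo < 0.1
assert hf0_hi - hf0_lo > 0.4
```

Of course the rigorous statements follow from the integral test, not from finitely many partial sums; the code only makes the contrast visible.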

The structure of the paper is as follows. In Section 2 we deal with the Hardy space $H^1$. Let $\mu$ be a positive Borel measure on $\mathbb{D}$. For $\alpha \ge 0$ and $s > 0$, we say that $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure, resp. a vanishing $\alpha$-logarithmic $s$-Carleson measure, if

$$ \sup_{a \in \mathbb{D}} \frac{\mu(S(a)) \left(\log \frac{2}{1-|a|^2}\right)^\alpha}{(1 - |a|^2)^s} < \infty, \quad \text{resp. } \lim_{|a| \to 1^{-}} \frac{\mu(S(a)) \left(\log \frac{2}{1-|a|^2}\right)^\alpha}{(1 - |a|^2)^s} = 0. $$

Here $S(a)$ denotes the Carleson box with vertex at $a$, that is,

$$ S(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 1 - |a|, \ \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}. $$

This definition generalizes the fundamental notion of *classical Carleson measure* introduced by Carleson (see [C]); the classical Carleson measures are those obtained for $\alpha = 0$ and $s = 1$.

We shall prove that any classical Carleson measure induces a well defined operator on $H^1$, and conversely that being a Carleson measure is necessary in the following sense.

**PROPOSITION 1.1.** Suppose that $\mu$ is a finite positive Borel measure on $[0, 1)$.

(i) If $\mu$ is a classical Carleson measure, then the power series $\mathcal{H}_\mu(f)(z)$ represents a function in $\operatorname{Hol}(\mathbb{D})$ for any $f \in H^1$, and moreover

$$ (1.2) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz}\, d\mu(t), \quad f \in H^1. $$

(ii) If the integral in (1.2) converges for each $z \in \mathbb{D}$ and $f \in H^1$, then $\mu$ is a classical Carleson measure.

One might hope that any classical Carleson measure $\mu$ induces a bounded operator $\mathcal{H}_\mu$ on $H^1$, but this fails: the Lebesgue measure does not. The next result describes the appropriate subclass of classical Carleson measures.

**THEOREM 1.2.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$.

(i) $\mathcal{H}_\mu : H^1 \to H^1$ is bounded if and only if $\mu$ is a 1-logarithmic 1-Carleson measure.

(ii) $\mathcal{H}_\mu : H^1 \to H^1$ is compact if and only if $\mu$ is a vanishing 1-logarithmic 1-Carleson measure.

In many papers (see [CS], [JPS], [T], [PV] and [Pe]), another approach to the study of Hankel operators on spaces of analytic functions is developed, based on the symbol of the operator, which in our case is essentially the function

$$ (1.3) \qquad h_\mu(z) = \sum_{n=0}^{\infty} \mu_n z^n, \quad \mu_n = \int_{[0,1)} t^n\, d\mu(t). $$

A characterization of the boundedness and compactness of the operator $\mathcal{H}_\mu : H^1 \to H^1$ in terms of $h_\mu$ follows from [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]). We shall provide two proofs of Theorem 1.2: the first based on the integral representation (1.2), and the second using the result just cited.

In the case of $H^2$, $\mathcal{H}_\mu$ is bounded if and only if $\mu$ is a classical Carleson measure (see [Pe]). Power [Po, p. 428] proved that if $\int_{[0,1)} d\mu(t)/(1-t)^2 < \infty$ then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator, and raised the question of finding a necessary condition. The next result solves this problem.

**THEOREM 1.3.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ and suppose that the operator $\mathcal{H}_\mu$ is bounded on $H^2$. Then $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if

$$ (1.4) \qquad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2}\, d\mu(t) < \infty. $$

In Section 3 we turn our attention to $A^2$. First we clarify for which measures the operator is well defined on this space and admits an integral representation.

**PROPOSITION 1.4.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$.

(i) If $\mu$ satisfies (1.4), then the power series $\mathcal{H}_\mu(f)(z)$ is in $\operatorname{Hol}(\mathbb{D})$ for any $f \in A^2$, and moreover

$$ (1.5) \qquad \mathcal{H}_\mu(f)(z) = \int_{[0,1)} \frac{f(t)}{1-tz}\, d\mu(t), \quad f \in A^2. $$

(ii) If the integral in (1.5) converges for any choice of $f \in A^2$ and $z \in \mathbb{D}$, then (1.4) is satisfied.

Unfortunately, condition (1.4) does not imply the boundedness of $\mathcal{H}_\mu$ on $A^2$ (see Theorem 1.5 and Proposition 1.7 below), so we need to look for a stronger one. Observe that (1.4) can be restated by saying that the analytic function $h_\mu$ belongs to the *Dirichlet space*

$$ \mathcal{D} = \left\{ f(z) = \sum_{n=0}^{\infty} a_n z^n \in \operatorname{Hol}(\mathbb{D}) : \int_{\mathbb{D}} |f'(z)|^2\, dA(z) < \infty \right\}, $$

which is a Hilbert space equipped with the inner product $\langle f, g \rangle_{\mathcal{D}} = a_0 \bar{b}_0 + \sum_{n \ge 0} (n+1)a_{n+1} \bar{b}_{n+1}$, where $g(z) = \sum_{n=0}^{\infty} b_n z^n$. We characterize in these terms the boundedness of the operator $\mathcal{H}_\mu$ on $A^2$.

**THEOREM 1.5.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is bounded on $A^2$ if and only if the measure $|h'_\mu(z)|^2\, dA(z)$ is a Dirichlet Carleson measure.

We remind the reader that a finite positive Borel measure $\nu$ on $\mathbb{D}$ is called a *Dirichlet Carleson measure* if the identity operator is bounded from the Dirichlet space to $L^2(\mathbb{D}, \nu)$. We refer to [S] and [ARS] for descriptions of these measures.

It would be desirable to relate the boundedness of the operator directly to a condition on the measure. In this spirit, we are able to describe the Hilbert-Schmidt operators on $A^2$.

**THEOREM 1.6.** Let $\mu$ be a finite positive Borel measure on $[0, 1)$ that satisfies (1.4). The operator $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $A^2$ if and only if

$$ (1.6) \qquad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} \log \frac{1}{1-t}\, d\mu(t) < \infty. $$

Obviously, (1.6) gives bounded operators $\mathcal{H}_\mu$ on $A^2$; perhaps surprisingly, it is sharp for boundedness in a certain sense.

**PROPOSITION 1.7.** For each $\beta \in [0,1)$ there is a finite positive Borel measure $\mu$ on $[0,1)$ such that

$$ (1.7) \qquad \int_{[0,1)} \frac{\mu([t, 1))}{(1-t)^2} \left(\log \frac{1}{1-t}\right)^\beta d\mu(t) < \infty, $$

and $\mathcal{H}_\mu$ is not bounded on $A^2$.

**2. The Hankel matrix $\mathcal{H}_\mu$ acting on $H^1$.** Before we proceed to the proofs of Proposition 1.1 and Theorem 1.2, some results and definitions must be recalled. First, we present an equivalent description of the $\alpha$-logarithmic $s$-Carleson measures (see [Z]).

**LEMMA A.** Suppose that $0 \le \alpha < \infty$, $0 < s < \infty$ and $\mu$ is a positive Borel measure on $\mathbb{D}$. Then $\mu$ is an $\alpha$-logarithmic $s$-Carleson measure if and only if

$$ (2.1) \qquad \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1 - |a|^2} \right)^\alpha \int_{\mathbb{D}} \left( \frac{1 - |a|^2}{|1 - \bar{a}z|^2} \right)^s d\mu(z) < \infty. $$

We shall write $\text{BMOA}_{\log,\alpha}$, $\alpha \ge 0$ (see [Gi] and [PV]), for the space of those $H^1$ functions whose boundary values satisfy

$$ (2.2) \qquad \|f\|_{\text{BMOA}_{\log,\alpha}} = |f(0)| + \sup_{a \in \mathbb{D}} \left( \log \frac{2}{1-|a|} \right)^\alpha \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta})\, d\theta < \infty, $$

where $P_a(e^{i\theta}) = (1-|a|^2)/|1-\bar{a}e^{i\theta}|^2$ is the Poisson kernel.

We shall write $\text{VMOA}_{\log,\alpha}$ for the subspace of $H^1$ consisting of those functions $f$ such that

$$ \lim_{|a| \to 1^-} \left( \log \frac{2}{1 - |a|} \right)^\alpha \frac{1}{2\pi} \int_0^{2\pi} |f(e^{i\theta}) - f(a)| P_a(e^{i\theta})\, d\theta = 0. $$

If $\alpha = 0$, we obtain the classical space BMOA [VMOA] of $H^1$-functions with bounded [vanishing] mean oscillation. For simplicity, we shall write $\text{BMOA}_{\log}$ [$\text{VMOA}_{\log}$] for the space $\text{BMOA}_{\log,1}$ [$\text{VMOA}_{\log,1}$].

We shall also use Fefferman's result (see [Gi]) that $(H^1)^* \cong \text{BMOA}$ and $(\text{VMOA})^* \cong H^1$ under the Cauchy pairing

$$ (2.3) \qquad \langle f, g \rangle_{H^2} = \lim_{r \to 1^-} \frac{1}{2\pi} \int_0^{2\pi} f(re^{i\theta}) \overline{g(e^{i\theta})}\, d\theta, $$

$f \in H^1$, $g \in \text{BMOA}$ (resp. $\text{VMOA}$).

*Proof of Proposition 1.1.* (i) Let $f(z) = \sum_{n \ge 0} a_n z^n \in H^1$ and assume that $\mu$ is a classical Carleson measure. This is equivalent to (see [Pe, p. 42]) $\sup_{n \in \mathbb{N}} (n+1)\mu_n < \infty$. This fact together with Hardy's inequality (see [D, p. 48]) implies that

$$ \sum_{k=0}^{\infty} \mu_{n,k} |a_k| \le C \sum_{k=0}^{\infty} \frac{|a_k|}{n+k+1} \le C \|f\|_{H^1}, \quad n \in \mathbb{N}, $$

so $\mathcal{H}_\mu(f) \in \operatorname{Hol}(\mathbb{D})$. The above inequalities also justify that

$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t)\, d\mu(t), \quad n \in \mathbb{N}. $$

Then

$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t)\, d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-tz}\, d\mu(t), \quad z \in \mathbb{D}. $$

The last equality holds since $\mu$ is a classical Carleson measure, and therefore

$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)|\, d\mu(t) \right) |z|^n \le C \|f\|_{H^1} \frac{1}{1-|z|}. $$

(ii) Assume that the integral in (1.2) converges for any choice of $f \in H^1$ and $z \in \mathbb{D}$. Fixing $f \in H^1$ and choosing $z=0$, this means that $\int_{[0,1)} |f(t)|\, d\mu(t) < \infty$. For $\beta \in [0, 1)$ define $T_\beta : H^1 \to L^1(d\mu)$ by $T_\beta(f) = f \cdot \chi_{[0,\beta]}$. Then for each $f \in H^1$ there is $C_f > 0$ such that

$$ \|T_\beta(f)\|_{L^1(d\mu)} = \int_{[0,\beta]} |f(t)|\, d\mu(t) \le \int_{[0,1)} |f(t)|\, d\mu(t) \le C_f $$

for any $\beta \in [0, 1)$, which together with the uniform boundedness principle gives $\sup_{\beta \in [0,1)} \|T_\beta\| < \infty$; that is, the identity operator from $H^1$ to $L^1(d\mu)$ is bounded, and thus by Carleson's theorem (see [D, Theorem 9.3]) $\mu$ is a classical Carleson measure. $\blacksquare$

Now we are ready to prove our main result in this section.

*Proof of Theorem 1.2.*

*Proof of (i): Boundedness.* The duality relation $(\text{VMOA})^* \cong H^1$, Proposition 1.1, the Cauchy integral representation for functions in $H^1$ (see [D, Theorem 3.9]) and Fubini's theorem imply that

$$ (2.4) \qquad \mathcal{H}_{\mu}: H^{1} \rightarrow H^{1} \text{ is bounded} $$

$$
\begin{align*}
&\Leftrightarrow \lim_{r \to 1^{-}} \left| \frac{1}{2\pi} \int_0^{2\pi} \left( \int_0^1 \frac{f(t)}{1 - tre^{i\theta}}\, d\mu(t) \right) \overline{g(e^{i\theta})}\, d\theta \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} \\
&\Leftrightarrow \lim_{r \to 1^{-}} \left| \int_0^1 f(t) \overline{g(rt)}\, d\mu(t) \right| \le C \|f\|_{H^1} \|g\|_{\text{BMOA}},
\end{align*}
$$

for all $f \in H^1$ and $g \in \text{VMOA}$.

Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is bounded and select the families of test functions

$$ (2.5) \qquad g_a(z) = \log \frac{2}{1-az}, \quad f_b(z) = \frac{1-b^2}{(1-bz)^2}, \quad a,b \in [0,1). $$

A calculation shows that $\{g_a\} \subset \text{VMOA}$ and $\{f_b\} \subset H^1$ with

$$ (2.6) \qquad \sup_{a \in [0,1)} \|g_a\|_{\text{BMOA}} < \infty \quad \text{and} \quad \sup_{b \in [0,1)} \|f_b\|_{H^1} < \infty. $$
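For the functions $f_b$ the $H^1$-bound in (2.6) is in fact an identity: on the circle, $|f_b(e^{i\theta})| = (1-b^2)/|1-be^{i\theta}|^2$ is exactly the Poisson kernel at $b$, whose mean over $\mathbb{T}$ equals $1$, so $\|f_b\|_{H^1} = 1$ for every $b$. A short numerical sketch (our illustration, not part of the paper) confirming this:

```python
import math

def h1_norm_fb(b, m=4096):
    # ‖f_b‖_{H^1} = (1/2π) ∫₀^{2π} |f_b(e^{iθ})| dθ with
    # |f_b(e^{iθ})| = (1-b²)/|1-b e^{iθ}|² = (1-b²)/(1 - 2b cosθ + b²),
    # the Poisson kernel at b; approximated by a midpoint Riemann sum.
    s = 0.0
    for j in range(m):
        th = 2 * math.pi * (j + 0.5) / m
        s += (1 - b * b) / (1 - 2 * b * math.cos(th) + b * b)
    return s / m

for b in (0.0, 0.5, 0.9):
    assert abs(h1_norm_fb(b) - 1.0) < 1e-9
```

The midpoint rule converges extremely fast here because the integrand is smooth and periodic, so a few thousand sample points already give machine-precision agreement.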

Next, taking $a=b \in [0,1)$ and $r \in [a, 1)$ we obtain

$$ \begin{aligned} \left|\int_0^1 f_a(t) \overline{g_a(rt)}\, d\mu(t)\right| &\ge \int_a^1 \frac{1-a^2}{(1-at)^2} \log \frac{2}{1-rat}\, d\mu(t) \\ &\ge C\, \frac{\log \frac{2}{1-a^2}}{1-a^2}\, \mu([a, 1)), \end{aligned} $$

which, bearing in mind (2.4) and (2.6), implies that $\mu$ is a 1-logarithmic 1-Carleson measure.

Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then by Lemma A,

$$ (2.7) \qquad K_\mu := \sup_{a \in \mathbb{D}} \log \frac{2}{1-|a|^2} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2}\, d\mu(z) < \infty. $$

Let us see that $\mathcal{H}_\mu$ is bounded on $H^1$. By (2.4), it is enough to prove that

$$ (2.8) \qquad \lim_{r \to 1^-} \int_0^1 |f(t)|\, |g(rt)|\, d\mu(t) \le C \|f\|_{H^1} \|g\|_{\text{BMOA}} $$

for all $f \in H^1$ and $g \in \text{VMOA}$,

which together with [D, Theorem 9.3] and Lemma A is equivalent to

$$ (2.9) \qquad \lim_{r \to 1^-} \sup_{a \in \mathbb{D}} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)|\, d\mu(z) \le C \|g\|_{\text{BMOA}} \quad \text{for all } g \in \text{VMOA}. $$

On the other hand, for each $r \in (0,1)$, $a \in \mathbb{D}$ and $g \in \text{VMOA}$,

$$ (2.10) \qquad \begin{aligned} &\int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)|\, d\mu(z) \\ &\le |g(ra)| \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2}\, d\mu(z) + \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2} |g(rz)-g(ra)|\, d\mu(z) \\ &= I_1(r,a) + I_2(r,a). \end{aligned} $$

Bearing in mind that any function $g$ in the Bloch space $\mathcal{B}$ (see [ACP]) satisfies the growth estimate

$$|g(z)| \le 2 \|g\|_{\mathcal{B}} \log \frac{2}{1 - |z|} \quad \text{for all } z \in \mathbb{D}$$

and that $\text{BMOA} \subset \mathcal{B}$ (see [Gi, Theorem 5.1]), by (2.7) we have

$$
\begin{align*}
(2.11) \quad I_1(r, a) &\le C \|g\|_{\text{BMOA}} \log \frac{2}{1-|a|} \int_{\mathbb{D}} \frac{1-|a|^2}{|1-\bar{a}z|^2}\, d\mu(z) \\
&\le CK_\mu \|g\|_{\text{BMOA}} < \infty \quad \text{for all } r \in (0,1) \text{ and } a \in \mathbb{D}.
\end{align*}
$$

Next, combining (2.7), [D, Theorem 9.3], (2.2) and the fact that BMOA is closed under subordination (see [Gi, Theorem 10.3]), we deduce that

$$
\begin{align*}
I_2(r, a) &\le CK_\mu \frac{1}{2\pi} \int_0^{2\pi} \frac{1-|a|^2}{|1-\bar{a}e^{i\theta}|^2} |g(re^{i\theta}) - g(ra)|\, d\theta \\
&\le CK_\mu \|g_r\|_{\text{BMOA}} \\
&\le CK_\mu \|g\|_{\text{BMOA}} \quad \text{for all } r \in (0,1),\ a \in \mathbb{D} \text{ and } g \in \text{VMOA},
\end{align*}
$$

which together with (2.10) and (2.11) implies (2.9).

*Proof of (ii): Compactness.* Suppose that $\mathcal{H}_\mu : H^1 \to H^1$ is compact. Let $\{f_b\}$ be the family of functions defined in (2.5) and let $\{b_n\}$ be a sequence of points of $(0,1)$ such that $\lim_{n\to\infty} b_n = 1$. Since $\{f_{b_n}\}$ is a bounded sequence in $H^1$, there is a subsequence $\{b_{n_k}\}$ and $g \in H^1$ such that $\lim_{k\to\infty} \| \mathcal{H}_\mu(f_{b_{n_k}}) - g \|_{H^1} = 0$. Now, since $\{f_{b_{n_k}}\}$ converges to $0$ uniformly on compact subsets of $\mathbb{D}$ and $\mu$ is a 1-logarithmic 1-Carleson measure, $\{\mathcal{H}_\mu(f_{b_{n_k}})\}$ converges to $0$ uniformly on compact subsets of $\mathbb{D}$, which implies that $g=0$. Thus, combining $\lim_{k\to\infty} \|\mathcal{H}_\mu(f_{b_{n_k}})\|_{H^1} = 0$ with the inequality (valid for all $g \in \text{VMOA}$)

$$
\lim_{r \to 1^{-}} \left| \int_{0}^{1} f_{b_{n_k}}(t) \overline{g(rt)}\, d\mu(t) \right| \le C \| \mathcal{H}_{\mu}(f_{b_{n_k}}) \|_{H^1} \| g \|_{\text{BMOA}},
$$

and the reasoning used in the boundedness case, we deduce that

$$
\lim_{k \to \infty} \frac{\mu([b_{n_k}, 1)) \log \frac{2}{1-b_{n_k}}}{1-b_{n_k}} = 0.
$$

Consequently, $\mu$ is a vanishing 1-logarithmic 1-Carleson measure.

Conversely, assume that $\mu$ is a vanishing 1-logarithmic 1-Carleson measure. The proof of sufficiency for boundedness yields

$$
(2.12) \qquad \int_0^1 |f(t)|\, |g(t)|\, d\mu(t) \le CK_\mu \|f\|_{H^1} \|g\|_{\text{BMOA}}
$$

for all $f \in H^1$ and $g \in \text{VMOA}$.

So it suffices to prove that for any sequence $\{f_n\}$ with $\sup_{n \in \mathbb{N}} \|f_n\|_{H^1} < \infty$ and $\lim_{n \to \infty} f_n = 0$ uniformly on compact subsets of $\mathbb{D}$,

$$ (2.13) \qquad \lim_{n \to \infty} \int_0^1 |f_n(t)|\, |g(t)|\, d\mu(t) = 0 \quad \text{for all } g \in \text{VMOA}. $$

Let us write $d\mu_r = \chi_{\{r<|z|<1\}}\, d\mu$. Since $\mu$ is a vanishing 1-logarithmic 1-Carleson measure, $\lim_{r \to 1^-} K_{\mu_r} = 0$. This, together with the fact that $\lim_{n \to \infty} f_n = 0$ uniformly on compact subsets of $\mathbb{D}$ and (2.12), shows (by a standard argument) that $\mathcal{H}_\mu$ is compact on $H^1$. $\blacksquare$

In order to present a second proof of Theorem 1.2, some definitions and known results are needed. Given $g(\xi) \sim \sum_{n=-\infty}^{\infty} \hat{g}(n)\xi^n \in L^2(\mathbb{T})$, the associated Hankel operator (see [Pe] or [PV]) is formally defined as

$$ H_g(f) = P(gJf), $$

where $P$ is the Riesz projection and

$$ Jf(\xi) = \bar{\xi}f(\bar{\xi}) = \sum_{n=-\infty}^{\infty} \hat{f}(-n-1)\xi^n, \quad \xi \in \mathbb{T}. $$

Moreover, if $\mu$ is a classical Carleson measure, Nehari's theorem (see [Pe, p. 3] or [D, Theorem 6.8]) implies that there is $g_\mu \in L^\infty(\mathbb{T})$ with $\mu_n = \hat{g}_\mu(n+1)$, so

$$ \mathcal{H}_\mu(f)(z) = \overline{H_{g_\mu}(f)(\bar{z})}, $$

and consequently $\mathcal{H}_\mu$ is bounded on $H^1$ if and only if $H_{g_\mu}$ is bounded on $H^1$. On the other hand,

$$
\begin{align*}
P_1(g_\mu)(z) &:= P(g_\mu)(z) - \hat{g}_\mu(0) = \sum_{n=1}^{\infty} \hat{g}_\mu(n)z^n = \sum_{n=0}^{\infty} \hat{g}_\mu(n+1)z^{n+1} \\
&= \sum_{n=0}^{\infty} \mu_n z^{n+1} = zh_\mu(z).
\end{align*}
$$

Thus, joining [PV, Theorems 1.6 and 1.7] (see also [CS], [JPS] and [T]), we have the next result.

**THEOREM A.** Suppose that $\mu$ is a classical Carleson measure on $[0, 1)$.

(i) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded if and only if $h_{\mu} \in \text{BMOA}_{\log}$.

(ii) $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is compact if and only if $h_{\mu} \in \text{VMOA}_{\log}$.

*Second proof of Theorem 1.2.*

*Proof of (i): Boundedness.* If $\mathcal{H}_{\mu}: H^{1} \rightarrow H^{1}$ is bounded, then by Theorem A the function $h_{\mu}$ is in $\text{BMOA}_{\log}$. For any $a \in (0, 1)$ we deduce that

$$
\begin{equation} \tag{2.14}
\begin{aligned}
& \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-a^2}{|1-ae^{i\theta}|^2}\, d\theta \\
&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{t\, d\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\
&\ge \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \operatorname{Re} \left( \int_0^1 \frac{t\, d\mu(t)}{(1-te^{i\theta})(1-ta)} \right) d\theta \\
&= \frac{1}{2\pi} \int_0^{2\pi} \frac{1-a^2}{|1-ae^{i\theta}|} \int_0^1 \frac{t(1-t\cos\theta)}{|1-te^{i\theta}|^2(1-ta)}\, d\mu(t)\, d\theta \\
&= \int_0^1 \frac{t(1-a^2)}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos\theta}{|1-te^{i\theta}|^2|1-ae^{i\theta}|}\, d\theta \right) d\mu(t) \\
&\ge \frac{1}{2} \int_0^1 \frac{t(1-a^2)^2}{1-ta} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1-t\cos\theta}{|1-te^{i\theta}|^2|1-ae^{i\theta}|^2}\, d\theta \right) d\mu(t).
\end{aligned}
\end{equation}
$$

Assume, for the moment, that

$$
(2.15) \qquad \frac{1}{2\pi} \int_{0}^{2\pi} \frac{1 - t \cos\theta}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2}\, d\theta = \frac{1}{(1 - at)(1 - a^2)}
$$

for any $a, t \in [0, 1)$. This together with (2.14) yields

$$
\sup_{a \in [0,1)} \log \frac{2}{1-a} \int_0^1 \frac{t(1-a^2)}{(1-ta)^2}\, d\mu(t) \le C \|h_\mu\|_{\text{BMOA}_{\log}} < \infty,
$$

so $\mu$ is a 1-logarithmic 1-Carleson measure.

Now we prove (2.15). We assume that $a \neq t$ (if $a = t$, a similar calculation also gives (2.15)), and we write

$$
F(z) = \frac{z - \frac{t}{2}(z^2 + 1)}{(z - t)(1 - tz)(z - a)(1 - az)}.
$$

Using the residue theorem we see that

$$
\begin{align*}
& \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - t \cos\theta}{|1 - te^{i\theta}|^2 |1 - ae^{i\theta}|^2}\, d\theta \\
&= \operatorname{Res}(F, t) + \operatorname{Res}(F, a) \\
&= \frac{t/2}{(t-a)(1-at)} - \frac{a - \frac{t}{2}(a^2 + 1)}{(t-a)(1-at)(1-a^2)} \\
&= \frac{1}{(1-at)(1-a^2)},
\end{align*}
$$

which proves (2.15).
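The closed form (2.15) can also be checked against a direct quadrature of the integral; the following Python sketch (our illustration, names are ours) compares a midpoint Riemann sum with the right-hand side for a few values of $a$ and $t$, using $|1 - s e^{i\theta}|^2 = 1 - 2s\cos\theta + s^2$.

```python
import math

def lhs(a, t, m=4096):
    # (1/2π) ∫₀^{2π} (1 - t cosθ) / (|1 - t e^{iθ}|² |1 - a e^{iθ}|²) dθ,
    # approximated by the midpoint rule; the integrand is smooth and
    # 2π-periodic, so convergence is very fast.
    s = 0.0
    for j in range(m):
        th = 2 * math.pi * (j + 0.5) / m
        c = math.cos(th)
        s += (1 - t * c) / ((1 - 2 * t * c + t * t) * (1 - 2 * a * c + a * a))
    return s / m

def rhs(a, t):
    # the closed form in (2.15)
    return 1.0 / ((1 - a * t) * (1 - a * a))

for a in (0.2, 0.5, 0.8):
    for t in (0.3, 0.6):
        assert abs(lhs(a, t) - rhs(a, t)) < 1e-9
```

This is of course no substitute for the residue computation above; it merely confirms the bookkeeping of signs and factors.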

Conversely, suppose that $\mu$ is a 1-logarithmic 1-Carleson measure. Then $h_\mu$ has finite radial limits a.e. on $\mathbb{T}$; indeed $h_\mu \in H^2$ (see [Pe, p. 42]), and for any $a \in \mathbb{D}$,

$$
\begin{align*}
(2.16) \quad & \frac{1}{2\pi} \int_0^{2\pi} |h_\mu(e^{i\theta}) - h_\mu(a)| \frac{1-|a|^2}{|1-ae^{i\theta}|^2}\, d\theta \\
& = \frac{1}{2\pi} \int_0^{2\pi} \frac{1-|a|^2}{|1-ae^{i\theta}|} \left| \int_0^1 \frac{t\, d\mu(t)}{(1-te^{i\theta})(1-ta)} \right| d\theta \\
& \le \frac{1}{2\pi} \int_0^{2\pi} \frac{1-|a|^2}{|1-ae^{i\theta}|} \int_0^1 \frac{d\mu(t)}{|1-te^{i\theta}|\,|1-ta|}\, d\theta \\
& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}|\,|1-te^{i\theta}|}\, d\mu(t) \\
& \le \frac{1-|a|^2}{2\pi} \int_0^1 \frac{1}{|1-ta|} \left( \int_0^{2\pi} \frac{d\theta}{|1-ae^{i\theta}|^2} \right)^{1/2} \left( \int_0^{2\pi} \frac{d\theta}{|1-te^{i\theta}|^2} \right)^{1/2} d\mu(t) \\
& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{|1-ta|(1-t)^{1/2}}\, d\mu(t) \\
& \le C(1-|a|^2)^{1/2} \int_0^1 \frac{1}{(1-t|a|)(1-t)^{1/2}}\, d\mu(t).
\end{align*}
$$
|
| 375 |
+
|
| 376 |
+
Moreover, using that $\mu$ is a 1-logarithmic 1-Carleson measure and a standard argument (see [G] or [Z]) we conclude that
|
| 377 |
+
|
| 378 |
+
$$
|
| 379 |
+
\sup_{a \in (0,1)} (1-a^2)^{1/2} \int_0^1 \frac{1}{(1-ta)(1-t)^{1/2}} d\mu(t) < \infty,
|
| 380 |
+
$$
|
| 381 |
+
|
| 382 |
+
which together with (2.16) shows that $h_\mu \in \text{BMOA}_{\log}$, thus by Theorem A, $\mathcal{H}_\mu : H^1 \to H^1$ is bounded.
|
| 383 |
+
|
| 384 |
+
The proof of (ii) is analogous, so it will be omitted. $\blacksquare$
|
| 385 |
+
|
| 386 |
+
Proof of Theorem 1.3. We recall that $\mathcal{H}_\mu$ is a Hilbert-Schmidt operator on $H^2$ if and only if $\sum_{k \ge 0} \|H_\mu(e_k)\|_{H^2}^2 < \infty$ for any orthonormal base $\{e_k\}_{k=0}^\infty$. We choose the orthonormal base $e_k(z) = z^k$. For $z = re^{i\theta} \in \mathbb{D}$, we observe that $\int_0^{2\pi} |\mathcal{H}_\mu(e_k)(re^{i\theta})|^2 d\theta = \sum_{n \ge 0} |\mu_{n,k}|^2 r^{2n}$. So
|
| 387 |
+
|
| 388 |
+
$$
|
| 389 |
+
\begin{align*}
|
| 390 |
+
\sum_{k \ge 0} \| \mathcal{H}_\mu(e_k) \|_{H^2}^2 &= \sum_{k \ge 0} \sum_{n \ge 0} |\mu_{n,k}|^2 = \sum_{k \ge 0} \sum_{n \ge 0} \int_{[0,1]} \int_{[0,1]} (ts)^{n+k} d\mu(s) d\mu(t) \\
|
| 391 |
+
&= \int_{[0,1]} \int_{[0,1]} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \approx \int_{[0,1]} \frac{\mu([t,1])}{(1-t)^2} d\mu(t).
|
| 392 |
+
\end{align*}
|
| 393 |
+
$$
|
| 394 |
+
|
| 395 |
+
This finishes the proof. $\blacksquare$
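The series/integral identity behind this computation can be sanity-checked numerically (this is illustrative, not part of the proof): for a finitely supported measure $\mu = \sum_j w_j \delta_{t_j}$ on $[0,1)$, the moments are $\mu_{n,k} = \sum_j w_j t_j^{n+k}$, and $\sum_{n,k} \mu_{n,k}^2$ should equal $\iint (1-ts)^{-2} d\mu(s)\,d\mu(t)$.

```python
import numpy as np

t = np.array([0.1, 0.4, 0.7])   # support points (illustrative)
w = np.array([0.5, 0.3, 0.2])   # masses (illustrative)

N = 400  # truncation; tails decay geometrically since max(t) < 1
n = np.arange(N)
moments = np.array([[np.sum(w * t ** (i + k)) for k in n] for i in n])
lhs = np.sum(moments ** 2)

# ∫∫ (1 - t s)^{-2} dμ(s) dμ(t) for the discrete measure
rhs = np.sum(np.outer(w, w) / (1.0 - np.outer(t, t)) ** 2)

assert abs(lhs - rhs) < 1e-10
```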
---PAGE_BREAK---

Finally, we shall see that although $\mathcal{H}_\mu$ need not be bounded on $H^1$ when $\mu$ is merely a classical Carleson measure, in some sense $\mathcal{H}_\mu$ is close to having this property.

**THEOREM 2.1.** If $\mu$ is a classical Carleson measure supported on $[0, 1)$ and $0 < p < 1$, then $\mathcal{H}_\mu : H^1 \to H^p$ is bounded.

*Proof.* As $\mu$ is a classical Carleson measure,

$$
\begin{aligned}
(2.17) \quad \| \mathcal{H}_\mu(f) \|_{H^p}^p &\le \sup_{0<r<1} \int_{-\pi}^\pi \left( \int_{[0,1)} \frac{|f(t)|}{|1-tre^{i\theta}|} d\mu(t) \right)^p d\theta \\
&\le C(\mu) \|f\|_{H^1}^p \sup_{0<r<1} \int_{-\pi}^\pi \sup_{0<t<1} \frac{1}{|1-tre^{i\theta}|^p} d\theta \quad \text{for any } f \in H^1.
\end{aligned}
$$

On the other hand,

$$ (2.18) \quad \sup_{0<r<1} \sup_{0<t<1} \frac{1}{|1-tre^{i\theta}|^p} \le 1 \quad \text{if } |\theta| \ge \pi/2, $$

and a straightforward calculation shows that for $\theta \in (-\pi/2, \pi/2)$,

$$ \sup_{0<t<1} \frac{1}{|1-tre^{i\theta}|^p} \le \max \left\{ \frac{1}{|1-re^{i\theta}|^p}, \frac{1}{|\sin(\theta)|^p} \right\}, $$

which together with (2.17) and (2.18) finishes the proof. $\blacksquare$
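The pointwise bound used in the last display can be checked numerically on random samples (an illustrative verification only; the inequality follows by minimising the quadratic $|1-se^{i\theta}|^2$ over $s = tr$):

```python
import numpy as np

# For θ ∈ (-π/2, π/2):
#   sup_{0<t<1} 1/|1 - t r e^{iθ}|  ≤  max(1/|1 - r e^{iθ}|, 1/|sin θ|).
rng = np.random.default_rng(0)
for _ in range(200):
    theta = rng.uniform(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3)
    r = rng.uniform(0.0, 1.0)
    t = np.linspace(1e-4, 1.0 - 1e-4, 2000)
    sup_val = np.max(1.0 / np.abs(1.0 - t * r * np.exp(1j * theta)))
    bound = max(1.0 / abs(1.0 - r * np.exp(1j * theta)), 1.0 / abs(np.sin(theta)))
    assert sup_val <= bound + 1e-9
```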
Indeed, the previous result can be improved. We remind the reader that $f \in \operatorname{Hol}(\mathbb{D})$ is a *Cauchy transform* if it admits a representation

$$ f(z) = \int_0^{2\pi} \frac{d\nu(\theta)}{1 - e^{i\theta}z}, \quad z \in \mathbb{D}, $$

where $\nu$ is a finite complex valued Borel measure on $\mathbb{T}$. As usual, $\mathcal{K}$ will denote the space of all Cauchy transforms. It is known (see [CSi]) that $H^1 \subsetneq \mathcal{K} \subsetneq \bigcap_{0<p<1} H^p$, and moreover $\mathcal{K}$ is isometrically isomorphic (under the Cauchy pairing) to the dual space of $\mathcal{A}$, the disk algebra, which consists of all $g \in \operatorname{Hol}(\mathbb{D})$ such that $g$ is continuous on $\overline{\mathbb{D}}$. This allows us to assert that

$$ \|f\|_{\mathcal{K}} = \sup\{|\langle f, g \rangle_{H^2}| : g \in \mathcal{A}, \|g\|_{H^\infty} \le 1\}. $$

**THEOREM 2.2.** If $\mu$ is a classical Carleson measure supported on $[0, 1)$ then $\mathcal{H}_\mu : H^1 \to \mathcal{K}$ is bounded.

*Proof.* Putting together the fact that $\mu$ is a classical Carleson measure, Proposition 1.1, Cauchy's integral representation for functions in $H^1$ and

---PAGE_BREAK---

Fubini's theorem, we deduce that for $f \in H^1$ and $g \in \mathcal{A}$,

$$
\begin{align*}
(2.19) \quad \lim_{r \to 1^{-}} & \left| \frac{1}{2\pi} \int_0^{2\pi} \left( \int_0^1 \frac{f(t)}{1 - t re^{i\theta}} d\mu(t) \right) \overline{g(e^{i\theta})} d\theta \right| \\
&= \lim_{r \to 1^{-}} \left| \int_{0}^{1} f(t) \overline{g(rt)} d\mu(t) \right| \\
&\leq \|g\|_{H^{\infty}} \int_{0}^{1} |f(t)| d\mu(t) \leq C \|f\|_{H^{1}} \|g\|_{H^{\infty}},
\end{align*}
$$

so $\mathcal{H}_\mu : H^1 \to \mathcal{K}$ is bounded. $\blacksquare$
In particular, Theorem 2.2 implies that for any $f \in H^1$, $\mathcal{H}_\mu(f)(e^{i\theta})$ is finite for a.e. $e^{i\theta}$ on $\mathbb{T}$. Indeed, a little more can be said.

**PROPOSITION 2.3.** If $\mu$ is a classical Carleson measure supported on $[0,1)$ then the operator $\mathcal{H}_\mu$ is of weak type $(1,1)$ on Hardy spaces. That is, there is a positive constant $C$ such that

$$ |\{e^{i\theta} \in \mathbb{T} : |\mathcal{H}_{\mu}(f)(e^{i\theta})| \ge \lambda\}| \le \frac{C}{\lambda} \|f\|_{H^1} \quad \text{for all } f \in H^1 \text{ and } \lambda > 0. $$

*Proof.* Using that $\mu$ is a classical Carleson measure and Nehari's theorem (see [Pe, p. 3] or [D, Theorem 6.8]) we deduce that there is $g \in L^\infty(\mathbb{T})$ such that

$$ \mu_n = \frac{1}{2\pi} \int_0^{2\pi} e^{-int} g(t) dt =: \hat{g}(n), \quad n = 0, 1, 2, \dots $$

Then, by [DJV, Theorem 1],

$$ \mathcal{H}_{\mu}(f) = PM_g T(f) \quad \text{for all } f \in \bigcup_{p>1} H^p, $$

where $P$ is the Riesz projection, $Tf(e^{it}) = f(e^{-it})$ and $M_g$ is the operator of multiplication by $g$. Thus, using standard techniques and well-known results we deduce that $\mathcal{H}_{\mu}$ is of weak type $(1,1)$ on Hardy spaces. $\blacksquare$

**3. The Hankel matrix $\mathcal{H}_{\mu}$ acting on $A^2$.** We recall that the Bergman projection $Pf(z) = \int_{\mathbb{D}} f(w) \overline{K_z(w)} dA(w)$ is bounded from $L^2(dA)$ onto $A^2$ (see [Zh]), where $K_z(w) = (1 - \bar{z}w)^{-2}$ is the Bergman reproducing kernel of $A^2$. It follows that any $f \in A^2$ can be represented by its Bergman projection, and moreover $(A^2)^* \cong A^2$ under the pairing $\langle f, g \rangle_{A^2} = \int_{\mathbb{D}} f(z) \overline{g(z)} dA(z)$.
*Proof of Proposition 1.4.* (i) Fix $n \in \mathbb{N}$. If $f(z) = \sum_{k=0}^{\infty} a_k z^k \in A^2$, then by the Cauchy–Schwarz inequality,

---PAGE_BREAK---

$$ (3.1) \quad \left| \sum_{k \ge 0} \mu_{n,k} a_k \right| \le \sum_{k \ge 0} \mu_{n,k} |a_k| \le \left\{ \sum_{k \ge 0} (k+1) \mu_{n,k}^2 \right\}^{1/2} \|f\|_{A^2}. $$

But

$$
\begin{align*}
(3.2) \quad \sum_{k \ge 0} (k+1)\mu_{n,k}^2 &= \int_{[0,1)} \int_{[0,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \\
&\le 2 \int_{[0,1)} \int_{[t,1)} \frac{(ts)^n}{(1-ts)^2} d\mu(s) d\mu(t) \le 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
\end{align*}
$$

Thus, if $\mu$ satisfies (1.4), the power series (1.1) is well defined and represents an analytic function in $\mathbb{D}$. Under (1.4) we can also write

$$ \sum_{k \ge 0} \mu_{n,k} a_k = \int_{[0,1)} t^n f(t) d\mu(t). $$

So, for $z \in \mathbb{D}$,

$$ \mathcal{H}_{\mu}(f)(z) = \sum_{n \ge 0} \left( \int_{[0,1)} t^n f(t) d\mu(t) \right) z^n = \int_{[0,1)} \frac{f(t)}{1-zt} d\mu(t). $$

The last equality is true since

$$ \sum_{n \ge 0} \left( \int_{[0,1)} t^n |f(t)| d\mu(t) \right) |z|^n \le \left\{ 2 \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t) \right\}^{1/2} \|f\|_{A^2} \frac{1}{1-|z|}. $$

(ii) Take $f \in A^2$. Assume that the integral in (1.5) converges for each $z \in \mathbb{D}$. We choose $z = 0$. So, there is $C > 0$ such that

$$ (3.3) \quad \left| \int_{[0,\beta)} f(t) d\mu(t) \right| \le \int_{[0,\beta)} |f(t)| d\mu(t) \le \int_{[0,1)} |f(t)| d\mu(t) \le C $$

for all $\beta \in (0, 1)$.
On the other hand, the integral representation of $f \in A^2$ through the Bergman projection, and Fubini's theorem, imply that

$$
\begin{align*}
\int_{[0,\beta)} f(t) d\mu(t) &= \int_{[0,\beta)} \int_{\mathbb{D}} \frac{f(w)}{(1-\bar{w}t)^2} dA(w) d\mu(t) \\
&= \int_{\mathbb{D}} f(w) \overline{\int_{[0,\beta)} \frac{d\mu(t)}{(1-wt)^2}} \, dA(w) = \langle f, g_\beta \rangle_{A^2},
\end{align*}
$$

where $g_\beta(w) = \int_{[0,\beta)} \frac{d\mu(t)}{(1-wt)^2} \in A^2$ for every $\beta$. Then, combining (3.3), the fact that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$, and the uniform

---PAGE_BREAK---

boundedness principle, we conclude that there is $C > 0$ with $\sup_{\beta} \|g_{\beta}\|_{A^2}^2 \le C$. Thus, using that $\|g_{\beta}\|_{A^2}^2 = \int_{[0,\beta)} \int_{[0,\beta)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t)$ and letting $\beta \to 1^-$, we get

$$
C \geq \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} d\mu(s) d\mu(t) \geq \frac{1}{4} \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} d\mu(t).
$$

So condition (1.4) is true. $\blacksquare$
*Proof of Theorem 1.5.* It is known that $(A^2)^* \cong \mathcal{D}$ and $\mathcal{D}^* \cong A^2$ under the Cauchy pairing $\langle f, g \rangle_{H^2} = \sum_{n \ge 0} a_n \bar{b}_n$, where $f(z) = \sum_n a_n z^n \in A^2$ and $g(z) = \sum_n b_n z^n \in \mathcal{D}$. We observe that, under this relation, $\mathcal{H}_\mu$ is self-adjoint. Therefore, $\mathcal{H}_\mu$ is bounded on $A^2$ if and only if it is bounded on $\mathcal{D}$.

If $f, g \in \mathcal{D}$ we shall write $f_1(z) = \sum_n |a_n| z^n$ and $g_1(z) = \sum_n |b_n| z^n$, so that $\|f\|_\mathcal{D} = \|f_1\|_\mathcal{D}$ and $\|g\|_\mathcal{D} = \|g_1\|_\mathcal{D}$. Then

$$
\begin{align*}
& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| + \mu_0 |a_0| |b_0| + |b_0| \sum_{k=0}^{\infty} \mu_{k+1} |a_{k+1}| \\
& \leq \sum_{n \geq 0} \mu_{n+1} \left( \sum_{k=0}^{n} (k+1) |b_{k+1}| |a_{n-k}| \right) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z) \\
& \leq \int_{\mathbb{D}} f_1(z) g'_1(z) \overline{h'_{\mu}(z)} dA(z) + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
& \phantom{\leq} + \|g\|_{\mathcal{D}} \int_{\mathbb{D}} \left( \frac{f_1(z) - f_1(0)}{z} \right) \overline{h'_{\mu}(z)} dA(z).
\end{align*}
$$

So, if $|h'_\mu(z)|^2 dA(z)$ is a Dirichlet Carleson measure, we get

$$
\begin{align*}
& |\langle \mathcal{H}_{\mu}(f), g \rangle_{\mathcal{D}}| \\
&\leq \left\{ \int_{\mathbb{D}} |f_1(z)|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} + \mu_0 \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}} \\
&\quad + \left\{ \int_{\mathbb{D}} \left| \frac{f_1(z) - f_1(0)}{z} \right|^2 |h'_{\mu}(z)|^2 dA(z) \right\}^{1/2} \left\{ \int_{\mathbb{D}} |g'_1(z)|^2 dA(z) \right\}^{1/2} \\
&\leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}},
\end{align*}
$$

and consequently $\mathcal{H}_\mu$ is bounded.
---PAGE_BREAK---

Conversely, assume that $\mathcal{H}_\mu$ is bounded on $\mathcal{D}$. Then

$$
\begin{align*}
& \left| \int_{\mathbb{D}} f(z) g'(z) \overline{h'_\mu(z)} dA(z) \right| \\
& \leq \int_0^1 \sum_{n \geq 0} (n+1) \mu_{n+1} \left( \sum_{k=0}^n (k+1) |b_{k+1}| |a_{n-k}| \right) r^{n+1} dr \\
& \leq \sum_{n \geq 0} (n+1) \left( \sum_{k \geq 0} \mu_{n+1,k} |a_k| \right) |b_{n+1}| \\
& \leq |\langle \mathcal{H}_\mu(f_1), g_1 \rangle_\mathcal{D}| \leq C \|f\|_\mathcal{D} \|g\|_\mathcal{D}.
\end{align*}
$$

So (exchanging also the roles of $f$ and $g$) we have

$$
\left| \int_{\mathbb{D}} (fg)'(z) \overline{h'_{\mu}(z)} dA(z) \right| \leq C \|f\|_{\mathcal{D}} \|g\|_{\mathcal{D}}
$$

for every $f, g \in \mathcal{D}$. Finally, Theorem 1 of [ARSW] (see also [Wu]) implies that $|h'_{\mu}(z)|^2 dA(z)$ is a Dirichlet Carleson measure. $\blacksquare$
**REMARK 3.1.** We recall that [ARS, Theorem 1] says that a positive Borel measure $\nu$ in $\mathbb{D}$ is a Dirichlet Carleson measure if and only if there is a positive constant $C$ such that for all $a \in \mathbb{D}$,

$$
(3.4) \quad \int_{\tilde{S}(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
$$

where

$$
\tilde{S}(a) = \left\{ z \in \mathbb{D} : 1 - |z| \le 2(1 - |a|), \left| \frac{\arg(a\bar{z})}{2\pi} \right| \le \frac{1 - |a|}{2} \right\}.
$$

We note that if $\nu$ is finite, (3.4) is equivalent to the simpler condition

$$
(3.5) \quad \int_{S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1-|z|^2)^2} \le C\nu(S(a)),
$$

because in this case

$$
\begin{align*}
& \int_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 \frac{dA(z)}{(1 - |z|^2)^2} \\
&\le C(1 - |a|)^{-2} \int_{\tilde{S}(a) \setminus S(a)} (\nu(S(z) \cap S(a)))^2 dA(z) \\
&\le C(1 - |a|)^{-2}\nu(S(a))^2 \int_{\tilde{S}(a) \setminus S(a)} dA(z) \le C\nu(S(a)).
\end{align*}
$$

Consequently, combining Proposition 1.4 and Theorem 1.5, if $\mu$ is a finite positive Borel measure on $[0,1)$ that satisfies (1.4), then $\mathcal{H}_{\mu}$ is bounded on $A^2$ if and only if the measure $\nu = |h'_{\mu}(z)|^2 dA(z)$ satisfies (3.5) for all $a \in \mathbb{D}$.
---PAGE_BREAK---

*Proof of Theorem 1.6.* Take the orthonormal basis $\{e_k\}_{k \ge 0}$ of $A^2$ given by $e_k(z) = (k+1)^{1/2} z^k$ and observe that

$$
\begin{align*}
(3.6) \quad \sum_{k=0}^{\infty} \| \mathcal{H}_{\mu}(e_k) \|_{A^2}^2 &= \sum_{k=0}^{\infty} (k+1) \sum_{n=0}^{\infty} (n+1)^{-1} \mu_{n,k}^2 \\
&= \int_{[0,1)} \int_{[0,1)} \frac{1}{(1-ts)^2} \, \frac{1}{ts} \log \frac{1}{1-ts} \, d\mu(t) \, d\mu(s) \\
&\asymp \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \log \frac{1}{1-t} \, d\mu(t).
\end{align*}
$$

So the operator is Hilbert–Schmidt if and only if (1.6) holds. $\blacksquare$
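The middle equality in (3.6) can be checked numerically for a finitely supported measure (an illustrative verification, not part of the proof): with $\mu = \sum_j w_j \delta_{t_j}$, one has $\sum_n (ts)^n/(n+1) = \frac{1}{ts}\log\frac{1}{1-ts}$ and $\sum_k (k+1)(ts)^k = (1-ts)^{-2}$.

```python
import numpy as np

t = np.array([0.2, 0.5, 0.8])   # support points (illustrative)
w = np.array([0.4, 0.4, 0.2])   # masses (illustrative)

N = 600
idx = np.arange(N)
mu_nk = np.array([[np.sum(w * t ** (n + k)) for k in idx] for n in idx])
# Σ_k (k+1) Σ_n μ_{n,k}² / (n+1); rows index n, columns index k
lhs = np.sum((idx[None, :] + 1) * mu_nk ** 2 / (idx[:, None] + 1))

P = np.outer(t, t)
W = np.outer(w, w)
rhs = np.sum(W / (1.0 - P) ** 2 / P * np.log(1.0 / (1.0 - P)))

assert abs(lhs - rhs) < 1e-6
```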
Finally we shall prove Proposition 1.7.

*Proof of Proposition 1.7.* We claim that if $\mathcal{H}_\mu$ is bounded on $A^2$ then

$$
(3.7) \quad \sup_{a \in (0,1)} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_{[0,1)} \frac{\mu([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu(t) < \infty.
$$

Assume (3.7) for the moment. Let $\beta \in [0,1)$, $\alpha \in ((1+\beta)/2, 1)$ and consider the measure $d\mu_\alpha(t) = (\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}dt$. Using that $\mu_\alpha([t,1)) \asymp (1-t)(\frac{1}{t}\log\frac{1}{1-t})^{-\alpha}$, we deduce

$$
\int_0^1 \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^\beta d\mu_\alpha(t) \asymp \int_0^1 \frac{1}{1-t} \left( \frac{1}{t} \log \frac{1}{1-t} \right)^{\beta-2\alpha} dt < \infty
$$

and

$$
\begin{align*}
& \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t,1))}{(1-t)^2} \left(\frac{1}{at} \log \frac{1}{1-at}\right)^2 d\mu_\alpha(t) \\
&\ge C \left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{-1} \int_{[0,a]} \frac{1}{1-t} \left(\frac{1}{t} \log \frac{1}{1-t}\right)^{-2\alpha} \left(\frac{1}{t^2} \log \frac{1}{1-t^2}\right)^2 dt \\
&\ge C \left(\log \frac{1}{1-a}\right)^{2-2\alpha},
\end{align*}
$$

which in particular implies that

$$
\lim_{a \to 1^-} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_{[0,1)} \frac{\mu_\alpha([t, 1))}{(1-t)^2} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 d\mu_\alpha(t) = \infty.
$$

So $\mu_\alpha$ does not satisfy (3.7), and thus $\mathcal{H}_{\mu_\alpha}$ is not bounded on $A^2$.
---PAGE_BREAK---

In order to prove (3.7), using that $(A^2)^* \cong A^2$ under the pairing $\langle \cdot, \cdot \rangle_{A^2}$, we obtain

$$
(3.8) \quad \mathcal{H}_\mu : A^2 \to A^2 \text{ is bounded}
\quad \Leftrightarrow \quad
\left| \int_{\mathbb{D}} \left( \int_{[0,1)} \frac{f(t)}{1-tz} d\mu(t) \right) \overline{g(z)} dA(z) \right| \le C \|f\|_{A^2} \|g\|_{A^2} \text{ for all } f, g \in A^2.
$$

Set $g_a(z) = \frac{1}{1-az}$, $a \in (0,1)$. Then $\|g_a\|_{A^2}^2 = \frac{1}{a^2} \log \frac{1}{1-a^2}$ and

$$
\begin{align*}
\int_{\mathbb{D}} \frac{g_a(z)}{1-t\bar{z}} dA(z) &= \int_{\mathbb{D}} \left(\sum_{n=0}^\infty (az)^n\right) \left(\sum_{n=0}^\infty (t\bar{z})^n\right) dA(z) \\
&= \frac{1}{at} \log \frac{1}{1-at}, \quad a, t \in (0,1).
\end{align*}
$$
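The norm identity for the kernel $g_a$ can be verified numerically via the coefficient formula $\|\sum_n c_n z^n\|_{A^2}^2 = \sum_n |c_n|^2/(n+1)$ (with respect to normalised area measure); this is an illustrative check only.

```python
import numpy as np

# ||1/(1-az)||_{A²}² = Σ a^{2n}/(n+1) = (1/a²) log(1/(1-a²))
def norm_sq(a, N=20000):
    n = np.arange(N)
    return np.sum(a ** (2 * n) / (n + 1))

for a in [0.3, 0.6, 0.9]:
    assert abs(norm_sq(a) - np.log(1.0 / (1.0 - a * a)) / (a * a)) < 1e-8
```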
Then, by (3.8) (with $g = g_a$) and Fubini's theorem, we get

$$
(3.9) \quad \sup_{a \in (0,1)} \left| \int_0^1 f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2,
$$

where

$$
d\mu_a(t) = \frac{\frac{1}{at} \log \frac{1}{1-at}}{\left(\frac{1}{a^2} \log \frac{1}{1-a^2}\right)^{1/2}} d\mu(t).
$$

So, there is $C > 0$ such that

$$
(3.10) \quad \sup_{a, \beta \in (0,1)} \left| \int_0^\beta f(t) d\mu_a(t) \right| \le C \|f\|_{A^2} \quad \text{for all } f \in A^2.
$$

Next, arguing as in the proof of Proposition 1.4, we obtain

$$
(3.11) \quad \sup_{a, \beta \in (0,1)} \left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2} < \infty,
$$

which together with the fact that

$$
\begin{align*}
\left\| \int_0^\beta \frac{d\mu_a(t)}{(1-wt)^2} \right\|_{A^2}^2 &= \sum_{n=0}^\infty (n+1) \left[ \int_0^\beta t^n d\mu_a(t) \right]^2 \\
&\geq \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \sum_{n=0}^\infty (n+1) \int_0^\beta t^{2n} \left( \frac{1}{at} \log \frac{1}{1-at} \right)^2 \mu([t,\beta]) d\mu(t) \\
&\geq \frac{1}{4} \left( \frac{1}{a^2} \log \frac{1}{1-a^2} \right)^{-1} \int_0^\beta \frac{\left( \frac{1}{at} \log \frac{1}{1-at} \right)^2}{(1-t)^2} \mu([t,\beta]) d\mu(t)
\end{align*}
$$

finishes the proof. $\blacksquare$
---PAGE_BREAK---

**Acknowledgements.** The authors wish to thank Professor A. Aleman for his helpful comments and for interesting discussions on the topic of the paper.

The first author is partially supported by the European Networking Programme “HCAA” of the European Science Foundation. The second author is partially supported by the Ramón y Cajal program of MICINN (Spain). Both authors are supported by grants from “Ministerio de Educación y Ciencia, Spain” (MTM2007-60854) and from “La Junta de Andalucía” (FQM210 and P09-FQM-4468).
References

[ACP] J. M. Anderson, J. Clunie and Ch. Pommerenke, *On Bloch functions and normal functions*, J. Reine Angew. Math. 270 (1974), 12–37.

[ARS] N. Arcozzi, R. Rochberg and E. Sawyer, *Carleson measures for analytic Besov spaces*, Rev. Mat. Iberoamer. 18 (2002), 443–510.

[ARSW] N. Arcozzi, R. Rochberg, E. Sawyer and B. Wick, *Bilinear forms on the Dirichlet space*, Anal. PDE 3 (2010), 21–47.

[C] L. Carleson, *An interpolation problem for bounded analytic functions*, Amer. J. Math. 80 (1958), 921–930.

[CS] J. Cima and D. Stegenga, *Hankel operators on $H^p$*, in: Analysis at Urbana, Vol. 1: Analysis in Function Spaces, London Math. Soc. Lecture Note Ser. 137, Cambridge Univ. Press, Cambridge, 1989, 133–150.

[CSi] J. Cima and A. Siskakis, *Cauchy transforms and Cesàro averaging operators*, Acta Sci. Math. (Szeged) (1999), 505–513.

[Di] E. Diamantopoulos, *Hilbert matrix on Bergman spaces*, Illinois J. Math. 48 (2004), 1067–1078.

[DiS] E. Diamantopoulos and A. Siskakis, *Composition operators and the Hilbert matrix*, Studia Math. 140 (2000), 191–198.

[DJV] M. Dostanić, M. Jevtić and D. Vukotić, *Norm of the Hilbert matrix on Bergman and Hardy spaces and a theorem of Nehari type*, J. Funct. Anal. 254 (2008), 2800–2815.

[D] P. L. Duren, *Theory of $H^p$ Spaces*, Academic Press, New York, 1970. Reprint: Dover, Mineola, NY, 2000.

[DS] P. L. Duren and A. P. Schuster, *Bergman Spaces*, Math. Surveys Monogr. 100, Amer. Math. Soc., Providence, RI, 2004.

[G] J. B. Garnett, *Bounded Analytic Functions*, Academic Press, 1981.

[Gi] D. Girela, *Analytic functions of bounded mean oscillation*, in: Complex Function Spaces, R. Aulaskari (ed.), Univ. Joensuu Dept. Math. Rep. Ser. 4 (2001), 61–171.

[JPS] S. Janson, J. Peetre and S. Semmes, *On the action of Hankel and Toeplitz operators on some function spaces*, Duke Math. J. 51 (1984), 937–958.

[PV] M. Papadimitrakis and J. A. Virtanen, *Hankel and Toeplitz operators on $H^1$: continuity, compactness and Fredholm properties*, Integral Equations Operator Theory 61 (2008), 573–591.

[Pe] V. Peller, *Hankel Operators and Their Applications*, Springer Monogr. Math., Springer, New York, 2003.

---PAGE_BREAK---

[Po] S. C. Power, *Hankel operators on Hilbert space*, Bull. London Math. Soc. 12 (1980), 422–442.

[S] D. Stegenga, *Multipliers of the Dirichlet space*, Illinois J. Math. 24 (1980), 113–139.

[T] V. A. Tolokonnikov, *Hankel and Toeplitz operators in Hardy spaces*, Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 141 (1985), 165–175 (in Russian); English transl.: J. Soviet Math. 37 (1987), 1359–1364.

[W] H. Widom, *Hankel matrices*, Trans. Amer. Math. Soc. 121 (1966), 1–35.

[Wu] Z. Wu, *The dual and second predual of $W_\sigma$*, J. Funct. Anal. 116 (1993), 314–334.

[Z] R. Zhao, *On logarithmic Carleson measures*, Acta Sci. Math. (Szeged) 69 (2003), 605–618.

[Zh] K. Zhu, *Operator Theory in Function Spaces*, 2nd ed., Math. Surveys Monogr. 138, Amer. Math. Soc., Providence, RI, 2007.
Petros Galanopoulos, José Ángel Peláez
Departamento de Análisis Matemático
Universidad de Málaga
Campus de Teatinos, 29071 Málaga, Spain
E-mail: galanopoulos_petros@yahoo.gr
japelaez@uma.es

Received December 9, 2009
Revised version May 26, 2010

(6764)
---PAGE_BREAK---

Supporting information for

“Spatial structure, host heterogeneity and parasite virulence: implications for vaccine-driven evolution”

Y. H. Zurita-Gutiérrez & S. Lion

April 30, 2015

**Appendix S1: Theory**

## S1.1 Spatial invasion fitness

The dynamics of the mutant parasite are given by the following equations

$$
\begin{align*}
\frac{dp_{I'_{N}}}{dt} &= \beta'_{NN}[S_N|I'_N]p_{I'_N} + \beta'_{TN}[S_N|I'_T]p_{I'_T} - (d+\alpha'_{N})p_{I'_N} \\
\frac{dp_{I'_T}}{dt} &= \beta'_{NT}[S_T|I'_N]p_{I'_N} + \beta'_{TT}[S_T|I'_T]p_{I'_T} - (d+\alpha'_{T})p_{I'_T}
\end{align*}
$$

or, in matrix form,

$$
\frac{d}{dt} \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} = \mathbf{M} \begin{pmatrix} p_{I'_{N}} \\ p_{I'_{T}} \end{pmatrix} \quad (S1.1)
$$

where

$$
\mathbf{M} = \begin{pmatrix}
\beta'_{NN}[S_N|I'_N] - (d+\alpha'_N) & \beta'_{TN}[S_N|I'_T] \\
\beta'_{NT}[S_T|I'_N] & \beta'_{TT}[S_T|I'_T] - (d+\alpha'_T)
\end{pmatrix}
$$

We can rewrite $\mathbf{M}$ as $\mathbf{M} = \mathbf{F} - \mathbf{V}$, where

$$
\mathbf{F} = \begin{pmatrix} \beta'_{NN}[S_N | I'_N] & \beta'_{TN}[S_N | I'_T] \\ \beta'_{NT}[S_T | I'_N] & \beta'_{TT}[S_T | I'_T] \end{pmatrix}
$$

and

$$
\mathbf{V} = \begin{pmatrix} d + \alpha'_{N} & 0 \\ 0 & d + \alpha'_{T} \end{pmatrix}
$$

All the entries of $\mathbf{F}$ and $\mathbf{V}^{-1}$ are positive, and the dominant eigenvalue of $-\mathbf{V}$ is clearly negative, so we can use the Next-Generation Theorem. Thus, the mutant invades if the dominant eigenvalue of $\mathbf{A} = \mathbf{V}^{-1}\mathbf{F}$ (which has the same spectrum as the next-generation matrix $\mathbf{F}\mathbf{V}^{-1}$) is greater than 1. With the notations (writing $\delta'_{N} = d + \alpha'_{N}$ and $\delta'_{T} = d + \alpha'_{T}$)

$$
\begin{align*}
R'_{NN} &= \beta'_{NN} / \delta'_{N} \\
R'_{TN} &= \beta'_{TN} / \delta'_{N} \\
R'_{NT} &= \beta'_{NT} / \delta'_{T} \\
R'_{TT} &= \beta'_{TT} / \delta'_{T}
\end{align*}
$$

we have

$$
\mathbf{A} = \begin{pmatrix}
R'_{NN}[S_N | I'_N] & R'_{TN}[S_N | I'_T] \\
R'_{NT}[S_T | I'_N] & R'_{TT}[S_T | I'_T]
\end{pmatrix}
$$
---PAGE_BREAK---

Some straightforward algebra shows that the dominant eigenvalue of this matrix is

$$
\begin{align*}
\mathcal{R} ={}& \frac{1}{2} (R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T]) \\
& + \frac{1}{2} \sqrt{(R'_{NN}[S_N|I'_N] + R'_{TT}[S_T|I'_T])^2 + 4(R'_{NT}R'_{TN}[S_N|I'_T][S_T|I'_N] - R'_{NN}R'_{TT}[S_N|I'_N][S_T|I'_T])}
\end{align*}
$$

When $g_P = 1$ (global dispersal), we recover the expression found by Gandon (2004) for a well-mixed population.
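The closed-form dominant eigenvalue above follows from the quadratic formula for a $2\times 2$ matrix; it can be checked numerically for generic positive entries (an illustrative check, with the entries standing in for the products $R'_{ij}[\cdot|\cdot]$):

```python
import numpy as np

# For A = [[a, b], [c, d]] with positive entries, the spectral radius is
# (1/2)(a + d) + (1/2) sqrt((a + d)² + 4(bc - ad)).
rng = np.random.default_rng(1)
for _ in range(100):
    a, b, c, d = rng.uniform(0.1, 2.0, size=4)
    A = np.array([[a, b], [c, d]])
    closed_form = 0.5 * (a + d) + 0.5 * np.sqrt((a + d) ** 2 + 4.0 * (b * c - a * d))
    assert abs(closed_form - np.max(np.linalg.eigvals(A).real)) < 1e-10
```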
Denoting by $a_{ij}$ the elements of $\mathbf{A}$, we have

$$
\mathbf{A} = \begin{pmatrix} a_{NN} & a_{TN} \\ a_{NT} & a_{TT} \end{pmatrix}
$$

At equilibrium, the dominant eigenvalue is unity, $\mathcal{R} = 1$. An associated right eigenvector is the vector of densities of each class of infected hosts at equilibrium, $\mathbf{u} = (\hat{p}_{I_N} \ \hat{p}_{I_T})^T$. We therefore have

$$
\frac{\hat{p}_{I_T}}{\hat{p}_{I_N}} = \frac{1 - a_{NN}}{a_{TN}} = \frac{a_{NT}}{1 - a_{TT}} \quad (S1.2)
$$
An associated left eigenvector is the vector of reproductive values, $\mathbf{v}$ (Taylor, 1990; Rousset, 2004). Normalising $\mathbf{v}$ such that $\mathbf{v}^T\mathbf{u} = 1$, we find that the class reproductive values $c_j = v_j u_j$ at equilibrium satisfy $c_N + c_T = 1$, with

$$
c_N = \frac{a_{NT} \hat{p}_{I_N}^2}{a_{NT} \hat{p}_{I_N}^2 + a_{TN} \hat{p}_{I_T}^2}. \quad (S1.3)
$$

Furthermore, at equilibrium, $\det(\mathbf{A} - \mathbf{I}) = 0$, which yields the following equilibrium condition

$$
1 - a_{NN} - a_{TT} = a_{NT}a_{TN} - a_{NN}a_{TT} \quad (S1.4)
$$
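Formula (S1.3) can be verified numerically (an illustrative check, not part of the derivation): rescale a positive $2\times 2$ matrix so that its spectral radius is 1 (the equilibrium condition $\mathcal{R} = 1$), compute the Perron right and left eigenvectors, and compare $c_N = v_N u_N$ (with $\mathbf{v}\cdot\mathbf{u} = 1$) to the closed form. Following the text, $a_{TN}$ denotes the $(1,2)$ entry and $a_{NT}$ the $(2,1)$ entry.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.uniform(0.1, 2.0, size=(2, 2))
A = B / np.max(np.abs(np.linalg.eigvals(B)))   # spectral radius 1

w, V = np.linalg.eig(A)
u = np.abs(V[:, np.argmax(w.real)])            # right Perron eigenvector
wl, Vl = np.linalg.eig(A.T)
v = np.abs(Vl[:, np.argmax(wl.real)])          # left Perron eigenvector
v = v / (v @ u)                                # normalise so that v·u = 1

a_TN, a_NT = A[0, 1], A[1, 0]
c_N = v[0] * u[0]
closed = a_NT * u[0] ** 2 / (a_NT * u[0] ** 2 + a_TN * u[1] ** 2)
assert abs(c_N - closed) < 1e-10
assert abs(v[0] * u[0] + v[1] * u[1] - 1.0) < 1e-12   # c_N + c_T = 1
```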
For the sake of simplicity, we now make the additional assumption that transmission can be written as the product of infectivity and susceptibility. Hence, we write $\beta_{ij} = \beta_i \sigma_j$, where $\sigma_N = 1$ and $\sigma_T$ is the relative susceptibility of treated hosts. We then have

$$
\begin{align*}
R'_{NN} &= R'_N = \beta'_N / \delta'_N \\
R'_{TT} &= \sigma_T R'_T = \sigma_T \beta'_T / \delta'_T \\
R'_{TN} &= R'_T \frac{\delta'_T}{\delta'_N} \\
R'_{NT} &= \sigma_T R'_N \frac{\delta'_N}{\delta'_T}
\end{align*}
$$

and we obtain

$$
\mathcal{R} = \frac{1}{2} (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{1}{2} \sqrt{(R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C'} \quad (\text{S1.5})
$$

where

$$
C' = [S_N | I'_N][S_T | I'_T] - [S_N | I'_T][S_T | I'_N] \quad (\text{S1.6})
$$

measures the spatial correlation of treatments experienced by mutant hosts. Equation (S1.4) can then be rewritten as

$$
1 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) = -\sigma_T R_N R_T C \quad (\text{S1.7})
$$

---PAGE_BREAK---

## S1.2 Selection gradient

Assuming that selection is weak, we can further calculate the selection gradient.

$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{\sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C}} $$

At neutrality, we have $\mathcal{R} = 1$ and therefore

$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) > 0 \quad (\text{S1.8}) $$

Using equation (S1.7), we thus have

$$ \sqrt{(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])^2 - 4\sigma_T R_N R_T C} = 1 - \sigma_T R_N R_T C > 0 \quad (\text{S1.9}) $$

Hence

$$ \partial \mathcal{R} = \frac{1}{2} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T]) + \frac{\frac{1}{4} \partial ((R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T])^2 - 4\sigma_T R'_{N} R'_{T} C')}{1 - \sigma_T R_N R_T C} $$

The numerator of the right-hand side of the latter equation can be written as

$$
\begin{aligned}
& \frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
& \qquad + \frac{1}{4}\partial\left((R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T])^2 - 4\sigma_T R'_N R'_T C'\right) \\
&= \frac{1}{2}\left(2 - (R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T])\right) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) \\
& \qquad + \frac{1}{2}(R_N[S_N|I_N] + \sigma_T R_T[S_T|I_T]) \partial(R'_N[S_N|I'_N] + \sigma_T R'_T[S_T|I'_T]) - \sigma_T \partial(R'_N R'_T C')
\end{aligned}
$$

which yields the following expression for $\partial \mathcal{R}$

$$ \partial \mathcal{R} = \frac{1}{1 - \sigma_T R_N R_T C} \partial (R'_{N}[S_N|I'_N] + \sigma_T R'_{T}[S_T|I'_T] - \sigma_T R'_{N} R'_{T} C') \quad (\text{S1.10}) $$

## S1.3 Simplifications

We can write $\partial \mathcal{R}$ as

$$ \partial \mathcal{R} = \frac{\partial W + \partial S}{1 - \sigma_T R_N R_T C} \quad (\text{S1.11}) $$

where $\partial W$ collects all direct selective effects, and $\partial S$ collects all indirect selective effects, i.e. the selective effects on local densities.

### Direct effects

We have

$$ \partial W = [S_N | I_N] \partial R'_N + \sigma_T [S_T | I_T] \partial R'_T - \sigma_T C (R_N \partial R'_T + R_T \partial R'_N) \quad (\text{S1.12}) $$

Plugging (S1.7) into the expression of $\partial W$, we obtain

$$ \partial W = [S_N|I_N]\partial R'_N + \sigma_T [S_T|I_T]\partial R'_T + (R_N\partial R'_T + R_T\partial R'_N)\left(\frac{1-(R_N[S_N|I_N]+\sigma_T R_T[S_T|I_T])}{R_N R_T}\right) \quad (\text{S1.13}) $$

which gives, after simplifications,

$$ \partial W = \frac{\partial R'_{N}}{R_{N}} (1 - \sigma_{T} R_{T} [S_{T}|I_{T}]) + \frac{\partial R'_{T}}{R_{T}} (1 - R_{N}[S_{N}|I_{N}]) \quad (\text{S1.14}) $$
---PAGE_BREAK---

From the dynamics of $I_N$ and $I_T$, we have

$$ R_N[S_N|I_N] = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = 1 - \frac{\beta_T[S_N|I_T]p_{IT}}{h_N p_{S_N}} = 1 - \frac{\beta_T[I_T|S_N]}{h_N} \quad (\text{S1.15}) $$

$$ \sigma_T R_T [S_T | I_T] = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\delta_T p_{IT}} = 1 - \sigma_T \frac{\beta_N [S_T | I_N] p_{IN}}{\sigma_T h_T p_{S_T}} = 1 - \frac{\beta_N [I_N | S_T]}{h_T} \quad (\text{S1.16}) $$

so $\tau_T \equiv 1 - R_N[S_N|I_N]$ is the share of the force of infection on naive hosts that is caused by infections from the treated class, and $\tau_N \equiv 1 - \sigma_T R_T[S_T|I_T]$ has the same interpretation for treated hosts. We then have

$$ \partial W = \tau_N \frac{\partial R'_N}{R_N} + \tau_T \frac{\partial R'_T}{R_T} \quad (\text{S1.17}) $$

### Indirect effects

We now turn to the "spatial" component of the selection gradient

$$
\begin{align}
\partial S &= R_N \partial[S_N | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \nonumber \\
&\quad + \sigma_T R_N R_T ([S_N | I_T] \partial[S_T | I'_N] + [S_T | I_N] \partial[S_N | I'_T] - [S_N | I_N] \partial[S_T | I'_T] - [S_T | I_T] \partial[S_N | I'_N]) \tag{S1.18} \\
&= R_N (1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T (1 - R_N [S_N | I_N]) \partial[S_T | I'_T] \nonumber \\
&\quad + \sigma_T R_N R_T [S_N | I_T] \partial[S_T | I'_N] + \sigma_T R_N R_T [S_T | I_N] \partial[S_N | I'_T] \tag{S1.19} \\
&= R_N [(1 - \sigma_T R_T [S_T | I_T]) \partial[S_N | I'_N] + \sigma_T R_T [S_N | I_T] \partial[S_T | I'_N]] \nonumber \\
&\quad + \sigma_T R_T [(1 - R_N [S_N | I_N]) \partial[S_T | I'_T] + R_N [S_T | I_N] \partial[S_N | I'_T]] \tag{S1.20}
\end{align}
$$

Furthermore, we have

$$ R_T[S_N|I_T] = \frac{\delta_N p_{IN}}{\delta_T p_{IT}} (1 - R_N[S_N|I_N]) = \frac{h_N p_{S_N}}{\sigma_T h_T p_{S_T}} \tau_T \quad (\text{S1.21}) $$

$$ \sigma_T R_N [S_T | I_N] = \frac{\delta_T p_{IT}}{\delta_N p_{IN}} (1 - \sigma_T R_T [S_T | I_T]) = \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \tau_N \quad (\text{S1.22}) $$

This yields

$$ \partial S = R_N \left[ \tau_N \partial[S_N | I'_N] + \frac{h_N p_{S_N}}{h_T p_{S_T}} \tau_T \partial[S_T | I'_N] \right] + \sigma_T R_T \left[ \tau_T \partial[S_T | I'_T] + \frac{h_T p_{S_T}}{h_N p_{S_N}} \tau_N \partial[S_N | I'_T] \right] \quad (\text{S1.23}) $$

or equivalently

$$
\begin{align}
\partial S ={}& \tau_N \left[ R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \partial[S_N | I'_T] \right] \nonumber \\
& + \tau_T \left[ R_N \frac{h_N p_{S_N}}{\sigma_T h_T p_{S_T}} \sigma_T \partial[S_T | I'_N] + \sigma_T R_T \partial[S_T | I'_T] \right] \tag{S1.24}
\end{align}
$$

### Link with reproductive values

The quantities $\tau_N$ and $\tau_T$ have a direct interpretation in terms of reproductive values. Indeed, we have

$$ \tau_T = \frac{\beta_T[I_T|S_N]}{h_N} = \frac{\beta_T[S_N|I_T]p_{IT}}{\delta_N p_{IN}} = a_{TN} \frac{p_{IT}}{p_{IN}} = 1 - a_{NN} \quad (\text{S1.25}) $$

The last equality comes from equation (S1.2). Similarly, we have

$$ \tau_N = a_{NT} \frac{p_{IN}}{p_{IT}} = 1 - a_{TT} \quad (\text{S1.26}) $$

---PAGE_BREAK---

Hence, it follows from equation (S1.4) that

$$ \tau_N + \tau_T = 1 - \sigma_T R_N R_T C \quad (\text{S1.27}) $$

and

$$ \frac{\tau_N}{\tau_N + \tau_T} = \frac{a_{NT} p_{IN}^2}{a_{NT} p_{IN}^2 + a_{TN} p_{IT}^2} \quad (\text{S1.28}) $$

where the last expression can be identified as $c_N$ in equation (S1.3).

### Full selection gradient

Plugging equations (S1.17) and (S1.24) into equation (S1.11), and noting that the denominator is $\tau_N + \tau_T$, we obtain the following expression for the selection gradient

$$
\begin{align}
\partial \mathcal{R} = c_N & \left[ \frac{\partial R'_{N}}{R_N} + R_N \partial[S_N | I'_N] + R_T \frac{\sigma_T h_T p_{S_T}}{h_N p_{S_N}} \partial[S_N | I'_T] \right] \tag{S1.29a} \\
& + c_T \left[ \frac{\partial R'_{T}}{R_T} + R_N \frac{h_N p_{S_N}}{h_T p_{S_T}} \partial[S_T | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \right] \tag{S1.29b}
\end{align}
$$

Although we have obtained this result by direct differentiation of the invasion fitness, we note that an alternative derivation starts by writing the selection gradient as

$$
\partial \mathcal{R} = \sum_{k,l} v_k u_l \partial(a_{lk})
$$

By writing $a_{\ell k} = F_\ell m_{\ell k}$, we can write an equation similar to equation (5) in Rousset (1999), and further simplifications lead to equation (S1.29).

## S1.4 Uncorrelated landscapes

If the landscape is uncorrelated, additional simplifications follow. First, the spatial correlation in treatment is always zero, hence $C = C' = 0$. It follows from equation (S1.5) that the invasion fitness of a rare mutant takes the following simple form:

$$
\mathcal{R} = R'_{N}[S_N | I'_{N}] + R'_{T}\sigma_{T}[S_T | I'_{T}] \quad (\text{S1.30})
$$

Then the selection gradient can be written simply as

$$
\partial \mathcal{R} = R_N[S_N | I'_N] \frac{\partial R'_N}{R_N} + \sigma_T R_T [S_T | I'_T] \frac{\partial R'_T}{R_T} + R_N \partial[S_N | I'_N] + R_T \sigma_T \partial[S_T | I'_T] \quad (\text{S1.31})
$$

For a neutral mutant, we have at equilibrium $[S_N|I'_N] = [S_N|I_N]$ and $[S_T|I'_T] = [S_T|I_T]$. Furthermore, we have at equilibrium

$$
R_N[S_N | I_N] = c_N \quad (\text{S1.32})
$$

and

$$
\sigma_T R_T [S_T | I_T] = c_T = 1 - c_N \quad (\text{S1.33})
$$

Combining equations (S1.30)-(S1.33), and noting that $\partial[S_x|I'_y] = (1-g_P)q_{S_x/I'_y}$, we obtain equation (9) in the main text.

## S1.5 Host reproduction

So far, our results depend neither on host reproduction nor on the specific mechanism generating heterogeneity. The only assumption we make is that the parasite can only transmit horizontally (i.e. there is no vertical transmission). For the specific example of vaccination, we consider density-dependent reproduction, following previous spatial models of host-parasite interactions (Boots & Sasaki, 2000; Lion & Gandon, 2015).

---PAGE_BREAK---

We assume that host reproduction occurs at rate $b$ and can be either global (with probability $g_H$) or local (with probability $1-g_H$). We also assume that only susceptible hosts can reproduce. Reproduction takes place into empty sites, which introduces density-dependence. Offspring are produced at rates $\lambda_N = b[o|S_N]$ and $\lambda_T = b[o|S_T]$ for naive and treated susceptible hosts, respectively, where $[o|S_i] = g_H p_o + (1-g_H)q_{o/S_i}$.

For the vaccination example, we further consider that offspring have a probability $\nu$ of entering the treated class at birth, as depicted in figure 1a. Note that, for a fully imperfect vaccine ($r_i = 0$), all hosts are identical for the parasite and, as a result, $c = \nu$.
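The birth rate above mixes the global density of empty sites with the local density in a host's neighbourhood. A minimal sketch, with illustrative (made-up) densities:

```python
# lambda_i = b * [o|S_i], where [o|S_i] = g_H * p_o + (1 - g_H) * q_o_Si mixes
# the global density of empty sites p_o with the local density q_o_Si around a
# susceptible host of type i (both densities below are hypothetical values).
def reproduction_rate(b, g_H, p_o, q_o_Si):
    return b * (g_H * p_o + (1 - g_H) * q_o_Si)

# Fully local reproduction (g_H = 0) only sees the local neighbourhood:
print(reproduction_rate(b=8, g_H=0.0, p_o=0.3, q_o_Si=0.5))  # 4.0
# Fully global reproduction (g_H = 1) only sees the global density:
print(reproduction_rate(b=8, g_H=1.0, p_o=0.3, q_o_Si=0.5))  # 2.4
```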

## S1.6 Stochastic simulations

We performed stochastic individual-based simulations to analyse the effect of spatial structure and host quality on the evolution of host exploitation. The program was coded in C and implements the host-parasite life cycle (figure 1a in the main text) on a regular square lattice with 100×100 sites. Each site can contain at most one individual. The lattice is updated asynchronously in continuous time using the Gillespie algorithm (Gillespie, 1977).

For the simulations, we used the following trade-off:

$$ \beta(x) = 20 \ln(x+1) \quad (\text{S1.34}) $$

$$ \alpha(x) = x \quad (\text{S1.35}) $$

Upon infection, parasites can mutate at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. All simulations were run with parameter values $b = 8$ and $d = 1$, starting from host exploitation $x = 1.25$. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$.
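The trade-off and mutation step described above can be sketched as follows. This is a Python transcription for illustration (the original program was written in C); clipping the trait at zero is our assumption, since host exploitation cannot be negative:

```python
import math
import random

MUTATION_RATE = 0.05  # per-infection mutation probability (from the text)
MUTATION_SD = 0.05    # standard deviation of mutation effects (from the text)

def beta(x):
    """Transmission trade-off, equation (S1.34)."""
    return 20.0 * math.log(x + 1.0)

def alpha(x):
    """Virulence trade-off, equation (S1.35)."""
    return x

def mutate(x, rng=random):
    """With probability 0.05, perturb the trait by a N(0, 0.05) deviate,
    clipped at 0 (assumption: exploitation cannot be negative)."""
    if rng.random() < MUTATION_RATE:
        x = max(0.0, x + rng.gauss(0.0, MUTATION_SD))
    return x

x = 1.25  # initial host exploitation used in all runs
print(round(beta(x), 3))  # 16.219
```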

## References

[1] Gillespie, D. (1977). Exact stochastic simulation of coupled chemical reactions. *The Journal of Physical Chemistry* **81**: 2340–2361.

[2] Taylor, P. D. (1990). Allele-frequency change in a class-structured population. *Am. Nat.* **135**(1): 95–106. DOI: 10.1086/285034.

[3] Rousset, F. (1999). Reproductive value vs sources and sinks. *Oikos* **86**(3): 591–596.

[4] Boots, M. & A. Sasaki (2000). The evolutionary dynamics of local infection and global reproduction in host-parasite interactions. *Ecol. Lett.* **3**: 181–185. DOI: 10.1046/j.1461-0248.2000.00139.x.

[5] Gandon, S., M. J. Mackinnon, S. Nee & A. F. Read (2001). Imperfect vaccines and the evolution of pathogen virulence. *Nature* **414**: 751–756. DOI: 10.1038/414751a.

[6] Gandon, S., M. J. Mackinnon, S. Nee & A. F. Read (2003). Imperfect vaccination: some epidemiological and evolutionary consequences. *Proc. R. Soc. B* **270**: 1129–1136. DOI: 10.1098/rspb.2003.2370.

[7] Gandon, S. (2004). Evolution of multihost parasites. *Evolution* **58**(3): 455–469. DOI: 10.1111/j.0014-3820.2004.tb01669.x.

[8] Rousset, F. (2004). Genetic structure and selection in subdivided populations. Princeton University Press, Princeton, NJ, USA.

[9] Lion, S. & M. Boots (2010). Are parasites "prudent" in space? *Ecol. Lett.* **13**(10): 1245–1255. DOI: 10.1111/j.1461-0248.2010.01516.x.

[10] Lion, S. & S. Gandon (2015). Evolution of spatially structured host-parasite interactions. *J. Evol. Biol.* DOI: 10.1111/jeb.12551.

---PAGE_BREAK---

# Appendix S2: Evolutionary consequences of an anti-growth vaccine: vaccine coverage (figure S2)

We show here the impact of vaccination coverage on parasite prevalence and virulence, for near-perfect vaccines ($r_2 = 0.9$). We broadly recover the predictions of Gandon et al. (2001, 2003): increasing vaccination coverage has little impact on parasite prevalence, but may select for higher virulence (figure S2a). Note that, as parasite dispersal becomes more local, parasite prevalence is minimised at lower vaccination coverage (figure S2b). Lower parasite dispersal leads to lower prevalence and more prudent exploitation over the whole range of vaccination coverage, but selection for increased virulence is stronger at intermediate parasite dispersal.

Figure S2: The evolutionarily stable host exploitation (a) and prevalence (b) of the parasite as a function of vaccine coverage for an anti-growth vaccine ($r_2$). The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The fractions represent the number of runs that went extinct out of the six runs. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S3: Evolutionary consequences of an anti-transmission vaccine (figure S3)

Figure S3: The evolutionarily stable host exploitation (a,b) and prevalence (c,d) of the parasite as a function of parasite dispersal, vaccine efficacy, and vaccine coverage for an anti-transmission vaccine ($r_3$). The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $b = 8$, $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S4: Effect of parasite evolution on total host density (figure S4)

Figure S4: The total host density on the evolutionary attractor as a function of (a,c) vaccine efficacy and (b,d) vaccine coverage for (a,b) anti-infection ($r_1$) and (c,d) anti-growth ($r_2$) vaccines. The dashed lines indicate the predictions of non-spatial theory. The dots indicate the mean for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S5: Effect of host dispersal (figure S5)

In the main text, we investigate how changes in parasite dispersal affect parasite evolution when hosts reproduce locally. Here, we show the robustness of our results when host dispersal is either partially ($g_H = 0.5$) or fully global ($g_H = 1$). For anti-growth (b) and anti-toxin (c) vaccines, global host dispersal weakens the effect of local parasite dispersal on the evolution of virulence. For anti-infection vaccines (a), the interplay between global host dispersal and local parasite dispersal gives rise to a non-linear relationship between vaccine efficacy and ES virulence, with a maximum for near-perfect vaccines. A complete study of the interplay between host and parasite dispersal kernels is beyond the scope of this paper, but this result suggests that the evolutionary outcome depends on both host and parasite dispersal patterns (see also Lion & Gandon, 2015 for a discussion of homogeneous spatially structured populations). Note that, as expected, global host dispersal always leads to higher prevalence (d,e,f).

Figure S5: The evolutionarily stable host exploitation (a,b,c) and prevalence (d,e,f) for (a,d) anti-infection ($r_1$), (b,e) anti-growth ($r_2$) and (c,f) anti-toxin ($r_4$) vaccines. For each figure, the results for fully local parasite dispersal ($g_P = 0$) and either fully local ($g_H = 0$, plain lines), partially global ($g_H = 0.5$, dotted lines), or fully global ($g_H = 1$, dashed lines) host dispersal are shown. The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.

---PAGE_BREAK---

# Appendix S6: Effect of host fecundity (figure S6)

Previous studies have shown that, in the absence of vaccination, the kin competition effect is predicted to vanish when habitat saturation increases: as host fecundity increases, the differences between spatial and non-spatial models flatten out (Lion & Boots, 2010). Indeed, when host fecundity is infinite, the model converges to a simple SIS model without demography, for which parasite dispersal only affects the speed of evolution, but not the endpoint. Stochastic simulations lead to the same result for anti-infection and anti-transmission vaccines, although for an anti-growth vaccine, the effect of host fecundity appears to be more complex (figure S6).

Figure S6: The evolutionarily stable host exploitation (plain lines) and prevalence (dashed lines) of the parasite as a function of parasite dispersal for (a) an anti-infection vaccine ($r_1$), (b) an anti-growth vaccine ($r_2$) and (c) an anti-transmission vaccine ($r_3$), for a near-perfect vaccine ($\nu = 0.9$ and $r_i = 0.9$) and increasing values of host fecundity ($b = 8, 12, 24, 40, 100$). The dots indicate the mean and standard deviation for six runs of the stochastic process. The mean equilibrium for each run was estimated as the average value of the trait between $t = 18000$ and the simulation end time $t = 20000$. Mutations occurred at rate 0.05. Mutation effects were drawn from a normal distribution with mean 0 and standard deviation 0.05. Simulations were performed on a regular lattice of four neighbours, with 10000 sites. Parameters: $d = 1$, starting from host exploitation $x = 1.25$.
---PAGE_BREAK---

# The Worst Case Finite Optimal Value in Interval Linear Programming

Milan Hladík¹,*

¹ Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Malostranské nám. 25, 11800, Prague, Czech Republic
E-mail: *hladik@kam.mff.cuni.cz*

**Abstract.** We consider a linear programming problem, in which possibly all coefficients are subject to uncertainty in the form of deterministic intervals. The problem of computing the worst case optimal value has already been thoroughly investigated in the past. Notice that the value can be infinite due to infeasibility of some instances. This is a serious drawback if we know a priori that all instances should be feasible. Therefore we focus on the feasible instances only and study the problem of computing the worst case finite optimal value. We present a characterization for the general case and investigate special cases, too. We show that the problem is easy to solve provided interval uncertainty affects the objective function only, but the problem becomes intractable in case of intervals in the right-hand side of the constraints. We also propose a finite reduction based on inspecting candidate bases. We show that processing a given basis is still an NP-hard problem even with a non-interval constraint matrix; however, the problem becomes tractable as long as uncertain coefficients are situated either in the objective function or in the right-hand side only.

**Key words:** linear programming, interval analysis, sensitivity analysis, interval linear programming, NP-completeness

Received: September 28, 2018; accepted: November 14, 2018; available online: December 13, 2018

DOI: 10.17535/crorr.2018.0019

## 1. Introduction

Consider a linear programming (LP) problem

$$ f(A, b, c) = \min c^T x \text{ subject to } x \in M(A, b), \quad (1) $$

where $M(A, b)$ is the feasible set with constraint matrix $A \in \mathbb{R}^{m \times n}$ and the right-hand side vector $b \in \mathbb{R}^m$. We use the convention $\min \emptyset = \infty$ and $\max \emptyset = -\infty$. Basically, one of the following canonical forms

$$ f(A,b,c) = \min c^T x \text{ subject to } Ax = b,\ x \ge 0, \qquad (\text{A}) $$

$$ f(A,b,c) = \min c^T x \text{ subject to } Ax \le b, \qquad (\text{B}) $$

$$ f(A,b,c) = \min c^T x \text{ subject to } Ax \le b,\ x \ge 0 \qquad (\text{C}) $$

is usually considered. As was repeatedly observed, in the interval setting, these forms are not equivalent to each other in general [10, 12, 17], so they have to be analyzed separately. We can consider a general form involving all the canonical forms together [13], but for the sake of exposition, it is better to consider the canonical forms separately.

*Corresponding author.

---PAGE_BREAK---

**Interval data.** An interval matrix is defined as the set

$$ \mathbf{A} = \{ A \in \mathbb{R}^{m \times n};\ \underline{A} \leq A \leq \overline{A} \}, $$

where $\underline{A}, \overline{A} \in \mathbb{R}^{m \times n}$, $\underline{A} \leq \overline{A}$, are given matrices. We will also use the notion of the midpoint and radius matrices, defined respectively as

$$ A_c := \frac{1}{2}(\underline{A} + \overline{A}), \quad A_{\Delta} := \frac{1}{2}(\overline{A} - \underline{A}). $$

The set of all $m \times n$ interval matrices is denoted by $\mathbb{IR}^{m \times n}$. Similar notation is used for interval vectors, considered as one-column interval matrices, and for interval numbers. For interval arithmetic see, e.g., the textbooks [20, 22].
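The midpoint and radius definitions are straightforward to compute; a minimal sketch with illustrative bounds:

```python
import numpy as np

# Midpoint and radius of an interval matrix, as defined above
# (the lower and upper bounds below are illustrative values).
A_lower = np.array([[1.0, -2.0],
                    [0.0,  3.0]])
A_upper = np.array([[3.0,  0.0],
                    [4.0,  5.0]])

A_c = 0.5 * (A_lower + A_upper)      # midpoint matrix
A_Delta = 0.5 * (A_upper - A_lower)  # radius matrix (entrywise nonnegative)

# Every A in the interval matrix satisfies |A - A_c| <= A_Delta entrywise.
print(A_c.tolist())      # [[2.0, -1.0], [2.0, 4.0]]
print(A_Delta.tolist())  # [[1.0, 1.0], [2.0, 1.0]]
```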
|
| 48 |
+
|
| 49 |
+
**Interval linear programming.** Let $\mathbf{A} \in \mathbb{IR}^{m \times n}$, $\mathbf{b} \in \mathbb{IR}^m$ and $\mathbf{c} \in \mathbb{IR}^n$ be given. By an interval linear programming problem we mean a family of LP problems (1) with $\mathbf{A} \in \mathbf{A}$, $\mathbf{b} \in \mathbf{b}$ and $\mathbf{c} \in \mathbf{c}$. A particular LP problem from this family is called a *realization*.
|
| 50 |
+
|
| 51 |
+
In the recent years, the optimal value range problem was intensively studied. The problem consists of determining the best case and worst case optimal values defined as
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
\begin{align*}
|
| 55 |
+
\underline{f} &:= \min f(\mathbf{A}, \mathbf{b}, \mathbf{c}) && \text{subject to } \mathbf{A} \in \mathbf{A}, \mathbf{b} \in \mathbf{b}, c \in \mathbf{c}, \\
|
| 56 |
+
\overline{f} &:= \max f(\mathbf{A}, \mathbf{b}, \mathbf{c}) && \text{subject to } \mathbf{A} \in \mathbf{A}, \mathbf{b} \in \mathbf{b}, c \in \mathbf{c}.
|
| 57 |
+
\end{align*}
|
| 58 |
+
$$
|
| 59 |
+
|
| 60 |
+
The interval $\boldsymbol{f} = [\boldsymbol{f}, \boldsymbol{\bar{f}}]$ then gives us the range of optimal values of the interval LP problem; each realization (1) has the optimal value in $\boldsymbol{f}$. If we define the image of optimal values
|
| 61 |
+
|
| 62 |
+
$$ f(\mathbf{A}, \mathbf{b}, \mathbf{c}) := \{f(\mathbf{A}, b, c) \mid A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}\}, $$
|
| 63 |
+
|
| 64 |
+
then the optimal value range alternatively reads
|
| 65 |
+
|
| 66 |
+
$$
|
| 67 |
+
\begin{align*}
|
| 68 |
+
\underline{f} &:= \min f(\mathbf{A}, \mathbf{b}, \mathbf{c}), \\
|
| 69 |
+
\overline{f} &:= \max f(\mathbf{A}, \mathbf{b}, \mathbf{c}).
|
| 70 |
+
\end{align*}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
References [6, 12] present a survey on this topic. Methods and formulae for determining $\underline{f}$ and $\overline{f}$ were discussed in [5, 11, 21, 24]. Some of the values are easily computable, but some are NP-hard, depending on the particular form (A)-(C) of the LP problem. The hard cases are $\overline{f}$ for type (A) and $\underline{f}$ for type (B); NP-hardness was proved in [6, 7, 26, 28]. Hladík [15] proposes approximation method for the intractable cases. Garajová et al. [10] study what is the effect of transformations of the constraints on the optimal value range, among others.

Besides the optimal value range problem, the effects of interval data on the optimal solution set were also investigated. See [2, 16, 19] for some of the recent results and the types of solutions considered.

**Problem formulation.** The worst case optimal value $\overline{f}$ can be infinite (i.e., $\overline{f} = \infty$) due to infeasibility of some realization. However, in many situations, we know a priori or can ensure that all instances are feasible; a typical example is the transportation problem [4]. Therefore, we focus on feasible realizations only and define the *worst case finite optimal value* as

$$ \bar{f}_{fin} := \max f(A, b, c) \quad \text{subject to} \quad A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, f(A, b, c) < \infty. $$

**Example 1.** Consider the interval LP problem

$$
\min x \quad \text{subject to} \quad x \le [-1, 1], \ x \ge 0.
$$

Choosing a negative value from the interval $[-1, 1]$, we obtain an infeasible LP problem. Choosing a nonnegative value, the resulting optimal value is zero. Therefore $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = \{0, \infty\}$ and $\boldsymbol{f} = [\underline{f}, \overline{f}] = [0, \infty]$, but $\bar{f}_{fin} = 0$.
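
The case analysis of Example 1 is simple enough to replay directly. The following plain-Python sketch (our own illustration, not part of the paper) treats the value chosen from the interval right-hand side as a parameter $b \in [-1, 1]$:

```python
# Example 1: min x  subject to  x <= b, x >= 0, with b chosen from [-1, 1].
def optimal_value(b):
    if b < 0:
        return float("inf")  # feasible set {0 <= x <= b} is empty
    return 0.0               # x = 0 is feasible and optimal

values = {optimal_value(b) for b in (-1.0, -0.5, 0.0, 0.5, 1.0)}
print(sorted(values))                              # image of optimal values
print(max(v for v in values if v < float("inf")))  # worst case finite value
```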

We will assume that there is at least one infeasible realization, that is, $f(A, b, c) = \infty$ for some $A \in \mathbf{A}$, $b \in \mathbf{b}$ and $c \in \mathbf{c}$; methods for checking this property are discussed in [6, 13], among others. Otherwise, if every realization is feasible, then $\bar{f}_{fin} = \bar{f}$, and we can use standard techniques for computing $\bar{f}$.

## 2. General results

As the following example shows, even the value of $\bar{f}_{fin}$ can be infinite. We will show later in Proposition 5 that this happens only if there are intervals in the constraint matrix.

**Example 2.** Consider the interval LP problem

$$ \min -x_1 \quad \text{subject to} \quad [0,1]x_2 = -1, \ x_1 - x_2 = 0, \ x_1, x_2 \le 0. $$

By direct inspection, we observe that $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = [1, \infty]$ and $\boldsymbol{f} = [1, \infty]$. We have $\bar{f} = \infty$ because the LP problem is infeasible when choosing zero from the interval $[0, 1]$. However, we also have $\bar{f}_{fin} = \infty$ since the optimal value $f(A, b, c) \to \infty$ as the selection from $[0, 1]$ tends to zero.
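
A quick numerical illustration (a sketch of our own, not from the paper): for a coefficient $a \in (0, 1]$ chosen from the interval $[0, 1]$, the constraints force $x_1 = x_2 = -1/a$, so the optimal value $1/a$ grows without bound as $a \to 0^+$:

```python
# Example 2: min -x1  s.t.  a*x2 = -1,  x1 - x2 = 0,  x1, x2 <= 0,
# where a is the value chosen from the interval coefficient [0, 1].
def optimal_value(a):
    if a <= 0:
        return float("inf")  # a = 0 makes the realization infeasible
    x2 = -1.0 / a            # forced by a*x2 = -1
    x1 = x2                  # forced by x1 - x2 = 0 (both nonpositive)
    return -x1               # objective value equals 1/a

for a in (1.0, 0.5, 0.01, 1e-6):
    print(a, optimal_value(a))
```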

Denote by

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c) \qquad (2) $$

the dual problem to (1). For the canonical forms (A)–(C), the dual problems respectively read

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, \qquad (A) $$

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y = c, \ y \le 0, \qquad (B) $$

$$ g(A, b, c) = \max b^T y \quad \text{subject to} \quad A^T y \le c, \ y \le 0. \qquad (C) $$

By duality in linear programming, we can replace the inner optimization problem in the definition of $\bar{f}_{fin}$ by its dual problem with no additional assumptions. This is a bit surprising since duality in real or interval linear programming usually needs some kind of (strong) feasibility; see Novotná et al. [23].

**Proposition 1.** We have

$$ \bar{f}_{fin} = \max g(A,b,c) \quad \text{subject to} \quad A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, g(A,b,c) < \infty. \qquad (3) $$

**Proof.** By strong duality in linear programming, the primal and the dual problem have the same optimal value as long as at least one of them is feasible. If the primal problem is infeasible for every realization of the interval data, then for every realization the dual problem is either infeasible or unbounded; in any case, both sides of (3) are equal to $-\infty$. Thus we will assume that the feasible set $M(A,b)$ is nonempty for at least one realization. This ensures feasibility of at least one realization, so we can replace the primal problem by its dual. Notice that we need not assume feasibility of all realizations, since primal infeasible instances are idle for both the primal and the dual problem. $\square$

The advantage of formula (3) is that the “max min” optimization problem is reduced to the “max max” problem

$$ \bar{f}_{fin} = \max b^T y \quad \text{subject to} \quad y \in N(A^T, c), \ M(A,b) \neq \emptyset, \ A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}, \qquad (4) $$

which can hopefully be easier to deal with.

## 3. Special cases with real $A$

In this section, we focus on certain sub-classes of the main problem. In particular, we consider the case of a real constraint matrix, i.e., $A_{\Delta} = 0$. This case is not much of a restriction in generality since the matrix $A$ characterizes the structure of the model and is often fixed. This is particularly true for transportation problems or network flows [1, 27]. In contrast, the costs $c$ in the objective function and the capacities corresponding to the right-hand side vector $b$ are typically affected by various kinds of uncertainty.

As we already mentioned, transformations between the LP forms (A)–(C) are not equivalent in general. Nevertheless, in some cases, equivalence holds. Garajová et al. [10] showed that provided $A$ is real, the finite optimal values (and therefore also $\bar{f}_{fin}$) are not changed under the following transformations:

* transform an interval LP problem of type (A)

$$ \min c^T x \text{ subject to } Ax = b, \ x \ge 0 $$

to form (C) by splitting equations into double inequalities

$$ \min c^T x \text{ subject to } Ax \le b, \ Ax \ge b, \ x \ge 0, $$

* transform an interval LP problem of type (B)

$$ \min c^T x \text{ subject to } Ax \le b $$

to form (C) by imposing nonnegativity of variables

$$ \min c^T x^{+} - c^T x^{-} \text{ subject to } Ax^{+} - Ax^{-} \le b, \ x^{+}, x^{-} \ge 0. $$

In Garajová et al. [10], it was also observed that the first transformation may change the finite optimal values in the case of an interval $\mathbf{A}$. Below, we show by an example that this also holds for the second transformation.

**Example 3.** Consider the interval LP problem of type (B)

$$ \min -x \text{ subject to } [0,1]x \le -1, \ -[1,2]x \le 5. $$

It is easy to see that $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = [1, 5] \cup \{\infty\}$ and $\bar{f}_{fin} = 5$. Imposing nonnegativity of variables leads to the interval LP problem

$$ \min -x^{+} + x^{-} \text{ subject to } [0,1]x^{+} - [0,1]x^{-} \le -1, \ -[1,2]x^{+} + [1,2]x^{-} \le 5. $$

Now, the set of optimal values expands significantly. For instance, the realization

$$ \min -x^{+} + x^{-} \text{ subject to } 0.1x^{+} - 0.1x^{-} \le -1, \ -2x^{+} + x^{-} \le 5 $$

has the optimal value of 10. By direct inspection, we can see that $f(\mathbf{A}, \mathbf{b}, \mathbf{c}) = \{-\infty\} \cup [1, \infty]$. That is, the worst case finite optimal value grows to $\bar{f}_{fin} = \infty$.
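
The optimal value 10 of the displayed realization can be confirmed with a tiny vertex-enumeration solver. This is an illustrative sketch (the helper `lp2_min` is ours, not from the paper); it is valid here because the realization is a bounded two-variable LP, so the minimum is attained at a vertex of the feasible polygon:

```python
from itertools import combinations

def lp2_min(c, A, b, tol=1e-9):
    """min c.x over {x in R^2 : A x <= b}; assumes a bounded problem,
    so the optimum is attained at a vertex (intersection of two constraints)."""
    best = None
    for (a1, r1), (a2, r2) in combinations(zip(A, b), 2):
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) < tol:
            continue  # parallel constraint lines, no vertex
        x = ((r1*a2[1] - r2*a1[1]) / det, (a1[0]*r2 - a2[0]*r1) / det)
        if all(row[0]*x[0] + row[1]*x[1] <= rhs + tol for row, rhs in zip(A, b)):
            val = c[0]*x[0] + c[1]*x[1]
            if best is None or val < best:
                best = val
    return best

# Realization from Example 3 in variables (x_plus, x_minus):
#   min -x_plus + x_minus
#   s.t. 0.1 x_plus - 0.1 x_minus <= -1,  -2 x_plus + x_minus <= 5,
#        x_plus, x_minus >= 0  (written as -x <= 0)
A = [(0.1, -0.1), (-2.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [-1.0, 5.0, 0.0, 0.0]
print(lp2_min((-1.0, 1.0), A, b))  # optimal value 10, at (x_plus, x_minus) = (5, 15)
```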

### 3.1. Interval objective function

If interval data are situated in the objective vector only, computing $\bar{f}_{fin}$ is easy; it amounts to solving one LP problem.

**Proposition 2.** If $A$ and $b$ are real, then computation of $\bar{f}_{fin}$ is a polynomial problem.

**Proof.** Under the assumptions, the problem (4) takes the form of an LP problem in the variables $x, y, c$. Moreover, the variable $c$ can be easily eliminated. For types (A) and (C) in particular, the resulting LP problems read, respectively,

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax = b, \ x \ge 0, \ A^T y \le \bar{c}, \qquad (5) $$

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, \ x \ge 0, \ A^T y \le \bar{c}, \ y \le 0. \qquad (6) $$

For type (B) we have

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } Ax \le b, \ \underline{c} \le A^T y \le \bar{c}, \ y \le 0. \quad \square $$

**Corollary 1.** Suppose that $A$ and $b$ are real and $M(A, b) \neq \emptyset$. For interval LP problems of types (A) and (C), the value of $\bar{f}_{fin}$ is attained at $c := \bar{c}$.

**Proof.** Due to $M(A,b) \neq \emptyset$, problems (5) and (6) take, respectively, the form of

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \bar{c}, $$

$$ \bar{f}_{fin} = \max b^T y \text{ subject to } A^T y \le \bar{c}, \ y \le 0. $$

Again by $M(A,b) \neq \emptyset$, we can replace the LP problems by their duals

$$ \bar{f}_{fin} = \min \bar{c}^T x \text{ subject to } Ax = b, \ x \ge 0, $$

$$ \bar{f}_{fin} = \min \bar{c}^T x \text{ subject to } Ax \le b, \ x \ge 0. $$

The LP problems on the right-hand sides yield $\bar{f}_{fin}$ for the corresponding LP forms. $\square$

Notice that for LP problems of type (B), this property does not hold. In general, $\bar{f}_{fin}$ is not attained at an extremal value of $c$, which is illustrated by the following example.

**Example 4.** Consider the interval LP problem of type (B)

$$ \min -x_1 + c_2 x_2 \text{ subject to } x_1 + x_2 \le 2, \ -x_1 + x_2 \le 0, $$

where $c_2 \in \mathbf{c}_2 = [-2, -0.5]$. It is not hard to see that $\bar{f}_{fin} = \bar{f} = -2$, and it is attained for the value $c_2 := -1$ at the point $x = (1, 1)^T$. For smaller $c_2$, the optimal value is $-1 + c_2 < -2$. For larger $c_2$, the optimal value is $-\infty$ since the problem is unbounded.
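
The dependence on $c_2$ can be sketched explicitly (our own illustration, not from the paper): the feasible set has the single vertex $x = (1, 1)^T$ and its recession cone is spanned by $(1, -1)^T$ and $(-1, -1)^T$; the problem is unbounded exactly when the objective decreases along one of these rays:

```python
def optimal_value(c2):
    # Feasible set {x1 + x2 <= 2, -x1 + x2 <= 0}: one vertex (1, 1);
    # recession cone spanned by the directions (1, -1) and (-1, -1).
    if -1.0 - c2 < 0:          # objective rate along (1, -1)
        return float("-inf")   # unbounded realization (c2 > -1)
    if 1.0 - c2 < 0:           # objective rate along (-1, -1)
        return float("-inf")   # cannot happen for c2 <= -1
    return -1.0 + c2           # value attained at the vertex (1, 1)

for c2 in (-2.0, -1.5, -1.0, -0.9, -0.5):
    print(c2, optimal_value(c2))
```

The worst case finite value over all sampled realizations is $-2$, attained at the interior point $c_2 = -1$.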

### 3.2. Interval right-hand side

In contrast to the previous case, if interval data are situated in the right-hand side vector only (i.e., $A_{\Delta} = 0$ and $c_{\Delta} = 0$), computation of $\bar{f}_{fin}$ is intractable.

**Proposition 3.** If $A$ and $c$ are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (A).

**Proof.** By [9], checking whether there is at least one feasible realization of the interval system

$$ A^T y \le 0, \ \mathbf{b}^T y > 0 $$

is an NP-hard problem. Hence it is NP-hard to check $\bar{f} > 0$ (not yet speaking about $\bar{f}_{fin}$) for the interval LP problem

$$ \max \mathbf{b}^T y \text{ subject to } A^T y \le 0. $$

Due to positive homogeneity of the constraints, we can rewrite the problem as

$$
\max \mathbf{b}^T y \text{ subject to } A^T y \le 0, \ y \le e, \ -y \le e, \qquad (7)
$$

where $e = (1, \dots, 1)^T$. For this interval problem, checking $\bar{f}_{fin} > 0$ is NP-hard.

The interval problem (7) follows the form of (3); the condition $g(A, b, c) < \infty$ need not be considered since the problem is feasible and has a finite optimal value for each realization. Thus we can view this problem as the dual of an interval LP problem of type (A), which has a fixed objective function vector and a fixed constraint matrix. $\square$

**Corollary 2.** If $A$ and $c$ are real, then checking $\bar{f}_{fin} > 0$ is NP-hard for type (B) and for type (C).

**Proof.** By Proposition 3, checking $\bar{f}_{fin} > 0$ is NP-hard for an interval LP problem

$$
\min c^T x \text{ subject to } Ax = \mathbf{b}, \ x \ge 0.
$$

According to the discussion at the beginning of Section 3, the value of $\bar{f}_{fin}$ is not changed under the transformation of equations to double inequalities

$$
\min c^T x \text{ subject to } Ax \le b, \ Ax \ge b, \ x \ge 0.
$$

This is, however, a type (C) problem, for which the question must therefore be NP-hard as well.

Type (B) problems are also NP-hard since every problem in the form of (C) is essentially in the form of (B). $\square$

Despite intractability, the computation of $\bar{f}_{fin}$ need not always be so hard. If $A$ is real, then (4) takes the form of a bilinear programming problem, that is, the constraints are linear and the objective function is bilinear (with respect to the variables $y, b, c$). Even though bilinear programming is NP-hard in general, some instances may be solvable faster.

**Example 5.** Consider an interval LP problem in the form

$$
\min c^T x \text{ subject to } Ax \ge b
$$

with $\mathbf{b} > 0$. Then (4) reads

$$
\bar{f}_{fin} = \max b^T y \text{ subject to } Ax \ge b, \ A^T y = c, \ y \ge 0, \ b \in \mathbf{b}.
$$

Since the variables are nonnegative, it has the special form of a geometric program, and hence it is efficiently solvable [3].

## 4. Basis approach

If the LP problem (1) has a finite optimal value, then it possesses an optimal solution corresponding to an optimal basis. For concreteness, consider a type (A) problem. A basis $B$ is optimal if and only if the following two conditions are satisfied:

$$
A_B^{-1} b \ge 0, \tag{8a}
$$

$$
c_N^T - c_B^T A_B^{-1} A_N \ge 0^T. \tag{8b}
$$

The optimal value then is $f(A, b, c) = c_B^T A_B^{-1} b$.

Given a basis $B$ and an interval LP problem, we will now address the question of what is the highest optimal value achievable with this basis. This can be formulated as the optimization problem

$$
\max c_B^T A_B^{-1} b \text{ subject to (8), } A \in \mathbf{A}, b \in \mathbf{b}, c \in \mathbf{c}. \quad (9)
$$
**Real constraint matrix.** Suppose from now on that $A$ is real. Then the optimization problem (9) reads

$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \ c \in \mathbf{c}. \tag{10} $$

Its constraints are linear in the variables $b, c$. Therefore, checking its feasibility is an easy task. In accordance with [12], we say that a basis $B$ is *weakly optimal* if it admits at least one finite optimal value, that is, $B$ is optimal for some realization. From the above reasoning, we have

**Proposition 4.** *Checking whether a basis $B$ is weakly optimal is a polynomial problem.*

The feasible set of (10) is bounded, so the optimal value is bounded, too. Since there are finitely many bases, the worst case finite optimal value must be finite. Hence we just derived

**Proposition 5.** If $A$ is real, then $\bar{f}_{fin} < \infty$.

If $c$ is real, then (9) takes the form of an LP problem

$$ \max c_B^T A_B^{-1} b \quad \text{subject to} \quad (8), \ b \in \mathbf{b}, \tag{11} $$

and so it is polynomially solvable. Similarly in the case when $b$ is real.

**Proposition 6.** If $A, b$ are real or $A, c$ are real, then solving (9) is polynomial.

Solving problem (9) with $A$ real and both $\mathbf{b}$ and $\mathbf{c}$ interval is, however, still intractable.

**Proposition 7.** If $A$ is real, then solving (9) is NP-hard.

**Proof.** By Witsenhausen [29], it is NP-hard to find the maximum value of a bilinear form $u^T M v$ on the interval domain $u, v \in [0, 1]^n$, where $M$ is symmetric and nonsingular. We reduce this problem to our problem. We put $\mathbf{b} := [0, 1]^n$ and $A_B := I_n$, where $I_n$ is the identity matrix. Next, we substitute $c_B := M u$. The condition

$$ c_B = M u, \ u \in [0, 1]^n $$

is equivalent to

$$ 0 \le M^{-1} c_B \le 1, $$

so we can formulate it as (8b) with $A_N = (M^{-1}, -M^{-1})$ and $c_N = (1^T, 0^T)^T$. The condition (8a) is trivially satisfied since $A_B^{-1} b = b \in [0, 1]^n$. This completes the reduction. $\square$

**Real A and c.** By Proposition 3 we know that computing $\bar{f}_{fin}$ is NP-hard even when $A$ and $c$ are real, and intervals are situated in the right-hand side vector $\mathbf{b}$ only. The above considerations give us a finite reduction for computing $\bar{f}_{fin}$: for each basis $B$, check whether it is weakly optimal and determine the worst case optimal value associated with $B$ by solving the LP problem (11).

In this way, the box $\mathbf{b}$ splits into convex polyhedral sub-parts, which are usually called stability or critical regions in the context of sensitivity analysis and parametric programming [8]. Each region corresponds to a weakly optimal basis. In the area of interval linear programming, but in another context, stability regions were also discussed in Mráz [21].

The obvious drawback of this approach is that there are exponentially many bases. On the other hand, the number of weakly optimal bases might be reasonably small. In order to process them, consider the following graph. The nodes correspond to weakly optimal bases. There is an edge between two nodes if and only if the corresponding bases are neighbors, that is, the basic index sets differ in exactly one entry. Since the set $\mathbf{b}$ of the objective vectors of the dual problem (2) is convex and compact, the graph of weakly optimal bases is connected. Therefore, we can start with one weakly optimal basis, inspect the neighboring bases for weak optimality, and proceed until all weakly optimal bases are found.

Figure 1: (Example 6) Illustration of the dual problem: for different values of the objective vector $\mathbf{b}$, the optimal solution moves from $y^1$ to $y^2$ and to unbounded instances.

This method can be significantly faster than processing all possible bases. In particular, if the interval vector $\mathbf{b}$ is narrow, then we can expect that the number of weakly optimal bases is small, or even that there is a unique one. The case of a unique basis is called a *basis stable* problem and was investigated in [14, 18, 25]. Even though it is NP-hard to check basis stability for a general interval LP problem, there are practically efficient sufficient conditions; see [14].

Moreover, basis stability is polynomially decidable provided $A, b$ or $A, c$ are real, which is our case. Concretely, we have to verify two conditions. First, check (8b), which is easy as all the data involved are real. Second, compute by interval arithmetic the expression $A_B^{-1}\mathbf{b}$, and check that its lower bound is nonnegative.
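
The second condition multiplies a real matrix by an interval vector, which takes only a few lines of interval arithmetic. The sketch below uses our own helper names and, as data, the matrix $A_B^{-1}$ for the basis $B = \{1, 2\}$ of Example 6 below together with $\mathbf{b} = ([3,5], [2,4])^T$:

```python
def scale(coef, iv):
    """Multiply the interval iv = (lo, hi) by a real coefficient."""
    lo, hi = iv
    return (coef * lo, coef * hi) if coef >= 0 else (coef * hi, coef * lo)

def interval_matvec(M, iv_vec):
    """Enclosure of {M b : b in the box iv_vec} for a real matrix M."""
    out = []
    for row in M:
        parts = [scale(cf, iv) for cf, iv in zip(row, iv_vec)]
        out.append((sum(p[0] for p in parts), sum(p[1] for p in parts)))
    return out

# A_B^{-1} for the basis B = {1, 2} of Example 6, with b in [3,5] x [2,4]
enclosure = interval_matvec([[-1.0, 2.0], [1.0, -1.0]], [(3.0, 5.0), (2.0, 4.0)])
print(enclosure)  # [(-1.0, 5.0), (-1.0, 3.0)]
```

Since some lower bounds are negative, condition (8a) is not guaranteed for every $b \in \mathbf{b}$, so basis stability cannot be concluded; indeed, Example 6 has two weakly optimal bases.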

**Example 6.** Consider the interval LP problem of type (A) with data

$$A = \begin{pmatrix} 1 & 2 & 0 & -1 & -1 \\ 1 & 1 & 1 & 1 & 0 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} [3, 5] \\ [2, 4] \end{pmatrix}, \quad c = (10 \ 20 \ 5 \ 3 \ 1)^T.$$

The dual problem is illustrated in Figure 1. There are two weakly optimal bases, $B = \{1, 2\}$ and $B' = \{1, 3\}$. In the figure, they correspond to the vertices $y^1 = (10,0)^T$ and $y^2 = (5,5)^T$.

For basis $B$, the constraint $A_B^{-1}\mathbf{b} \ge 0$ from (8a) takes the form

$$
\begin{aligned}
-b_1 + 2b_2 &\ge 0, \\
b_1 - b_2 &\ge 0.
\end{aligned}
$$

By solving the LP problem (11), we compute the highest optimal value corresponding to this basis as 50.

For basis $B'$, the constraint $A_{B'}^{-1}\mathbf{b} \ge 0$ reads

$$
\begin{aligned}
b_1 &\ge 0, \\
-b_1 + b_2 &\ge 0.
\end{aligned}
$$

The LP problem (11) now gives the value of 40 for the highest optimal value associated with $B'$.

In total, we see that the worst case finite optimal value is $\bar{f}_{fin} = 50$ and it is attained for basis $B$. Figure 2 depicts the interval vector $\mathbf{b}$ and its sub-parts corresponding to the optimal bases $B$ and $B'$ and to infeasible instances.
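
Both values can be reproduced by maximizing the objective $c_B^T A_B^{-1} b$ (here linear in $b$: it equals $10 b_1$ for $B$ and $5 b_1 + 5 b_2$ for $B'$) over the box $\mathbf{b}$ intersected with the respective basis region. A small vertex-enumeration sketch of our own:

```python
from itertools import combinations

def lp2_max(c, A, rhs, tol=1e-9):
    """max c.b over {b in R^2 : A b <= rhs} via vertex enumeration
    (valid here: the feasible region is a bounded polygon)."""
    best = None
    for (a1, r1), (a2, r2) in combinations(zip(A, rhs), 2):
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) < tol:
            continue  # parallel constraint lines, no vertex
        x = ((r1*a2[1] - r2*a1[1]) / det, (a1[0]*r2 - a2[0]*r1) / det)
        if all(row[0]*x[0] + row[1]*x[1] <= r + tol for row, r in zip(A, rhs)):
            val = c[0]*x[0] + c[1]*x[1]
            if best is None or val > best:
                best = val
    return best

box = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
box_rhs = [5.0, -3.0, 4.0, -2.0]   # b1 in [3, 5], b2 in [2, 4]

# basis B = {1, 2}: objective 10*b1, region -b1 + 2*b2 >= 0, b1 - b2 >= 0
vB = lp2_max((10.0, 0.0), box + [(1.0, -2.0), (-1.0, 1.0)], box_rhs + [0.0, 0.0])
# basis B' = {1, 3}: objective 5*b1 + 5*b2, region b1 >= 0, -b1 + b2 >= 0
vB2 = lp2_max((5.0, 5.0), box + [(-1.0, 0.0), (1.0, -1.0)], box_rhs + [0.0, 0.0])
print(vB, vB2)  # 50.0 40.0
```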

Figure 2: (Example 6) The sub-parts of the interval vector $\mathbf{b}$ corresponding to the optimal bases $B$ and $B'$ and to infeasible instances.

## 5. Conclusion

We investigated the problem of computing the highest possible optimal value when the input data are subject to variations within given intervals and we restrict ourselves to feasible instances only. We analyzed the computational complexity issues by identifying the cases that are polynomially solvable and those that are NP-hard. The basis approach offers a method that is not a priori exponential even for the NP-hard cases.

Several open questions arose during the work on this topic. They include, for example, the computational complexity of the following question: Is $\bar{f}_{fin}$ attained for a given basis $B$?

## Acknowledgement

The author was supported by the Czech Science Foundation Grant P403-18-04735S.

## References

[1] Ahuja, R. K., Magnanti, T. L. and Orlin, J. B. (1993). Network Flows: Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall.

[2] Ashayerinasab, H. A., Nehi, H. M. and Allahdadi, M. (2018). Solving the interval linear programming problem: A new algorithm for a general case. Expert Systems with Applications, 93, Suppl. C, 39–49.

[3] Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge: Cambridge University Press.

[4] Cerulli, R., D'Ambrosio, C. and Gentili, M. (2017). Best and worst values of the optimal cost of the interval transportation problem. In Sforza, A. and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 367–374). Cham: Springer.

[5] Chinneck, J. W. and Ramadan, K. (2000). Linear programming with interval coefficients. Journal of the Operational Research Society, 51(2), 209–220.

[6] Fiedler, M., Nedoma, J., Ramík, J., Rohn, J. and Zimmermann, K. (2006). Linear Optimization Problems with Inexact Data. New York: Springer.

[7] Gabrel, V. and Murat, C. (2010). Robustness and duality in linear programming. Journal of the Operational Research Society, 61(8), 1288–1296.

[8] Gal, T. and Greenberg, H. J. (Eds.) (1997). Advances in Sensitivity Analysis and Parametric Programming. Boston: Kluwer Academic Publishers.

[9] Garajová, E., Hladík, M. and Rada, M. (2017). On the properties of interval linear programs with a fixed coefficient matrix. In Sforza, A. and Sterle, C. (Eds.), Optimization and Decision Science: Methodologies and Applications, volume 217 of Springer Proceedings in Mathematics & Statistics, (pp. 393–401). Cham: Springer.

[10] Garajová, E., Hladík, M. and Rada, M. (2018). Interval linear programming under transformations: Optimal solutions and optimal value range. Central European Journal of Operations Research. In press, doi: 10.1007/s10100-018-0580-5.

[11] Hladík, M. (2009). Optimal value range in interval linear programming. Fuzzy Optimization and Decision Making, 8(3), 283–294.

[12] Hladík, M. (2012). Interval linear programming: A survey. In Mann, Z. A. (Ed.), Linear Programming – New Frontiers in Theory and Applications, chapter 2, (pp. 85–120). New York: Nova Science Publishers.

[13] Hladík, M. (2013). Weak and strong solvability of interval linear systems of equations and inequalities. Linear Algebra and its Applications, 438(11), 4156–4165.

[14] Hladík, M. (2014). How to determine basis stability in interval linear programming. Optimization Letters, 8(1), 375–389.

[15] Hladík, M. (2014). On approximation of the best case optimal value in interval linear programming. Optimization Letters, 8(7), 1985–1997.

[16] Hladík, M. (2017). On strong optimality of interval linear programming. Optimization Letters, 11(7), 1459–1468.

[17] Hladík, M. (2017). Transformations of interval linear systems of equations and inequalities. Linear and Multilinear Algebra, 65(2), 211–223.

[18] Koníčková, J. (2001). Sufficient condition of basis stability of an interval linear programming problem. ZAMM, Z. Angew. Math. Mech., 81, Suppl. 3, 677–678.

[19] Li, W., Liu, X. and Li, H. (2015). Generalized solutions to interval linear programmes and related necessary and sufficient optimality conditions. Optimization Methods and Software, 30(3), 516–530.

[20] Moore, R. E., Kearfott, R. B. and Cloud, M. J. (2009). Introduction to Interval Analysis. Philadelphia, PA: SIAM.

[21] Mráz, F. (1998). Calculating the exact bounds of optimal values in LP with interval coefficients. Annals of Operations Research, 81, 51–62.

[22] Neumaier, A. (1990). Interval Methods for Systems of Equations. Cambridge: Cambridge University Press.

[23] Novotná, J., Hladík, M. and Masařík, T. (2017). Duality gap in interval linear programming. In Zadnik Stirn, L. et al. (Eds.), Proceedings of the 14th International Symposium on Operational Research SOR'17, Bled, Slovenia, September 27–29, 2017, (pp. 501–506). Ljubljana, Slovenia: Slovenian Society Informatika.

[24] Rohn, J. (1984). Interval linear systems. Freiburger Intervall-Berichte 84/7, Albert-Ludwigs-Universität, Freiburg.

[25] Rohn, J. (1993). Stability of the optimal basis of a linear program under uncertainty. Operations Research Letters, 13(1), 9–12.

[26] Rohn, J. (1997). Complexity of some linear problems with interval data. Reliable Computing, 3(3), 315–323.

[27] Schrijver, A. (2004). Combinatorial Optimization: Polyhedra and Efficiency, volume 24 of Algorithms and Combinatorics. Berlin: Springer.

[28] Serafini, P. (2005). Linear programming with variable matrix entries. Operations Research Letters, 33(2), 165–170.

[29] Witsenhausen, H. S. (1986). A simple bilinear optimization problem. Systems & Control Letters, 8(1), 1–4.
|
samples_new/texts_merged/6376231.md
ADDED
|
@@ -0,0 +1,621 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Project Choice from a Verifiable Proposal

Yingni Guo

Eran Shmaya*

May 8, 2021

Abstract

An agent observes the set of available projects and proposes some, but not necessarily all, of them. A principal chooses one or none from the proposed set. We solve for a mechanism that minimizes the principal's worst-case regret. If the agent can propose only one project, it is chosen for sure if the principal's payoff exceeds a threshold; otherwise, the probability that it is chosen decreases in the agent's payoff. If the agent can propose multiple projects, his payoff from a multiproject proposal equals the maximal payoff from proposing each project alone. Our results highlight the benefits from randomization and from the ability to propose multiple projects.

JEL: D81, D82, D86

Keywords: verifiable disclosure, evidence, project choice, regret minimization

# 1 Introduction
Project choice is one of the most important functions of an organization. The process often involves two parties: (i) a party at a lower hierarchical level who has expertise and proposes projects, and (ii) a party at a higher hierarchical level who evaluates the proposed projects and makes the choice. This describes the relationship between a division and the headquarters when the division has a chance to choose a factory location or an office building. It also applies to the relationship between a department and the university when the department has a hiring slot open.

*Guo: Department of Economics, Northwestern University; email: yingni.guo@northwestern.edu. Shmaya: Department of Economics, Stony Brook University; email: eran.shmaya@stonybrook.edu. We thank seminar audiences at the One World Mathematical Game Theory Seminar, the Toulouse School of Economics, the University of Bonn, Northwestern University, the University of Pittsburgh, and Carnegie Mellon University for valuable feedback.

This process of project choice is naturally a principal-agent problem. The agent privately observes which projects are available and proposes a subset of the available projects. The principal chooses one from the proposed projects or rejects them all. If the two parties had identical preferences over projects, the agent would propose the project that is their shared favorite among the available ones, and the principal would automatically approve the agent's proposal. In many applications, however, the two parties do not share the same preferences. For instance, the division may fail to internalize each project's externalities on other divisions; the department and the university may put different weights on candidates' research and nonresearch abilities. Armed with the proposal-setting power, the agent has a tendency to propose his favorite project and hide his less preferred ones, even if those projects are "superstars" for the principal. How shall the principal encourage the agent to propose the principal's preferred projects? What is the principal's optimal mechanism for choosing a project?
It is easy to see that no mechanism can guarantee that the principal's favorite project among the available ones will always be chosen. We define the principal's *regret* as the difference between his payoff from his favorite project and his expected payoff from the project chosen under the mechanism. We look for a mechanism that works fairly well for the principal in all circumstances, i.e., a mechanism that minimizes the principal's worst-case regret. This worst-case regret approach to uncertainty can be traced back to Wald (1950) and Savage (1951). It has since been used widely in game theory, mechanism design, and machine learning. A decision-theoretic axiomatization of the minimax regret criterion can be found in Milnor (1954) and Stoye (2011).
Depending on the principal's verification capacity, we distinguish two environments. In the *multiproject* environment, the agent can propose any subset of the available projects. In the *single-project* environment, the agent can propose only one available project. Besides project choice within organizations, the single-project environment also applies to antitrust regulation: a firm chooses a merger from available merger opportunities to propose, and the regulator decides whether to approve or reject the firm's proposal (e.g., Lyons (2003), Neven and Röller (2005), Armstrong and Vickers (2010), Ottaviani and Wickelgren (2011), Nocke and Whinston (2013)).

We take the environment as exogenous and derive the optimal mechanisms in both environments. In the single-project environment, the only way for the principal to incentivize the agent is to reject his proposal with positive probability. The multiproject environment, however, allows the principal to “spend” this rejection probability on other proposed projects. Therefore, even though the principal chooses at most one project, he expects to do better in the multiproject environment than in the single-project one. Comparing the two environments will also allow us to quantify the principal’s gain from higher verification capacity.
We begin with the single-project environment. A mechanism specifies for each proposed single project the probability that it will be approved. In the optimal mechanism, if the proposed project gives the principal a sufficiently high payoff, it is approved for sure. We call such projects *good* projects for the principal. If, on the contrary, the proposed project is *mediocre* for the principal, it is approved only with some probability. The probability that a mediocre project is approved decreases in its payoff to the agent, in order to deter the agent from hiding projects that are more valuable for the principal. This mechanism aligns the incentives of the agent with those of the principal in the following ways. First, if the agent has at least one good project for the principal, he will propose a good project. Second, if all his projects are mediocre for the principal, he will propose the principal's favorite one.

In the multiproject environment, a mechanism specifies for each proposed set of projects a randomization over the proposed projects and “no project.” If the agent proposes only one project, the optimal mechanism takes a form similar to the one in the single-project environment. In particular, if the proposed project is sufficiently good for the principal, it is chosen for sure. Otherwise, the project is chosen with some probability that decreases in its payoff to the agent. If the agent proposes more than one project, the randomization maximizes the principal’s expected payoff, subject to the constraint that the agent is promised the maximal expected payoff he would get from proposing each project alone. Under this mechanism, the more projects the agent proposes, the weakly higher his expected payoff is, so the agent is willing to propose all available projects.
Since the agent gets the maximal expected payoff from proposing each project alone, we call this mechanism the *project-wide maximal-payoff mechanism*. This mechanism implements a compromise between the two parties in the multiproject environment: with some probability the choice favors the agent and with some probability it favors the principal. We also show that randomization is crucial for the principal's minimal worst-case regret to be lower in the multiproject environment than in the single-project one. In other words, if the principal is restricted to deterministic mechanisms, his minimal worst-case regret is the same in both the single-project and multiproject environments.
**Related literature.** Our paper is closely related to Armstrong and Vickers (2010) and Nocke and Whinston (2013), which study the project choice problem using the Bayesian approach. Armstrong and Vickers (2010) characterize the optimal deterministic mechanism in the single-project environment and show through examples that the principal does strictly better if randomization or multiproject proposals are allowed. Nocke and Whinston (2013) focus on mergers (i.e., projects) that are ex ante different and further incorporate the bargaining process among firms. They show that a tougher standard is imposed on mergers involving larger partners. We take the worst-case regret approach to this multidimensional screening problem. This more tractable approach allows us to explore questions that are intractable under the Bayesian approach, including how much the principal benefits from randomization, from higher verification capacity, and from a smaller project domain.
Goel and Hann-Caruthers (2020) consider the project choice problem where the number of available projects is public information. The projects are only partially verifiable, since the agent's only constraint is not to overreport projects' payoffs to the principal. Because their agent cannot hide projects like our agent does, he loses the proposal-setting power. The resulting incentive schemes are thus quite different.

Since in our model the agent can propose only those projects that are available, the agent's proposal is some evidence about his private information. Hence, our paper is closely related to research on verifiable disclosure (e.g., Grossman and Hart (1980), Grossman (1981), Milgrom (1981), Dye (1985)) and, more broadly, the evidence literature (see Dekel (2016) for a survey). We discuss the relation to this literature in more detail after we introduce the model.

Our result relates to a theme in Aghion and Tirole (1997), namely, that the principal has formal authority, but the agent shares real authority due to his private information. We take this theme one step further. Our agent's real authority has two sources: he knows which projects are available, and he determines the proposal from which the principal chooses a project. The idea of striking a compromise is related to Bonatti and Rantakari (2016). They examine the compromise between two symmetric, competing agents whose efforts are crucial for discovering projects. We instead focus on the compromise between an agent who proposes projects and a principal who chooses one or none from the proposed projects.

Finally, our paper contributes to the literature on mechanism design in which the designer minimizes his worst-case regret. Hurwicz and Shapiro (1978) examine a moral hazard problem. Bergemann and Schlag (2008, 2011) examine monopoly pricing. Renou and Schlag (2011) apply the solution concept of $\epsilon$-minimax regret to the problem of implementing social choice correspondences. Beviá and Corchón (2019) examine the contest that minimizes the designer's worst-case regret. Guo and Shmaya (2019) study the optimal mechanism for monopoly regulation, and Malladi (2020) studies the optimal approval rules for innovation. More broadly, we contribute to the growing literature of mechanism design with worst-case objectives. For a survey on robustness in mechanism design, see Carroll (2019).
# 2 Model and mechanism
Let $D$ be the domain of all possible *verifiable projects*. Let $u: D \to \mathbb{R}_+$ be the agent's payoff function, so his payoff is $u(a)$ if project $a$ is chosen. If no project is chosen, the agent's payoff is zero.
The agent's private type $A \subseteq D$ is a finite set of available projects. The agent proposes a set $P$ of projects, and the principal can choose one project from this set. The set $P$ is called the agent's *proposal*. It must satisfy two conditions. First, the agent can propose only available projects. Hence, the agent's proposal must be a subset of his type, $P \subseteq A$. This is what we meant earlier when we said that projects are verifiable. Second, $P \in \mathcal{E}$ for some fixed set $\mathcal{E}$ of subsets of $D$. The set $\mathcal{E}$ captures all the exogenous restrictions on the proposal. For instance, in the setting of antitrust regulation, the agent is restricted to proposing at most one project. In many organizations, the principal has limited verification capacity or limited attention, so the agent can propose at most a certain number of projects.
We begin with two environments which are natural first steps: *single-project* and *multiproject*. In the single-project environment, the agent can propose at most one available project, so $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. In the multiproject environment, the agent can propose any set of available projects, so $\mathcal{E} = 2^D$, the power set of $D$. In subsection 6.1, we discuss the intermediate environments in which the agent can propose up to $k$ projects for some fixed number $k \ge 2$.
The agent's proposal $P$ serves two roles. First, if we view a proposal as a message, then different types have access to different messages. Hence, the agent's proposal is some evidence about his type, as in Green and Laffont (1986). We explore the implication of this evidence role in section 3. Second, the proposal determines the set of projects from which the principal can choose. This second role is a key difference between our paper and the evidence literature. Once the agent puts his proposal on the table, there is no relevant information asymmetry left. This implies that cheap-talk communication will not help. We elaborate on this point in subsection 6.2.
A subprobability measure over $D$ with finite support is given by $\pi: D \to [0, 1]$ such that

$$\text{support}(\pi) = \{a \in D : \pi(a) > 0\}$$

is finite and $\sum_a \pi(a) \le 1$. When we say that a project *is chosen from* a subprobability measure $\pi$ with finite support, we mean that project $a$ is chosen with probability $\pi(a)$, and that no project is chosen with probability $1 - \sum_a \pi(a)$.
The principal's ability to reject all proposed projects (or equivalently, to choose no project) is crucial for him to retain some "bargaining power." If, on the contrary, the principal must choose a project as long as the agent has proposed some, then the agent effectively has all the bargaining power. The agent will propose only his favorite project, which will be chosen for sure.
A *mechanism* $\rho$ attaches to each proposal $P \in \mathcal{E}$ a subprobability measure $\rho(\cdot|P)$ such that $\text{support}(\rho(\cdot|P)) \subseteq P$. The interpretation is that, if the agent proposes $P$, then a project is chosen from the subprobability measure $\rho(\cdot|P)$. Thus, the agent's expected payoff under the mechanism $\rho$ if he proposes $P$ is $U(\rho, P) = \sum_{a \in P} u(a)\rho(a|P)$.
A *choice function* $f$ attaches to each type $A$ of the agent a subprobability measure $f(\cdot|A)$ such that $\text{support}(f(\cdot|A)) \subseteq A$. The interpretation is that, if the set of available projects is $A$, then a project is chosen from the subprobability measure $f(\cdot|A)$.

A choice function $f$ is *implemented* by a mechanism $\rho$ if, for every type $A$ of the agent, there exists a probability measure $\mu$ with support over $\text{argmax}_{P\subseteq A, P\in\mathcal{E}} U(\rho, P)$ such that $f(a|A) = \sum_P \mu(P)\rho(a|P)$. The interpretation is that the agent selects only proposals that give him the highest expected payoff among the proposals that he can make, and that, if the agent has multiple optimal proposals, then he can randomize among them.
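To make these definitions concrete, the following sketch (our own illustration, not part of the paper's formal analysis) encodes a project as a pair $(u, v)$, a mechanism as a map from proposals to subprobability measures, and computes the agent's optimal proposals in the multiproject environment. The mechanism `rho` below is hypothetical.

```python
from itertools import chain, combinations

def powerset(A):
    """All subsets of A: the proposals available in the multiproject environment."""
    s = list(A)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def U(rho, P):
    """Agent's expected payoff U(rho, P) = sum over a in P of u(a) * rho(a|P)."""
    return sum(u * p for (u, v), p in rho(P).items())

def optimal_proposals(rho, A):
    """Proposals of type A that attain the agent's highest expected payoff."""
    best = max(U(rho, P) for P in powerset(A))
    return [P for P in powerset(A) if U(rho, P) == best]

# Hypothetical mechanism: choose the proposed project the principal likes most,
# with probability 1/2 (a subprobability: otherwise nothing is chosen).
def rho(P):
    if not P:
        return {}
    a = max(P, key=lambda prj: prj[1])
    return {a: 0.5}

A = {(1.0, 0.5), (0.2, 1.0)}
print(optimal_proposals(rho, A))   # [frozenset({(1.0, 0.5)})]
```

Note that this toy mechanism is not incentive-compatible: the agent strictly prefers to hide the principal's favorite project $(0.2, 1.0)$, which is exactly the incentive problem the paper addresses.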
# 3 The evidence structure

When the agent proposes a set $P$ of projects, he provides evidence that his type $A$ satisfies $P \subseteq A$. In this section, we discuss the implication of this role of the agent's proposal as well as the relation to the evidence literature.
## 3.1 Normality in the multiproject environment

In our multiproject environment, where $\mathcal{E} = 2^D$, the agent has the ability to provide the maximal evidence for his type. This property is called *normality* in the literature (Lipman and Seppi (1995), Bull and Watson (2007), Ben-Porath, Dekel and Lipman (2019)). Another interpretation of the multiproject environment is to view an agent who proposes a set $P$ as an agent who claims that his type is $P$. The relation that “type $A$ can claim to be type $B$” between types is reflexive and transitive, by the corresponding properties of the inclusion relation between sets. Transitivity is called the nested range condition in Green and Laffont (1986) and is also assumed in Hart, Kremer and Perry (2017).

In our single-project environment, where $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$, normality does not hold. The single-project environment is the main focus in Armstrong and Vickers (2010) and Nocke and Whinston (2013), and is similar to the assumption in Glazer and Rubinstein (2006) and Sher (2014) that the speaker can make one and only one of the statements he has access to.
## 3.2 Revelation principle in the multiproject environment

Consider the multiproject environment $\mathcal{E} = 2^D$. A mechanism $\rho$ is incentive-compatible (IC) if the agent finds it optimal to propose his type $A$ truthfully. That is, $U(\rho, A) \ge U(\rho, P)$ for every finite set $A \subseteq D$ and every subset $P \subseteq A$. Equivalently, a mechanism $\rho$ is IC if and only if $U(\rho, P)$ weakly increases in $P$ with respect to set inclusion. The following proposition states the revelation principle in the multiproject environment.
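The set-inclusion characterization of IC can be checked by enumeration on a finite type. The sketch below is our own illustration; the mechanism `agent_favorite` is a hypothetical example of an IC mechanism, since adding a project can only raise the best available agent payoff.

```python
from itertools import chain, combinations

def subsets(A):
    s = list(A)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_ic(rho, A):
    """IC on type A: U(rho, P) weakly increases in P under set inclusion.

    It suffices to compare each proposal P with its one-project extensions."""
    def U(P):
        return sum(u * p for (u, v), p in rho(P).items())
    return all(U(P) <= U(P | {a}) + 1e-12
               for P in subsets(A) for a in A - P)

# Hypothetical IC mechanism: choose the agent's favorite proposed project.
def agent_favorite(P):
    return {max(P, key=lambda prj: prj[0]): 1.0} if P else {}

A = frozenset({(1.0, 0.5), (0.2, 1.0), (0.6, 0.8)})
print(is_ic(agent_favorite, A))   # True
```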
**Proposition 3.1.** *Assume $\mathcal{E} = 2^D$. If a choice function $f$ is implemented by some mechanism, then the mechanism $f$ is IC and implements the choice function $f$.*
As we explained in subsection 3.1, the multiproject environment satisfies normality and the nested range condition. Previous papers (e.g., Green and Laffont (1986), Bull and Watson (2007)) have shown that the revelation principle holds under these assumptions. Our proposition 3.1 does not follow directly from their theorems, however, because the agent's proposal $P$ serves two roles in our model. In addition to providing evidence, the proposal also determines the set of projects from which the principal can choose. Nonetheless, a similar argument for the revelation principle can be made within our model as well.
*Proof of Proposition 3.1.* Assume that the mechanism $\rho$ implements the choice function $f$. Then for every finite set $A \subseteq D$ and every subset $P \subseteq A$, we have:

$$U(f, A) = \max_{Q \subseteq A} U(\rho, Q) \ge \max_{Q \subseteq P} U(\rho, Q) = U(f, P),$$

where the inequality follows from the fact that $Q \subseteq P$ implies $Q \subseteq A$, and the two equalities follow from the fact that $\rho$ implements $f$. Hence, the mechanism $f$ is IC. Also, by definition, if the mechanism $f$ is IC, then it implements the choice function $f$. $\square$

Since an implementable choice function is itself an IC mechanism and vice versa, we will use both terms interchangeably whenever we discuss the multiproject environment.
# 4 The principal's problem

Let $v: D \rightarrow \mathbb{R}_{+}$ be the principal's payoff function, so his payoff is $v(a)$ if project $a$ is chosen. If no project is chosen, the principal's payoff is zero.
The principal's *regret* from a choice function $f$ when the set of available projects is $A$ is:

$$ \mathrm{RGRT}(f, A) = \max_{a \in A} v(a) - \sum_{a \in A} v(a)f(a|A). $$

The regret is the difference between what the principal could have achieved if he knew the set $A$ of available projects and what he actually achieves. Savage (1951) calls this difference *loss*. We instead call it regret, following the more recent game theory and computer science literature. Wald (1950) and Savage (1972) propose to consider only *admissible* choice functions (i.e., choice functions that are not weakly dominated). A choice function $f$ is *admissible* if there exists no other $f'$ such that the principal's regret is weakly higher under $f$ than under $f'$ for every type of the agent and strictly higher for some type. For the rest of the paper, we focus on admissible choice functions.
The worst-case regret (WCR) from a choice function $f$ is:

$$ \text{WCR}(f) = \sup_{A \subseteq D, |A| < \infty} \text{RGRT}(f, A), $$

where the supremum ranges over all possible types of the agent (i.e., all possible finite sets of available projects). The principal's problem is to minimize WCR($f$) over all implementable choice functions $f$. This step is our only departure from the Bayesian approach. The Bayesian approach would instead assign a prior belief over the number and the characteristics of the available projects. The principal's problem, then, would be to minimize the *expected* regret instead of the *worst-case* regret.
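When the project domain is finite, both quantities can be computed by brute-force enumeration. The sketch below is our own illustration with a hypothetical three-project domain and a hypothetical choice function.

```python
from itertools import chain, combinations

def regret(f, A):
    """RGRT(f, A) = max_{a in A} v(a) - sum_a v(a) f(a|A)."""
    best = max(v for (u, v) in A)
    achieved = sum(v * p for (u, v), p in f(A).items())
    return best - achieved

def worst_case_regret(f, D):
    """WCR(f): maximum regret over all nonempty types A drawn from D."""
    types = chain.from_iterable(combinations(D, r) for r in range(1, len(D) + 1))
    return max(regret(f, frozenset(A)) for A in types)

# Hypothetical choice function: always pick the agent's favorite available project.
def agents_favorite(A):
    return {max(A, key=lambda prj: prj[0]): 1.0}

D = [(1.0, 0.5), (0.2, 1.0), (0.6, 0.8)]
print(worst_case_regret(agents_favorite, D))   # 0.5
```

The worst case 0.5 is attained when the agent's favorite $(1.0, 0.5)$ crowds out the principal's favorite $(0.2, 1.0)$.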
Note that, while our principal takes the worst-case regret approach to uncertainty about the agent’s type, he calculates the expected payoff with respect to his own objective randomization. The same assumption is made by Savage (1972) when he discusses the use of randomized acts under the worst-case regret approach (Savage, 1972, Chapter 9.3). A similar assumption is made in the ambiguity aversion literature. For example, in Gilboa and Schmeidler (1989), the decision maker calculates his expected payoff with respect to random outcomes (i.e., “roulette lotteries”) but evaluates acts using the maxmin approach with non-unique priors. If we make the alternative assumption that the principal takes the worst-case regret approach even towards his own randomization, we effectively restrict the principal to deterministic mechanisms.
From now on, we assume that the set $D$ of all possible verifiable projects is $[\underline{u}, 1] \times [\underline{v}, 1]$ for some parameters $\underline{u}, \underline{v} \in [0, 1]$, and that the functions $u(\cdot)$ and $v(\cdot)$ are projections over the first and second coordinates. Abusing notation, we denote a project $a \in D$ also by $a = (u, v)$, where $u$ and $v$ are the agent’s and the principal’s payoffs, respectively, if project $a$ is chosen.
The parameters $\underline{u}$ and $\underline{v}$ quantify the uncertainty faced by the principal: the higher they are, the smaller the uncertainty. They also measure players’ preference intensity over projects. As $\underline{u}$ increases, the agent’s preferences over projects become less strong, so it becomes easier to align the incentives of the agent with those of the principal. As $\underline{v}$ increases, the principal’s preferences over projects become less strong, so the agent’s tendency to propose his own favorite project becomes less costly for the principal.

# 5 Optimal mechanisms
## 5.1 Preliminary intuition

We now use an example to illustrate the fundamental trade-off faced by the principal, as well as the intuition behind the optimal mechanisms. We first explain how randomization helps to reduce the WCR in the single-project environment. We then explain how the multiproject environment can further reduce the WCR. For this illustration, we assume that $\underline{v} = 0$, so $D = [\underline{u}, 1] \times [0, 1]$.
Figure 1: Preliminary intuition, $\underline{v} = 0$
Consider the single-project environment and assume first that the principal is restricted to deterministic mechanisms. In this case, a mechanism is a set of projects that the principal approves for sure, and all other projects are rejected outright. For each such mechanism, the principal has two fears. First, if the agent has multiple projects which will be approved, then he will propose what he likes the most, even if projects are available that are more valuable to the principal. Second, if the agent has only projects which will be rejected, then the principal loses the payoff from these projects. Applied to the project $\bar{a} = (1, 1/2)$, these two fears imply that no matter how the principal designs the deterministic mechanism, his WCR is at least 1/2. As shown in figure 1, this project $\bar{a}$ gives the agent his highest payoff 1, while giving the principal only a moderate payoff 1/2. If the mechanism approves $\bar{a}$ and the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$, then the agent will propose $\bar{a}$ rather than $(\underline{u}, 1)$, so the principal suffers regret 1/2. If the mechanism rejects $\bar{a}$ but $\bar{a}$ is the only available project, then the principal also suffers regret 1/2. Thus, the WCR under any deterministic mechanism is at least 1/2. On the other hand, the deterministic mechanism that approves project $(u, v)$ if and only if $v \ge 1/2$ achieves the WCR of 1/2, so it is optimal among all the deterministic mechanisms.
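The arithmetic behind this 1/2 lower bound is simple; the sketch below (our own illustration, with a hypothetical value for $\underline{u}$) spells out the two scenarios.

```python
a_bar = (1.0, 0.5)      # agent's payoff 1, principal's payoff 1/2
u_low = 0.3             # a hypothetical value of the lower bound on u
star = (u_low, 1.0)     # the principal's favorite possible project

# If the deterministic mechanism approves a_bar: with type {a_bar, star},
# the agent proposes a_bar (payoff 1 > u_low), so the principal's regret is
regret_if_approve = star[1] - a_bar[1]      # 1 - 1/2

# If the mechanism rejects a_bar and a_bar is the only available project:
regret_if_reject = a_bar[1] - 0.0           # 1/2 - 0

# Either way, some type yields regret 1/2, so the WCR is at least 1/2.
print(regret_if_approve, regret_if_reject)  # 0.5 0.5
```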
We now explain how randomization can reduce the WCR in the single-project environment. We first note that, if $\underline{u} = 0$, then, even with randomized mechanisms, the principal cannot reduce his WCR below 1/2. This is because the only way to incentivize the agent to propose the project $(\underline{u}, 1) = (0, 1)$ when the set of available projects is $\{\bar{a}, (0, 1)\}$ is still to reject the project $\bar{a}$ outright if $\bar{a}$ is proposed. However, if $\underline{u} > 0$, then the principal can do better. He can approve the project $\bar{a}$ with probability $\underline{u}$, while still maintaining the agent's incentive to propose the principal's preferred project $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$. We carry out this idea in Theorem 5.1 in subsection 5.2.
Let us now consider the multiproject environment. We again begin with deterministic mechanisms. Under deterministic mechanisms, more choice functions can be implemented in the multiproject environment than in the single-project one.¹ However, when restricted to deterministic mechanisms, the principal has the same minimal WCR in the multiproject environment as in the single-project one. This is because, if the principal wants to choose $(\underline{u}, 1)$ when the set of available projects is $\{\bar{a}, (\underline{u}, 1)\}$, then the only way to incentivize the agent to include $(\underline{u}, 1)$ in his proposal is to reject the project $\bar{a}$ when $\bar{a}$ is proposed alone.

We now explain how randomization can help in the multiproject environment, even when $\underline{u} = 0$. While a deterministic mechanism must pick either $\bar{a}$ or $(0, 1)$ or nothing when the agent proposes $\{\bar{a}, (0, 1)\}$, a randomized mechanism can reach a compromise by choosing each project with probability 1/2. On the other hand, if the agent proposes only $\bar{a}$, the principal chooses $\bar{a}$ with probability 1/2, so the agent of type $\{\bar{a}, (0, 1)\}$ is willing to propose $\{\bar{a}, (0, 1)\}$ instead of just $\bar{a}$. The regret is 1/4 both when the agent's type is $\{\bar{a}, (0, 1)\}$ and when his type is $\{\bar{a}\}$. We carry out this idea of reaching a compromise in Theorem 5.2 in subsection 5.3. Specifically, when the agent proposes $P$, the principal gives the agent the maximal payoff he can offer, subject to the constraint that he can give the agent this same payoff if the agent proposes $P \cup \{(\underline{u}, 1)\}$ and can still keep his regret under control.

¹For example, the principal can implement the choice function that chooses (i) the agent’s favorite project, if there are at least two available projects, and (ii) nothing, if there is at most one available project.
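The regret-1/4 compromise can be verified directly; the following is our own arithmetic sketch for the case $\underline{u} = 0$.

```python
a_bar, star = (1.0, 0.5), (0.0, 1.0)

# Proposal {a_bar, star}: each project is chosen with probability 1/2.
# Proposal {a_bar} alone: a_bar is chosen with probability 1/2.

# Incentives: the agent weakly prefers the full proposal, so he does not hide star.
u_full = 0.5 * a_bar[0] + 0.5 * star[0]     # 0.5
u_hide = 0.5 * a_bar[0]                     # 0.5

# Regret at type {a_bar, star}: best payoff 1 vs expected 1/2 * 1/2 + 1/2 * 1.
regret_full = max(a_bar[1], star[1]) - (0.5 * a_bar[1] + 0.5 * star[1])

# Regret at type {a_bar}: best payoff 1/2 vs expected 1/2 * 1/2.
regret_single = a_bar[1] - 0.5 * a_bar[1]

print(u_full == u_hide, regret_full, regret_single)   # True 0.25 0.25
```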
## 5.2 Optimal mechanism in the single-project environment

Since the agent can propose at most one project, a mechanism specifies the approval probability for each proposed project. Instead of using our previous notation $\rho(a|\{a\})$, we let $\alpha(u, v) \in [0, 1]$ denote the approval probability if the agent proposes the project $(u, v)$.
**Theorem 5.1 (Single-project environment).** Assume $\mathcal{E} = \{P \subseteq D : |P| \le 1\}$. Let

$$R^s = \max_{v \in [\underline{v}, 1]} \min\left((1-\underline{u})v,\, 1-v\right) = \min\left(\frac{1-\underline{u}}{2-\underline{u}},\, 1-\underline{v}\right).$$
1. The WCR under any mechanism is at least $R^s$.

2. The mechanism $\alpha^s$ is given by:

$$\alpha^s(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^s \text{ or } u = 0, \\ \frac{\underline{u}}{u}, & \text{if } v < 1 - R^s \text{ and } u > 0. \end{cases}$$

It implements a choice function that has the WCR of $R^s$ and is admissible.
3. If a mechanism $\alpha$ implements a choice function that has the WCR of $R^s$, then $\alpha(u, v) \le \alpha^s(u, v)$ for every $(u, v) \in D$.
The mechanism $\alpha^s$ consists of an *automatic-approval* region and a *chance* region. If the proposed project is sufficiently good for the principal (i.e., $v \ge 1 - R^s$), then it is automatically approved. If the project is mediocre for the principal (i.e., $v < 1 - R^s$), then the approval probability equals $\underline{u}/u$, so the agent expects a payoff $\underline{u}$ from proposing a mediocre project.
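The mechanism $\alpha^s$ is easy to state in code. The sketch below (our own transcription of the theorem's formulas, using the closed form $R^s = \min\bigl((1-\underline{u})/(2-\underline{u}),\, 1-\underline{v}\bigr)$) evaluates the approval probability; the numerical inputs are hypothetical.

```python
def R_s(u_low, v_low):
    """Minimal worst-case regret R^s in the single-project environment."""
    return min((1 - u_low) / (2 - u_low), 1 - v_low)

def alpha_s(u, v, u_low, v_low):
    """Approval probability alpha^s(u, v) for a proposed project (u, v)."""
    if v >= 1 - R_s(u_low, v_low) or u == 0:
        return 1.0           # automatic-approval region: good projects
    return u_low / u         # chance region: the agent's expected payoff is u_low

# u_low = v_low = 0 gives R^s = 1/2, so the approval threshold is v = 1/2:
print(alpha_s(1.0, 0.75, 0.0, 0.0))   # 1.0: good project, approved for sure
# u_low = 0.2: a mediocre project (u, v) is approved with probability u_low / u:
print(alpha_s(0.5, 0.25, 0.2, 0.0))   # 0.4
```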
The agent will propose a project in the automatic-approval region if he has at least one such project. If all his projects are in the chance region, he will propose a project that gives the principal the highest payoff. The principal still suffers regret from two sources. First, if the agent has multiple projects that will be automatically approved, he will propose what he favors instead of what the principal favors. Second, if the agent has only projects in the chance region, his proposal is rejected with positive probability. The threshold for the automatic-approval region, $1 - R^s$, is chosen to keep the regret from both sources under control.
The approval probability $\alpha^s(u, v)$ increases in $v$ (the principal's payoff) and decreases in $u$ (the agent's payoff). This monotonicity in $v$ and $u$ is natural. In particular, the principal is less likely to approve projects that give the agent high payoffs in order to deter the agent from hiding projects that give the principal high payoffs. It is interesting to compare our optimal mechanism $\alpha^s$ in the single-project environment to that in Armstrong and Vickers (2010). They characterize the optimal deterministic mechanism in a Bayesian setting. Under the assumptions that (i) projects are i.i.d. and (ii) the number of available projects is independent of their characteristics, they show that the optimal deterministic mechanism $\alpha(u, v)$ increases in $v$: a project $(u, v)$ is approved if and only if $v \ge r(u)$ for some function $r(u)$. They also characterize the optimal $r(u)$ explicitly. Their argument can be generalized to show that the optimal randomized mechanism $\alpha(u, v)$ also increases in $v$, but it is not clear how to solve for the optimal $\alpha(u, v)$. It is an open problem under which assumptions on the prior belief the optimal randomized mechanism $\alpha(u, v)$ in the Bayesian setting decreases in $u$.

The typical situation under the worst-case regret approach to uncertainty is that multiple mechanisms can achieve the minimal WCR. Assertion 3 in Theorem 5.1 says that the mechanism $\alpha^s$ is uniformly more generous in approving the agent's proposal than any other mechanism that can have the WCR of $R^s$. This assertion has two implications. First, among all mechanisms that can have the WCR of $R^s$, the mechanism $\alpha^s$ is the agent's most preferred one. Second, compared to any mechanism that can have the WCR of $R^s$, the mechanism $\alpha^s$ gives the principal a higher payoff (or equivalently, a lower regret) for every singleton $A$ and a strictly higher payoff for some singleton $A$.

## 5.3 Optimal mechanism in the multiproject environment

We now present the optimal mechanism in the multiproject environment. Let $\alpha : [\underline{u}, 1] \times [\underline{v}, 1] \rightarrow [0, 1]$ be a function and consider the following *project-wide maximal-payoff mechanism* (PMP mechanism) induced by the function $\alpha$:

1. If the proposal $P$ includes only one project $(u, v)$, it is approved with probability $\alpha(u, v)$.

2. If the proposal $P$ includes multiple projects, the mechanism randomizes over the proposed projects and no project to maximize the principal's expected payoff, while promising the agent an expected payoff of $\max_{(u,v)\in P} \alpha(u, v)u$. This is the maximal expected payoff the agent could get from proposing each project alone.

By the definition of a PMP mechanism, the more projects the agent proposes, the weakly higher his expected payoff will be. The agent is therefore willing to propose his type truthfully. In other words, PMP mechanisms are IC. Note that for a mechanism to be IC, the agent's payoff from a multiproject proposal must be at least his payoff from proposing each project alone. A PMP mechanism has the feature that the agent is promised exactly the maximal payoff from proposing each project alone, but not more.

Our next theorem shows that there exists an optimal PMP mechanism.

**Theorem 5.2** (Multiproject environment). Assume $\mathcal{E} = 2^D$. For every $u \in [\underline{u}, 1]$ and $p \in [0, 1]$, let $\gamma(p, u)$ be

$$ \gamma(p, u) = \min\{q \in [0, 1] : qu + (1-q)\underline{u} \ge pu\}. \quad (1) $$

Let

$$ R^m = \max_{(u,v) \in D} \min_{p \in [0,1]} \max(v(1-p), (1-v)\gamma(p,u)). \quad (2) $$

1. The WCR under any mechanism is at least $R^m$.

2. Let $\rho^m$ be the PMP mechanism induced by

$$ \alpha^m(u, v) = \max\{p \in [0, 1] : (1-v)\gamma(p, u) \le R^m\}. \quad (3) $$

It has the WCR of $R^m$ and is admissible.

3. If $\rho$ is an IC, admissible mechanism which has the WCR of $R^m$, then $U(\rho, A) \le U(\rho^m, A)$ for every type $A$.

The explicit expressions for $R^m$ and $\alpha^m(u, v)$ are presented at the end of this subsection.

It follows from (1) and (3) that $\alpha^m(u, v) = 1$ if $v \ge 1 - R^m$ and $\alpha^m(u, v) < 1$ otherwise. As in the single-project environment, when the agent proposes only one project, the project is approved for sure if its payoff to the principal is sufficiently high and approved with some probability otherwise. For this reason, we still call $v \ge 1 - R^m$ and $v < 1 - R^m$ the automatic-approval and the chance regions, respectively. Figure 2 depicts these two regions.

When the agent proposes more than one project, the principal promises the agent an expected payoff of $\max_{(u,v) \in P} \alpha^m(u, v)u$. In both panels of figure 2, each dotted curve connects all the projects that induce the same value of $\alpha^m(u, v)u$, so it can be interpreted as an “indifference curve” for the agent. For a project in the automatic-approval region, the principal is willing to compensate the agent his full payoff. In contrast, for a project in the chance region, the principal is willing to compensate the agent only a discounted payoff. The lower the project’s payoff to the principal, the more severe the discounting. Hence, indifference curves are vertical in the automatic-approval region and tilt counterclockwise as the principal’s payoff $v$ further decreases. The agent’s expected payoff is determined by the project (among those proposed) that is on the highest indifference curve.

Figure 2: Reaching a compromise when agent's favorite project is in chance region, $\underline{u} = \underline{v} = 0$

Under the optimal mechanism $\rho^m$, if the agent's favorite project is in the automatic-approval region, then this project will be chosen for sure. In this case, there is no benefit to either party from proposing other available projects. The left panel of figure 2 gives such an example: ★ and ▲ denote the available projects and ▲ will be chosen for sure. In contrast, if the agent's favorite project is in the chance region, the benefit to the principal from the agent's proposing multiple projects can be significant. The right panel of figure 2 illustrates such an example. Instead of rejecting ▲ with positive probability, the mechanism randomizes between ▲ and ★ while promising the agent the same payoff he would get from proposing ▲ alone. In such cases, the optimal mechanism imposes a compromise between the two parties: sometimes the choice favors the agent, and at other times it favors the principal.

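
The compromise in the right panel can be reproduced numerically. The sketch below is ours, not the paper's; it takes $\underline{u} = \underline{v} = 0$ as in figure 2, so that $R^m = 1/4$ and $\alpha^m(u, v) = \min(1, R^m/(1-v))$ for $u > 0$, and the two projects ▲ and ★ are arbitrary illustrative points. It solves problem (6) for the two-project proposal by a one-dimensional scan:

```python
R_m = 0.25  # R^m when the lower bounds are u_lo = v_lo = 0

def alpha_m(u, v):
    # singleton approval probability in the u_lo = v_lo = 0 case
    return 1.0 if u == 0 else min(1.0, R_m / (1.0 - v))

tri = (0.95, 0.20)   # agent's favorite: chance region (triangle)
star = (0.20, 0.90)  # principal's favorite: automatic-approval region (star)
P = [tri, star]

promised = max(alpha_m(u, v) * u for u, v in P)  # the agent's promised payoff

# problem (6): max pi_t*v_t + pi_s*v_s  s.t.  pi >= 0,
# pi_t + pi_s <= 1,  pi_t*u_t + pi_s*u_s = promised
best = (-1.0, 0.0, 0.0)
for k in range(100001):
    pi_t = k / 100000
    pi_s = (promised - pi_t * tri[0]) / star[0]
    if pi_s < 0 or pi_t + pi_s > 1 + 1e-9:
        continue
    val = pi_t * tri[1] + pi_s * star[1]
    if val > best[0]:
        best = (val, pi_t, pi_s)

principal_payoff, pi_tri, pi_star = best
assert principal_payoff >= max(v for _, v in P) - R_m  # regret at most R^m
```

On these numbers the mechanism puts roughly 0.87 probability on ★ and 0.13 on ▲: the agent still receives his stand-alone payoff of about 0.297, while the principal's expected payoff rises to about 0.81 instead of the 0.06 he would get from the singleton proposal of ▲.
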
Lastly, the explicit expressions for $R^m$ and $\alpha^m$ are given by:

$$R^m = \begin{cases} \frac{(1-\underline{u})(2-\underline{u}-2\sqrt{1-\underline{u}})}{\underline{u}^2}, & \text{if } \underline{v} < \frac{1-\sqrt{1-\underline{u}}}{\underline{u}}, \\ \frac{(1-\underline{u})(1-\underline{v})\underline{v}}{1-\underline{u}\underline{v}}, & \text{otherwise,} \end{cases}$$

and

$$\alpha^m(u, v) = \begin{cases} 1, & \text{if } v \ge 1 - R^m \text{ or } u = 0, \\ \left(1 - \frac{R^m}{1-v}\right) \frac{\underline{u}}{u} + \frac{R^m}{1-v}, & \text{if } v < 1 - R^m \text{ and } u > 0. \end{cases}$$

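
The closed form for $R^m$ can be checked against the definition (2) directly. The sketch below is a verification of ours, with arbitrary test parameters: it evaluates (2) by grid search over $(u, v, p)$, using the closed form of $\gamma$ implied by (1), and compares the result with the displayed expression up to the grid resolution:

```python
import math

def gamma(p, u, u_lo):
    # eq. (1): the smallest q with q*u + (1-q)*u_lo >= p*u
    if u <= u_lo or p * u <= u_lo:
        return 0.0
    return (p * u - u_lo) / (u - u_lo)

def R_m_grid(u_lo, v_lo, n=40, m=160):
    # eq. (2): max over (u,v) in D of min over p of max(v(1-p), (1-v)*gamma(p,u))
    best = 0.0
    for i in range(n + 1):
        u = u_lo + (1 - u_lo) * i / n
        for j in range(n + 1):
            v = v_lo + (1 - v_lo) * j / n
            inner = min(max(v * (1 - k / m), (1 - v) * gamma(k / m, u, u_lo))
                        for k in range(m + 1))
            best = max(best, inner)
    return best

def R_m_closed(u_lo, v_lo):
    # the displayed closed form for R^m
    if v_lo < (1 - math.sqrt(1 - u_lo)) / u_lo:
        return (1 - u_lo) * (2 - u_lo - 2 * math.sqrt(1 - u_lo)) / u_lo ** 2
    return (1 - u_lo) * (1 - v_lo) * v_lo / (1 - u_lo * v_lo)

# both branches of the closed form, compared against the grid search
for u_lo, v_lo in [(0.5, 0.2), (0.5, 0.8), (0.3, 0.5)]:
    assert abs(R_m_grid(u_lo, v_lo) - R_m_closed(u_lo, v_lo)) < 0.02
```

The tolerance reflects the grid: the inner minimum is taken over a finite set of $p$ values, so the grid value can overshoot the exact one by at most roughly $1/m$.
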
## 5.4 Comparing the WCR under two environments

Figure 3 compares the WCR under the single-project and the multiproject environments. The left panel depicts the WCR as a function of $\underline{u}$ for a fixed $\underline{v}$. The right panel depicts the WCR as a function of $\underline{v}$ for a fixed $\underline{u}$. Roughly speaking, the principal's gain from having the multiproject environment as compared to the single-project environment, measured by $R^s - R^m$, is larger when $\underline{u}$ or $\underline{v}$ is smaller (i.e., when the principal faces more uncertainty or when players can potentially have strong preferences over projects).

Figure 3: WCR: single-project (dashed curve) vs. multiproject (solid curve)

# 6 Discussion

## 6.1 Intermediate verification capacity

We have focused on the single-project and the multiproject environments, which are natural first steps for us to study. Nonetheless, there are intermediate environments in which the principal can verify up to $k$ projects for some fixed $k \ge 2$, so $\mathcal{E} = \{P \subseteq D : |P| \le k\}$. We call this the $k$-project environment.

**Proposition 6.1** (Two are enough). For any $k \ge 2$, the PMP mechanism induced by $\alpha^m(u, v)$ is optimal in the $k$-project environment. The WCR under this mechanism is $R^m$.

*Proof.* Let $A$ be the set of available projects. Let $(u_p, v_p) \in \arg\max\{v : (u, v) \in A\}$ and $(u_a, v_a) \in \arg\max\{\alpha^m(u, v)u : (u, v) \in A\}$. Let $P = \{(u_p, v_p), (u_a, v_a)\}$. Then under the PMP mechanism induced by $\alpha^m(u, v)$, the agent is willing to propose $P$ since this proposal gives him $\alpha^m(u_a, v_a)u_a$, the maximal payoff he can get under the mechanism. The principal's payoff given the proposal $P$ equals his payoff if the set of available projects were actually $P$. By Theorem 5.2 this payoff is at least $v_p - R^m$, so the principal's regret is at most $R^m$. $\square$

Proposition 6.1 shows that having the full benefit of compromise does not require infinite or high verification capacity. A capacity of only two projects is sufficient. Furthermore, even if the principal can verify up to ten projects, it suffices to let the agent propose up to two, which provides a parsimonious way to get the full benefit of compromise.

## 6.2 Cheap-talk communication does not help for any $\mathcal{E}$

We could have started from a more general definition of a mechanism that chooses a project based on both the proposal $P$ and a cheap-talk message $m$ from the agent, as in Bull and Watson (2007) and Ben-Porath, Dekel and Lipman (2019). However, in our model cheap talk does not benefit the principal. This is because the principal can choose a project only from the proposed set $P$ and he knows the payoffs that each project in $P$ gives to both parties. Hence, no information asymmetry remains after the agent proposes $P$, and so there is no benefit to cheap talk.

More specifically, for any proposal $P$ and any cheap-talk messages $m_1, m_2$, we argue that it is without loss for the principal to choose the same subprobability measure over $P$ after $(P, m_1)$ and after $(P, m_2)$. Suppose otherwise that the principal chooses a subprobability measure $\pi_1$ after $(P, m_1)$ and chooses $\pi_2$ after $(P, m_2)$. If the agent strictly prefers $\pi_1$ to $\pi_2$, then he can profitably deviate to $(P, m_1)$ whenever he is supposed to say $(P, m_2)$. Hence, $(P, m_2)$ never occurs on the equilibrium path. If the agent is indifferent between $\pi_1$ and $\pi_2$, then the principal can pick his preferred measure between $\pi_1$ and $\pi_2$ after both $(P, m_1)$ and $(P, m_2)$, without affecting the agent's incentives. This argument does not depend on the exogenous restriction $\mathcal{E}$ on the agent's proposal $P$, so cheap-talk communication does not help for any $\mathcal{E}$.

## 6.3 The commitment assumption

Commitment is crucial for the principal to have some “bargaining power” in the project choice problem. If the principal has no commitment power, sequential rationality requires that he choose his favorite project among the proposed one(s). The agent then has all the bargaining power: he will propose only his favorite project, which will be chosen for sure.

In the multiproject environment, the full-commitment solution involves two types of ex post suboptimality. First, no project is chosen even though the agent has proposed some. Second, a worse project for the principal is chosen even though a better project for him is also proposed. Some applications may fall between the full-commitment and the no-commitment settings: the principal can commit to choosing no project but cannot commit to choosing a worse project when a better project is also proposed. In such a partial-commitment setting, a multiproject proposal is effectively a single-project proposal containing only the principal's favorite project among the proposed ones. The optimal mechanism in this partial-commitment setting is then the same as that in the single-project environment characterized in Theorem 5.1.

# 7 Proofs

## 7.1 Proof of Theorem 5.1

**Claim 7.1.** *The WCR from any mechanism is at least $R^s$.*

*Proof.* Let $v \in [\underline{v}, 1]$. If $\alpha(1, v) > \underline{u}$, then, if the agent has two projects $(1, v)$ and $(\underline{u}, 1)$, the agent will propose $(1, v)$ and the regret will be $1 - \alpha(1, v)v \ge 1 - v$. If $\alpha(1, v) \le \underline{u}$, then, if the agent has only the project $(1, v)$, the regret is $v - \alpha(1, v)v \ge v(1 - \underline{u})$. Therefore, WCR $\ge \min((1 - \underline{u})v, 1 - v)$ for every $v \in [\underline{v}, 1]$. $\square$

**Claim 7.2.** *The WCR from $\alpha^s$ is $R^s$.*

*Proof.* We call a project $(u, v)$ good if $v \ge 1 - R^s$ and mediocre if $v < 1 - R^s$. From the definition of $R^s$ it follows that $(1 - \underline{u})v \le R^s$ for every mediocre project.

According to $\alpha^s$, if the agent proposes a mediocre project, then his expected payoff is $\underline{u}$; if the agent proposes a good project $(u, v)$, then his expected payoff is $u \ge \underline{u}$. Therefore, if the agent has some good project, he will propose a good project $(u, v)$ and the regret is at most $1 - v \le R^s$. If all projects are mediocre, then the agent will propose the project $(u, v)$ with the highest $v$, so the regret is at most $(1 - \alpha^s(u, v))v = (1 - \underline{u}/u)v \le (1 - \underline{u})v \le R^s$. $\square$

**Claim 7.3.** If $\alpha$ has the WCR of $R^s$, then $\alpha(u,v) \le \alpha^s(u,v)$ for every $(u,v) \in D$. Hence, $\alpha^s$ is admissible.

*Proof.* Fix a project $(u,v)$. If $v \ge 1 - R^s$ or $u=0$, then $\alpha^s(u,v)=1$ and therefore $\alpha(u,v) \le \alpha^s(u,v)$. If $v < 1 - R^s$ and $u > 0$, then since the WCR under $\alpha$ is $R^s$, it must be the case that if $A = \{(u,v), (\underline{u},1)\}$, then the agent proposes the project $(\underline{u},1)$. Otherwise, the regret is at least $1 - v > R^s$. Therefore $\alpha(u,v)u \le \alpha(\underline{u},1)\underline{u} \le \underline{u}$, which implies $\alpha(u,v) \le \underline{u}/u = \alpha^s(u,v)$, as desired.

Finally, if $\alpha$ has the WCR of $R^s$ and $\alpha \ne \alpha^s$, then there exists $(u,v) \in D$ such that $\alpha(u,v) < \alpha^s(u,v)$. The regret is strictly higher under $\alpha$ than under $\alpha^s$ if $A = \{(u,v)\}$, so $\alpha^s$ is admissible. $\square$

## 7.2 Proof of Theorem 5.2

Let $a^* = (\underline{u}, 1)$. Let $\bar{U}(P)$ be the optimal value of the following linear program with variables $\pi(u, v)$ for every $(u, v) \in P$:

$$ \bar{U}(P) = \max_{\pi} \ \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \quad (4a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (4b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (4c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)(1-v) \le R^m. \quad (4d) $$

The following claim explains the role of $\bar{U}(P)$ in our argument: $\bar{U}(P)$ is the maximal payoff that the principal can give the agent for the proposal $P$ such that the principal can give the agent this same payoff if the agent proposed $P \cup \{a^*\}$, while still keeping the regret at most $R^m$.

**Claim 7.4.** If $\rho$ is an IC mechanism which has the WCR of at most $R^m$, then $U(\rho, P) \le \bar{U}(P)$ for every proposal $P$.

*Proof.* Let $\tilde{P} = P \cup \{a^*\}$. Let $\pi = \rho(\cdot|\tilde{P})$. Since the regret under the mechanism $\rho$ when the set of available projects is $\tilde{P}$ is at most $R^m$, it follows that $\sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m$. Therefore the restriction of $\pi$ to the set $P$ is a feasible point in problem (4). Moreover,

$$ U(\rho, \tilde{P}) = \pi(a^*)\underline{u} + \sum_{(u,v) \in P} \pi(u,v)u \le \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}), \quad (5) $$

where the inequality follows from $\pi(a^*) + \sum_{(u,v) \in P} \pi(u,v) \le 1$. The right-hand side of (5) is the objective function of (4) at $\pi$. Therefore, $U(\rho, \tilde{P}) \le \bar{U}(P)$. Finally, since the mechanism $\rho$ is IC, it follows that $U(\rho, P) \le U(\rho, \tilde{P})$. Therefore, $U(\rho, P) \le \bar{U}(P)$, as desired. $\square$

When $P$ is a singleton $\{(u,v)\}$, we also denote $\bar{U}(\{(u,v)\})$ by $\bar{U}(u,v)$. The following claim, which follows immediately from (1) and (3), explains the role of the function $\alpha^m(u, v)$ in our argument.

**Claim 7.5.** When $P$ is a singleton $\{(u, v)\}$, $\overline{U}(u, v) = \alpha^m(u, v)u$.

For a proposal $P$, let $\underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u, v)u$. The following claim explains the role of $\underline{U}(P)$ in our argument.

**Claim 7.6.** If $\rho$ is an IC mechanism that accepts the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$, then $U(\rho, P) \ge \underline{U}(P)$.

*Proof.* Since $\rho$ is IC, we have that $U(\rho, P) \ge U(\rho, \{(u, v)\}) = \alpha^m(u, v)u$ for every $(u, v) \in P$. $\square$

Claim 7.4 bounds from above the agent's expected payoff in an IC mechanism which has the WCR of at most $R^m$. Claim 7.6 bounds from below the agent's expected payoff in an IC mechanism which approves the singleton proposal $\{(u, v)\}$ with probability $\alpha^m(u, v)$. The following claim shows that the definition of $R^m$ is such that both bounds can be satisfied.

**Claim 7.7.** $\underline{U}(P) \le \overline{U}(P)$ for every $P$.

*Proof.* The function $\overline{U}(P)$ defined in (4) is increasing in $P$. Therefore, from Claim 7.5 we have:

$$ \alpha^m(u, v)u = \overline{U}(u, v) \le \overline{U}(P), \quad \forall (u, v) \in P. $$

It follows that:

$$ \underline{U}(P) = \max_{(u,v) \in P} \alpha^m(u,v)u \le \overline{U}(P). \qquad \square $$

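
Claims 7.5 and 7.7 can be spot-checked numerically. The sketch below is ours, not the paper's; the parameters $\underline{u} = 0.5$, $\underline{v} = 0.2$ and the random proposals are arbitrary. It solves the linear program (4) exactly by enumerating its basic feasible solutions (every vertex of (4) has at most two non-zero coordinates, since there are only two constraints besides non-negativity) and verifies $\underline{U}(P) \le \overline{U}(P)$:

```python
import math, random
from itertools import combinations

u_lo, v_lo = 0.5, 0.2  # assumed parameter values (arbitrary)
R_m = (1 - u_lo) * (2 - u_lo - 2 * math.sqrt(1 - u_lo)) / u_lo ** 2

def alpha_m(u, v):
    # explicit expression for alpha^m(u, v)
    if v >= 1 - R_m or u == 0:
        return 1.0
    t = R_m / (1 - v)
    return (1 - t) * u_lo / u + t

def U_bar(P):
    # LP (4): max u_lo + sum pi*(u - u_lo)  s.t.  pi >= 0,
    # sum pi <= 1, sum pi*(1-v) <= R_m; optimum at a vertex with <= 2 nonzeros
    cands = [{}]
    for i, (u, v) in enumerate(P):
        cands.append({i: min(1.0, R_m / (1 - v))})  # one coordinate at its cap
    for i, j in combinations(range(len(P)), 2):
        a, b = 1 - P[i][1], 1 - P[j][1]
        if abs(a - b) > 1e-12:
            pj = (R_m - a) / (b - a)  # both (4c) and (4d) tight
            if 0 <= pj <= 1:
                cands.append({i: 1 - pj, j: pj})
    return max(u_lo + sum(p * (P[i][0] - u_lo) for i, p in c.items())
               for c in cands)

random.seed(0)
for _ in range(200):
    P = [(random.uniform(u_lo, 1), random.uniform(v_lo, 0.99))
         for _ in range(random.randint(1, 4))]
    for u, v in P:  # Claim 7.5: U_bar on a singleton equals alpha^m(u,v)*u
        assert abs(U_bar([(u, v)]) - alpha_m(u, v) * u) < 1e-9
    U_low = max(alpha_m(u, v) * u for u, v in P)
    assert U_low <= U_bar(P) + 1e-9  # Claim 7.7
```
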
By definition, the mechanism $\rho^m$ solves the following linear program:

$$ \rho^m(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (6a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (6b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (6c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u = \underline{U}(P). \quad (6d) $$

It is possible that (6) has multiple optimal solutions. Since all the optimal solutions are payoff-equivalent for both the principal and the agent, we do not distinguish among them. From now on, the notation $\rho(\cdot|P) \neq \rho^m(\cdot|P)$ means that $\rho(\cdot|P)$ is not among the optimal solutions to (6).

The following lemma is the core of the argument. It gives an equivalent characterization of the mechanism $\rho^m$.

**Lemma 7.8.** The optimal solutions to (6) and those to the following problem coincide. Hence, $\rho^m(\cdot|P)$ is also given by the solution to the following problem:

$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (7a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (7b) $$

$$ \sum_{(u,v) \in P} \pi(u, v) \le 1, \quad (7c) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u \ge \underline{U}(P), \quad (7d) $$

$$ \sum_{(u,v) \in P} \pi(u, v)u \le \overline{U}(P). \quad (7e) $$

*Proof of Lemma 7.8.* We discuss two cases separately.

Case 1. Assume that there exists some $(u, v) \in P$ such that $v \ge 1 - R^m$. Consider the following linear program, which is a relaxation of both problems (6) and (7):

$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (8a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (8b) $$

$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (8c) $$

$$ \sum_{(u,v) \in P} \pi(u,v) u \ge \underline{U}(P). \quad (8d) $$

We claim that the constraint (8d) holds with equality at every optimal solution. Indeed, if (8d) is not binding, then an optimal solution to (8) is also an optimal solution to the following linear program:

$$ \rho(\cdot|P) \in \arg\max_{\pi} \sum_{(u,v) \in P} \pi(u,v)v \quad (9a) $$

$$ \text{s.t.} \quad \pi(u, v) \ge 0, \ \forall (u, v) \in P, \quad (9b) $$

$$ \sum_{(u,v) \in P} \pi(u,v) \le 1, \quad (9c) $$

which is derived from (8) by removing (8d). Let $v_p = \max_{(u,v) \in P} v$ and $u_p = \max\{u : (u, v_p) \in P\}$. By the definition of $\alpha^m$ in (3), $\alpha^m(u_p, v_p) = 1$ given that $v_p \ge 1 - R^m$. Every optimal solution $\pi^*$ to problem (9) satisfies $\text{support}(\pi^*) \subseteq \arg\max_{(u,v) \in P} v$, which implies that

$$ \sum_{(u,v) \in P} \pi^*(u,v)u \le u_p = \alpha^m(u_p, v_p)u_p \le \underline{U}(P). $$

This implies that every optimal solution to (8) satisfies (8d) with equality, so it is a feasible point in both (6) and (7). Since problem (8) is a relaxation of both problems (6) and (7), the optimal values of (6), (7), and (8) coincide. Hence, every optimal solution to (6) or (7) is optimal in (8). This, combined with the fact that every optimal solution to (8) is optimal in (6) and (7), implies that the optimal solutions to (6) and (7) coincide.

Case 2. Assume now that $v < 1 - R^m$ for every $(u, v) \in P$. We claim that $\underline{U}(P) = \overline{U}(P)$ and therefore problems (6) and (7) coincide. Given that $v < 1 - R^m$ for every $(u, v) \in P$, the constraint (4c) in problem (4) must be slack, since if it held with equality then (4d) would be violated. Therefore, in this case $\overline{U}(P)$ also satisfies

$$
\begin{aligned}
\bar{U}(P) = \max_{\pi} \quad & \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) \\
\text{s.t.} \quad & \pi(u, v) \ge 0, \ \forall (u, v) \in P, \\
& \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m,
\end{aligned}
\tag{10}
$$

which is derived from problem (4) by removing (4c). Problem (10) admits a solution $\pi^*$ with the property that, for some $(u^*, v^*) \in P$, the only non-zero element of $\pi^*$ is $\pi^*(u^*, v^*)$: the linear objective attains its maximum at an extreme point of the feasible set of (10), and every extreme point has at most one non-zero coordinate. Therefore, by Claim 7.5,

$$ \bar{U}(P) = \bar{U}(u^*, v^*) = \alpha^m(u^*, v^*)u^* \le \underline{U}(P). $$

Therefore, by Claim 7.7 we get $\bar{U}(P) = \underline{U}(P)$, as desired. $\square$

We now show that, when the set of available projects is a singleton, the regret under the mechanism $\rho^m$ is at most $R^m$.

**Claim 7.9.** For every singleton $A = \{(u, v)\}$, the regret under $\rho^m$ is at most $R^m$.

*Proof.* In this case, $\rho^m$ accepts with probability $\alpha^m(u, v)$, so the regret is $v(1 - \alpha^m(u, v))$. By the definition of $R^m$, there exists some $\bar{p} \in [0, 1]$ such that $\max(v(1-\bar{p}), (1-v)\gamma(\bar{p}, u)) \le R^m$. By (3), $\bar{p} \le \alpha^m(u, v)$. Therefore, it also follows that $v(1 - \alpha^m(u, v)) \le v(1 - \bar{p}) \le R^m$. $\square$

**Claim 7.10.** The optimal value in problem (7) is at least $\max_{(u,v)\in P} v - R^m$.

*Proof.* Since the constraints (7d) and (7e) cannot both be binding, it is sufficient to prove that the optimal value in the two problems derived from (7) by removing either (7d) or (7e) is at least $v_p - R^m$, where $v_p = \max_{(u,v)\in P} v$. Let $(u_p, v_p) \in P$ denote the principal's favorite project.

If we remove (7d), let $\pi$ be given by $\pi(u_p, v_p) = \alpha^m(u_p, v_p)$ and $\pi(u, v) = 0$ when $(u, v) \ne (u_p, v_p)$. Then $\sum_{(u,v)\in P} \pi(u,v)u = \alpha^m(u_p, v_p)u_p \le \underline{U}(P) \le \overline{U}(P)$, so (7e) is satisfied. Also $v_p(1 - \alpha^m(u_p, v_p)) \le R^m$ by Claim 7.9, which implies that the value of the objective function in (7) at $\pi$ is at least $v_p - R^m$, as desired.

If we remove (7e), let $\pi$ be the optimal solution to (4) and let $\pi'$ be the probability distribution over $P$ such that $\pi'(u, v) = \pi(u, v)$ when $(u, v) \ne (u_p, v_p)$ and $\pi'(u_p, v_p) = 1 - \sum_{(u,v)\in P\setminus\{(u_p,v_p)\}} \pi(u,v)$, so $\pi'$ is derived from $\pi$ by allocating the probability of choosing no project to $(u_p, v_p)$. Then

$$ \sum_{(u,v) \in P} \pi'(u,v)u = u_p + \sum_{(u,v) \in P} \pi(u,v)(u-u_p) \ge \underline{u} + \sum_{(u,v) \in P} \pi(u,v)(u-\underline{u}) = \overline{U}(P) \ge \underline{U}(P), $$

where the last equality follows from the fact that $\pi$ is optimal in (4). Therefore, $\pi'$ satisfies (7d). Also

$$ \sum_{(u,v) \in P} \pi'(u,v)(v_p - v) = \sum_{(u,v) \in P} \pi(u,v)(v_p - v) \le \sum_{(u,v) \in P} \pi(u,v)(1-v) \le R^m, $$

where the last inequality follows from (4d). Since $\pi'$ is a probability distribution, this gives $\sum_{(u,v) \in P} \pi'(u,v)v \ge v_p - R^m$, as desired. $\square$

*Proof of Theorem 5.2.*

1. Fix $(u, v) \in D$ and let $P = \{(u, v)\}$ and $\tilde{P} = \{(u, v), (\underline{u}, 1)\}$.

Let $p$ be the probability that $\rho$ accepts $(u, v)$ when the proposal is $P$. So, $RGRT(P, \rho) = (1-p)v$. Since the mechanism is IC, the agent's expected payoff under $\tilde{P}$ must be at least $pu$. By the definition of $\gamma(p, u)$, this implies that when the proposal is $\tilde{P}$ the mechanism accepts $(u, v)$ with probability at least $\gamma(p, u)$. So, $RGRT(\tilde{P}, \rho) \ge (1-v)\gamma(p, u)$. Therefore $WCR(\rho) \ge \max((1-p)v, (1-v)\gamma(p, u)) \ge \min_{p' \in [0,1]} \max((1-p')v, (1-v)\gamma(p', u))$. Taking the maximum over $(u, v) \in D$ yields $WCR(\rho) \ge R^m$.

2. The mechanism $\rho^m$ is IC, and it solves problem (7) by Lemma 7.8. By Claim 7.10, the optimal value in problem (7) is at least $\max_{(u,v) \in P} v - R^m$. Since the objective function in (7) is the principal's payoff under $\pi$, the principal's regret is at most $R^m$.

We next argue that $\rho^m$ is admissible. Let $\rho$ be an IC mechanism which has the WCR of $R^m$ and let $\alpha(u, v)$ be the probability that $\rho$ accepts a singleton proposal $\{(u, v)\}$. Then, $\rho^m$ is not weakly dominated by $\rho$ based on the following two claims:

(a) If the agent's type $A$ is a singleton $\{(u,v)\}$, then $\alpha(u,v) \le \alpha^m(u,v)$ by Claims 7.4 and 7.5. Hence, the principal's payoff is weakly higher under $\rho^m$ than under $\rho$ for singleton $A$.

(b) Suppose that $\alpha(u,v) = \alpha^m(u,v)$ for every $(u,v)$. Fix a proposal $P$ and let $\pi = \rho(\cdot|P)$, so $U(\rho,P) = \sum_{(u,v) \in P} \pi(u,v)u$. Then, since $\rho$ is IC, it follows from Claim 7.6 that $U(\rho,P) \ge \underline{U}(P)$, and, from Claim 7.4, that $U(\rho,P) \le \overline{U}(P)$. Therefore $\pi$ is a feasible point in problem (7). Since $\rho^m(\cdot|P)$ is the optimal solution to (7), the principal's payoff is weakly higher under $\rho^m$ than under $\rho$.

3. Let $\rho$ be an IC, admissible mechanism which has the WCR of $R^m$ and which differs from $\rho^m$. We want to show that $U(\rho,P) \le U(\rho^m,P)$ for every finite $P \subseteq D$. Recall that $U(\rho^m,P) = \underline{U}(P)$ for every $P$.

We first construct a new mechanism $\tilde{\rho}$ based on $\rho$ and $\rho^m$:

$$ \tilde{\rho}(\cdot|P) = \begin{cases} \rho^m(\cdot|P), & \text{if } U(\rho, P) \ge \underline{U}(P), \\ \rho(\cdot|P), & \text{if } U(\rho, P) < \underline{U}(P). \end{cases} $$

By definition, $U(\tilde{\rho}, P) = \min(U(\rho, P), U(\rho^m, P))$. The functions $U(\rho, P)$ and $U(\rho^m, P)$ are increasing in $P$ since $\rho$ and $\rho^m$ are IC. Therefore $U(\tilde{\rho}, P)$ is increasing in $P$, so $\tilde{\rho}$ is also IC. Moreover, for every $P$ either $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$ or $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$. Therefore the WCR under $\tilde{\rho}$ is also $R^m$.

|
| 529 |
+
|
| 530 |
+
We next argue that for every $P$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does.
|
| 531 |
+
|
| 532 |
+
(a) Consider a set $P$ such that $U(\rho, P) < \underline{U}(P)$. Then $\tilde{\rho}(\cdot|P) = \rho(\cdot|P)$, so $\tilde{\rho}$ gives the principal the same payoff as $\rho$ does.
|
| 533 |
+
|
| 534 |
+
(b) Consider a set $P$ such that $U(\rho, P) \ge \underline{U}(P)$. From Claim 7.4 we know that $U(P, \rho) \le \overline{U}(P)$ for every $P$. Therefore, $\rho(\cdot|P)$ is a feasible point in problem (7). It follows from Lemma 7.8 that $\rho^m$ gives the principal a weakly higher payoff than $\rho$ does. Moreover, if $\rho(\cdot|P) \ne \rho^m(\cdot|P)$, then $\rho^m$ gives the principal a strictly higher payoff than $\rho$ does.
|
| 535 |
+
|
| 536 |
+
Since $\tilde{\rho}(\cdot|P) = \rho^m(\cdot|P)$ for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every such $P$.
We have argued that $\tilde{\rho}$ gives the principal a weakly higher payoff than $\rho$ does for every $P$. On the other hand, $\rho$ is admissible, so there cannot be a $P$ such that $\tilde{\rho}$ gives the principal a strictly higher payoff than $\rho$ does. This implies that for every $P$ such that $U(\rho, P) \ge \underline{U}(P)$, $\rho(\cdot|P) = \rho^m(\cdot|P)$, so $U(\rho, P)$ is equal to $\underline{U}(P)$. Hence, for every $P$, $U(\rho, P) \le \underline{U}(P)$.
---PAGE_BREAK---

---PAGE_BREAK---
References

Aghion, Philippe, and Jean Tirole. 1997. "Formal and Real Authority in Organizations." *Journal of Political Economy*, 105(1): 1–29.

Armstrong, Mark, and John Vickers. 2010. "A Model of Delegated Project Choice." *Econometrica*, 78(1): 213–244.

Ben-Porath, Elchanan, Eddie Dekel, and Barton L. Lipman. 2019. "Mechanisms with Evidence: Commitment and Robustness." *Econometrica*, 87(2): 529–566.

Bergemann, Dirk, and Karl H. Schlag. 2008. "Pricing without Priors." *Journal of the European Economic Association*, 6(2/3): 560–569.

Bergemann, Dirk, and Karl Schlag. 2011. "Robust Monopoly Pricing." *Journal of Economic Theory*, 146(6): 2527–2543.

Beviá, Carmen, and Luis Corchón. 2019. "Contests with Dominant Strategies." *Economic Theory*.

Bonatti, Alessandro, and Heikki Rantakari. 2016. "The Politics of Compromise." *American Economic Review*, 106(2): 229–259.

Bull, Jesse, and Joel Watson. 2007. "Hard Evidence and Mechanism Design." *Games and Economic Behavior*, 58(1): 75–93.

Carroll, Gabriel. 2015. "Robustness and Linear Contracts." *American Economic Review*, 105(2): 536–563.

Carroll, Gabriel. 2019. "Robustness in Mechanism Design and Contracting." *Annual Review of Economics*, 11(1): 139–166.

---PAGE_BREAK---

Chassang, Sylvain. 2013. "Calibrated Incentive Contracts." *Econometrica*, 81(5): 1935–1971.

Dekel, Eddie. 2016. "On Evidence in Games and Mechanism Design." Econometric Society Presidential Address.

Dye, Ronald A. 1985. "Strategic Accounting Choice and the Effects of Alternative Financial Reporting Requirements." *Journal of Accounting Research*, 23(2): 544–574.

Gilboa, Itzhak, and David Schmeidler. 1989. "Maxmin Expected Utility with Non-unique Prior." *Journal of Mathematical Economics*, 18(2): 141–153.

Glazer, Jacob, and Ariel Rubinstein. 2006. "A Study in the Pragmatics of Persuasion: A Game Theoretical Approach." *Theoretical Economics*, 1: 395–410.

Goel, Sumit, and Wade Hann-Caruthers. 2020. "Project Selection with Partially Verifiable Information."

Green, Jerry R., and Jean-Jacques Laffont. 1986. "Partially Verifiable Information and Mechanism Design." *The Review of Economic Studies*, 53(3): 447–456.

Grossman, Sanford J. 1981. "The Informational Role of Warranties and Private Disclosure about Product Quality." *The Journal of Law and Economics*, 24(3): 461–483.

Grossman, S. J., and O. D. Hart. 1980. "Disclosure Laws and Takeover Bids." *The Journal of Finance*, 35(2): 323–334.

Guo, Yingni, and Eran Shmaya. 2019. "Robust Monopoly Regulation." Working paper.

Hart, Sergiu, Ilan Kremer, and Motty Perry. 2017. "Evidence Games: Truth and Commitment." *American Economic Review*, 107(3): 690–713.

---PAGE_BREAK---

Hurwicz, Leonid, and Leonard Shapiro. 1978. "Incentive Structures Maximizing Residual Gain under Incomplete Information." *The Bell Journal of Economics*, 9(1): 180–191.

Kasberger, Bernhard, and Karl H. Schlag. 2020. "Robust Bidding in First-Price Auctions: How to Bid without Knowing What Others Are Doing." Available at SSRN 3044438.

Lipman, Barton L., and Duane J. Seppi. 1995. "Robust Inference in Communication Games with Partial Provability." *Journal of Economic Theory*, 66(2): 370–405.

Lyons, Bruce R. 2003. "Could Politicians Be More Right Than Economists? A Theory of Merger Standards." Working paper.

Malladi, Suraj. 2020. "Judged in Hindsight: Regulatory Incentives in Approving Innovations." Available at SSRN.

Milgrom, Paul R. 1981. "Good News and Bad News: Representation Theorems and Applications." *The Bell Journal of Economics*, 12(2): 380–391.

Milnor, John. 1954. "Games against Nature." In *Decision Processes*, ed. R. M. Thrall, C. H. Coombs, and R. L. Davis.

Neven, Damien J., and Lars-Hendrik Röller. 2005. "Consumer Surplus vs. Welfare Standard in a Political Economy Model of Merger Control." *International Journal of Industrial Organization*, 23(9): 829–848. Merger Control in International Markets.

Nocke, Volker, and Michael D. Whinston. 2013. "Merger Policy with Merger Choice." *American Economic Review*, 103(2): 1006–1033.

Ottaviani, Marco, and Abraham L. Wickelgren. 2011. "Ex Ante or Ex Post Competition Policy? A Progress Report." *International Journal of Industrial Organization*, 29(3): 356–359. Special Issue: Selected Papers, European Association for Research in Industrial Economics 37th Annual Conference, Istanbul, Turkey, September 2–4, 2010.

---PAGE_BREAK---

Renou, Ludovic, and Karl H. Schlag. 2011. "Implementation in Minimax Regret Equilibrium." *Games and Economic Behavior*, 71(2): 527–533.

Savage, L. J. 1951. "The Theory of Statistical Decision." *Journal of the American Statistical Association*, 46(253): 55–67.

Savage, Leonard J. 1972. *The Foundations of Statistics*. Courier Corporation.

Sher, Itai. 2014. "Persuasion and Dynamic Communication." *Theoretical Economics*, 9(1): 99–136.

Stoye, Jörg. 2011. "Axioms for Minimax Regret Choice Correspondences." *Journal of Economic Theory*, 146(6): 2226–2251.

Wald, Abraham. 1950. *Statistical Decision Functions*.
samples_new/texts_merged/6535016.md
ADDED
The diff for this file is too large to render.
See raw diff
samples_new/texts_merged/6697438.md
ADDED
@@ -0,0 +1,416 @@
---PAGE_BREAK---
# Parallel Continuation-Based Global Optimization for Molecular Conformation and Protein Folding*
Thomas F. Coleman† and Zhijun Wu‡
**Abstract.** This paper presents our recent work on developing parallel algorithms and software for solving the global minimization problem for molecular conformation, especially protein folding. Global minimization problems are difficult to solve when the objective functions have many local minimizers, such as the energy functions for protein folding. In our approach, to avoid directly minimizing a "difficult" function, a special integral transformation is introduced to transform the function into a class of gradually deformed, but "smoother" or "easier" functions. An optimization procedure is then applied to the new functions successively, to trace their solutions back to the original function. The method can be applied to a large class of nonlinear partially separable functions including energy functions for molecular conformation and protein folding. Mathematical theory for the method, as a special continuation approach to global optimization, is established. Algorithms with different solution tracing strategies are developed. Different levels of parallelism are exploited for the implementation of the algorithms on massively parallel architectures.
**Abbreviated title:** Parallel Continuation-Based Global Optimization
**Key words:** global/local minimization, numerical continuation, parallel computation, protein folding
**AMS (MOS) subject classification:** 49M37, 65Y05, 68Q22, 92-08
*To be presented at Supercomputing '94, November, 1994, Washington D.C.
†Department of Computer Science and Center for Applied Mathematics, Cornell University, Ithaca, NY 14853.
‡Advanced Computing Research Institute, Cornell University, Ithaca, NY 14853.
---PAGE_BREAK---
# 1 Motivation
We are developing massively parallel algorithms and software for molecular conformation, especially protein folding. This paper reports on our recent progress.
The prediction of protein native structures and the understanding of how they fold from the sequences of their constituent amino acids are among the most important and challenging computational science problems of the decade. The protein folding problem is fundamental to almost all theoretical studies of proteins and protein-related life processes. It also has many applications in the biotechnology industry, such as structure-based drug design for the treatment of important diseases like polio, cancer, and AIDS.
Optimization approaches to the protein folding problem are based on the hypothesis that the protein native structure corresponds to the global minimum of the protein energy. The problem can be attacked computationally by minimizing the protein energy over all possible protein structures. The structure with the lowest energy is presumed to be the most stable protein structure.
Mathematically, for a protein molecule of $n$ atoms, let $x = \{x_i \in \mathbb{R}^3, i = 1, \dots, n\}$ represent the molecular structure with each $x_i$ specifying the spatial position of atom $i$. Then the computational problem for protein folding is to globally minimize a nonlinear function $f(x)$ for all $x \in S$, i.e.,
$$ \min_{x \in S} f(x) \qquad (1) $$
where $S$ is the set of all possible molecular structures, and $f(x)$ is the energy function for the protein defined for all $x$.
The difficulty with this approach is that global optimization problems are computationally intractable in general, and especially difficult to solve when problem sizes are large and objective functions contain many local minimizers. For protein folding, the problem sizes tend to be very large with possibly thousands of variables, and the objective functions usually have exponentially many local minimizers. Therefore, to solve the optimization problems for protein folding, special algorithms must be developed which exploit the problem structure. In addition, parallel high performance computing is also essential for the solutions to be computationally feasible.
Our work focuses on establishing a new continuation-based approach to global optimization; we develop efficient parallel algorithms and software specifically for molecular conformation and protein folding.
---PAGE_BREAK---
# 2 The basic approach
The idea behind our approach is the following. To avoid directly minimizing a "difficult" objective function, a smoothing technique is introduced to transform the function into a class of gradually deformed, but "smoother" or "easier" functions. An optimization procedure is then applied to the new functions successively, to trace their solutions back to the original function.
To obtain our smoothing transformation, a parametrized integral transformation is introduced, transforming a given function into a class of new functions corresponding to a set of parameter values. A transformed function is in some sense a coarse approximation to the original function. After applying the transform, the original function becomes smoother, with small and narrow minimizers removed while the overall structure of the function is maintained. This allows a solution tracing procedure to skip less interesting local minimizers and concentrate on regions with low average function values, where a global minimizer is most likely to be located.
Different methods can be employed to trace the solutions. For example, a simple method is to apply a random search procedure to the transformed functions successively to locate their low local minimizers. Another possible method is to apply local optimization procedures to each transformed function and trace a set of local minimizers.
Our approach is called continuation-based, because the transformation can actually be viewed as a special continuation process by the theory described in [7]. Following this theory, our new approach can be studied in a general numerical continuation setting, and algorithms can be developed by employing standard advanced numerical methods. We will discuss these issues later in this paper.
# 3 Transformation
We first introduce the transformation.
**Definition 1** Given a nonlinear function $f$, the transformation $\langle f \rangle_\lambda$ for $f$ is defined such that for all $x$,
$$ \langle f \rangle_{\lambda}(x) = C_{\lambda} \int f(x') e^{-\|x-x'\|^2/\lambda^2} dx', \quad (2) $$
or equivalently,
$$ \langle f \rangle_{\lambda}(x) = C_{\lambda} \int f(x-x') e^{-\|x'\|^2/\lambda^2} dx', \quad (3) $$
---PAGE_BREAK---
where $\lambda$ is a positive number and $C_\lambda$ is a normalization constant such that
$$ C_{\lambda} \int e^{-\|x\|^2 / \lambda^2} dx = 1. \quad (4) $$
To understand this transformation, recall that given a function $g(x')$ of a random variable $x'$ with probability density function $p(x')$, the expectation of $g$ with respect to $p$ is
$$ \langle g \rangle_p = \int g(x') p(x') dx'. \quad (5) $$
In light of (5), the defined transformation (2) yields a function value for $\langle f \rangle_\lambda$ at any $x$ equal to the expectation for $f$ sampled by a Gaussian distribution function centered at $x$.
For example, consider the following nonlinear function:
$$ f(x) = (x - 1)^2 + 0.1 \sin(20(x - 1)) \quad (6) $$
which is a quadratic function augmented with a “noise” function. The transformation for this function can be computed:
$$ \langle f \rangle_{\lambda}(x) = (x-1)^2 + \frac{\lambda^2}{2} + 0.1e^{-(20\lambda)^2/4} \sin(20(x-1)). \quad (7) $$
The function value $\langle f \rangle_\lambda(x)$ for fixed $x$ is equal to the integration with respect to the product of two functions, the original function $f(x')$ and the Gaussian distribution function $p(x') = C_\lambda e^{-\|x-x'\|^2/\lambda^2}$ (Figure 1 (a)), where $\lambda$ determines the size of the dominant region of the Gaussian. Since the most significant part of the integration is that within the dominant region of the Gaussian, $\langle f \rangle_\lambda(x)$ can be viewed as the average value for the original function $f$ within a small $\lambda$-neighborhood around $x$. If $\lambda$ is equal to zero the transformed function is exactly the original function. Otherwise, original function variations in small regions are averaged out, and the transformed function will become “smoother” (Figure 1 (b)).
Figure 2 shows how the function $\langle f \rangle_\lambda$ in (7) behaves with increasing $\lambda$. Observe that when $\lambda = 0.0$ the function is the original function; when we increase $\lambda$ to 0.1, the function becomes “smoother”; when $\lambda$ is increased further to 0.2, the function becomes entirely “smooth”. As we will show in the following sections, what we observe here is a general property of the transformation, i.e., for any function $f$, the larger $\lambda$ is, the “smoother” the transformed function becomes.
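This behavior is easy to reproduce. The sketch below (ours, not the paper's; the grid size and test points are arbitrary choices) evaluates the transform (2) for the example (6) by direct one-dimensional quadrature and checks it against the closed form (7):

```python
import math

def f(x):
    # Original "noisy" quadratic, equation (6).
    return (x - 1.0)**2 + 0.1*math.sin(20.0*(x - 1.0))

def f_smooth_quad(x, lam, half_width=6.0, n=4001):
    # <f>_lambda(x) by direct quadrature of equation (2); the Gaussian
    # kernel is normalized numerically, so the constant C_lambda cancels.
    num = den = 0.0
    for k in range(n):
        s = -half_width*lam + 2.0*half_width*lam*k/(n - 1)
        w = math.exp(-(s/lam)**2)
        num += w*f(x - s)
        den += w
    return num/den

def f_smooth_closed(x, lam):
    # Closed form, equation (7).
    t = x - 1.0
    return t*t + lam*lam/2.0 + 0.1*math.exp(-(20.0*lam)**2/4.0)*math.sin(20.0*t)

err = max(abs(f_smooth_quad(x, 0.1) - f_smooth_closed(x, 0.1))
          for x in (0.0, 0.5, 1.0, 1.7))
print(err)  # agreement up to quadrature accuracy
```

The two agree to quadrature accuracy, confirming both the $\lambda^2/2$ shift and the $e^{-(20\lambda)^2/4}$ damping of the oscillatory term.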
---PAGE_BREAK---
Figure 1: A 1-dimensional transformation example.
---PAGE_BREAK---
Figure 2: A class of gradually deformed functions.
---PAGE_BREAK---
# 4 Smoothness
Let $\hat{f}$ be the Fourier transformation for function $f$, and $\langle \tilde{f} \rangle_\lambda$ the Fourier transformation for function $\langle f \rangle_\lambda$. Recall that the transformation $\langle f \rangle_\lambda$ for $f$ is just a convolution of $f$ and $p$, where $p$ is the Gaussian distribution function
$$p(x) = C_\lambda e^{-\|x\|^2/\lambda^2}. \quad (8)$$
Therefore the Fourier transformation for $\langle f \rangle_\lambda$ is equal to the product of the Fourier transformations for $f$ and $p$. The Fourier transformation for the Gaussian distribution function is
$$\hat{p}(\omega) = e^{-\frac{\lambda^2 \|\omega\|^2}{4}}. \quad (9)$$
So, we have
$$\langle \tilde{f} \rangle_{\lambda}(\omega) = e^{-\frac{\lambda^2 \|\omega\|^2}{4}} \hat{f}(\omega). \quad (10)$$
We see from (10) that if $\lambda \to 0$, $\langle \tilde{f} \rangle_\lambda$ converges to $\hat{f}$, and $\langle f \rangle_\lambda$ converges to $f$.
Also by (10), for fixed $\lambda$, if $\omega$ is large $\langle \tilde{f} \rangle_\lambda(\omega)$ will be very small. This implies that high frequency components of the original function become very small after the transformation. This is why the transformed function is “smoother”. In addition, for larger $\lambda$ values, wider ranges of high frequency components of the original function practically vanish after the transformation. Therefore, the transformed function becomes increasingly smooth as $\lambda$ increases. We state these properties formally in the following theorem.
**Theorem 1** Let $f$, $\hat{f}$, $\langle f \rangle_\lambda$ and $\langle \tilde{f} \rangle_\lambda$ all be given and well defined. Then $\forall \varepsilon > 0$, $\exists \delta > 1/\lambda$ for fixed $\lambda$, such that $\forall \omega$ with $\|\omega\| > \delta$,
$$\frac{|\langle \tilde{f} \rangle_{\lambda}(\omega)|}{|\hat{f}(\omega)|} < \varepsilon. \quad (11)$$
*Proof:* See [7]. □
From this theorem we learn that the relative size of $\langle \tilde{f} \rangle_\lambda(\omega)$ can be made arbitrarily small for all $\omega$ with $\|\omega\|$ greater than a threshold $\delta$. Since $\delta$ is inversely proportional to $\lambda$, a wider range of high frequency components is removed when $\lambda$ is large.
---PAGE_BREAK---
# 5 Numerical Properties
The definition of the transformation (2) involves high dimensional integration which cannot be computed in general (except perhaps by the Monte Carlo method which is not appropriate for our purposes because it is too expensive). So the transformation may not be applicable to arbitrary functions, at least numerically. However, this transformation does apply to a large class of nonlinear partially separable functions, and especially to typical molecular conformation and protein folding energy functions.
Consider a large class of nonlinear partially separable functions, called generalized multilinear functions,
$$f = \sum_i \prod_j g_j^i, \quad (12)$$
where $g_j^i$'s are one dimensional nonlinear functions. It is easy to verify that
$$\langle f \rangle_{\lambda} = \sum_{i} \prod_{j} \langle g_{j}^{i} \rangle_{\lambda}. \qquad (13)$$
Since the transformation $\langle g_j^i \rangle_\lambda$, for all $i$ and $j$, involves only one dimensional integration, the transformation for a generalized multilinear function can be numerically computed.
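As an illustration (our sketch, not code from the paper), the identity (13) can be checked numerically for a single product term $g_1(x)\,g_2(y) = x^2 \sin(y)$, whose one dimensional transforms have closed forms, $\langle x^2 \rangle_\lambda = x^2 + \lambda^2/2$ and $\langle \sin \rangle_\lambda(y) = e^{-\lambda^2/4}\sin(y)$:

```python
import math

def smooth_1d(g, x, lam, half_width=6.0, n=4001):
    # One-dimensional Gaussian transform <g>_lambda(x) by quadrature,
    # with the kernel normalized numerically.
    num = den = 0.0
    for k in range(n):
        s = -half_width*lam + 2.0*half_width*lam*k/(n - 1)
        w = math.exp(-(s/lam)**2)
        num += w*g(x - s)
        den += w
    return num/den

lam = 0.3
# f(x, y) = x^2 * sin(y) is one "term" of a generalized multilinear
# function; by (13) its transform is the product of the 1-D transforms.
x, y = 0.7, 1.2
val = smooth_1d(lambda t: t*t, x, lam) * smooth_1d(math.sin, y, lam)
exact = (x*x + lam*lam/2.0) * math.exp(-lam*lam/4.0) * math.sin(y)
print(abs(val - exact))
```

The point of (13) is exactly this reduction: an expensive multidimensional integral becomes a product (and sum) of cheap one dimensional ones.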
In particular, let us consider a typical $n$-atom molecular conformation energy function,
$$f(x) = \sum_{i=1, j=1}^{n} h_{ij}(\|x_i - x_j\|) \quad (14)$$
where $x = \{x_i \in \mathbb{R}^3, i = 1, \dots, n\}$ and $h_{ij}$ is the pairwise energy function determined by $\|x_i - x_j\|$, the distance between atoms $i$ and $j$. Because of the partial separability of this type of function, the transformation for $f$ is equal to the sum of the transformations for the pairwise functions $h_{ij}$. However the computation for the pairwise transformation still cannot be conducted directly, because there is still more than one variable. Nevertheless, the following theorem provides a feasible way to compute the molecular energy transformation:
**Theorem 2** Let $f$ be defined as in (14). Then the transformation (2) for $f$ can be computed using the formula
$$\langle f \rangle_{\lambda}(x) = \sum_{i=1, j=1}^{n} \langle h_{ij} \rangle_{\sqrt{2}\lambda} (\|r_{ij}\|) \quad (15)$$
---PAGE_BREAK---
where $r_{ij} = x_i - x_j$ and
$$ \langle h_{ij} \rangle_{\sqrt{2}\lambda} (\|r_{ij}\|) = C_{\sqrt{2}\lambda} \int h_{ij}(\|r'_{ij}\|) e^{-\|r_{ij}-r'_{ij}\|^2 / 2\lambda^2} dr'_{ij}. \quad (16) $$
**Proof:** See [7]. □
Note that $\langle h_{ij} \rangle_{\sqrt{2}\lambda} (\|r_{ij}\|)$ can be computed with a standard numerical integration technique; therefore, the transformation $\langle f \rangle_\lambda (x)$ can be computed in this fashion.
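For radial pair functions, the three dimensional integral in (16) collapses to a one dimensional radial integral (a standard reduction for Gaussian kernels; the formula below is our own derivation, not quoted from the paper). A sketch with two analytic sanity checks:

```python
import math

def pair_transform(h, r, lam, n=4001):
    # <h>_{sqrt(2) lam}(r) for a radial pair function h, via the radial
    # reduction of the 3-D Gaussian integral in (16):
    #   <h>(r) = 1/(r sqrt(2 pi) lam) * Int_0^inf s h(s)
    #            [exp(-(r-s)^2/(2 lam^2)) - exp(-(r+s)^2/(2 lam^2))] ds
    lo, hi = max(0.0, r - 8.0*lam), r + 8.0*lam
    total = 0.0
    for k in range(n):
        s = lo + (hi - lo)*k/(n - 1)
        w = (math.exp(-(r - s)**2/(2.0*lam*lam))
             - math.exp(-(r + s)**2/(2.0*lam*lam)))
        total += s*h(s)*w
    total *= (hi - lo)/(n - 1)
    return total/(r*math.sqrt(2.0*math.pi)*lam)

# Sanity checks: <1>(r) = 1 exactly, and <s^2>(r) = r^2 + 3 lam^2.
r, lam = 1.5, 0.2
print(pair_transform(lambda s: 1.0, r, lam))
print(pair_transform(lambda s: s*s, r, lam))
```

With such a routine, each pairwise term in (15) costs a single one dimensional quadrature, which is what makes the molecular transformation computationally practical.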
# 6 Minimization
In summary, we have introduced a parametrized integral transformation to transform the objective function of a global optimization problem. Statistically, the transformation averages the function values, and provides coarse estimates for the function variation. Geometrically, the transformation deforms the function into a class of “smoother” functions with small high frequency components removed in the transformed functions. Physically, the transformation allows a physical system to have small perturbations, and the transformed function reflects the average behavior of the system dynamics. Finally, the transformation can exploit partial separability, and is particularly suitable for molecular conformation and protein folding energy functions.
With this transformation, a general global minimization procedure can immediately be constructed as illustrated in Figure 3. That is, given a global minimization problem with a nonlinear objective function $f$, we first transform the function into a class of new functions $\langle f \rangle_{\lambda_1}$, $\langle f \rangle_{\lambda_2}$, ..., $\langle f \rangle_{\lambda_m}$ for $\lambda_1 > \lambda_2 > \dots > \lambda_m = 0$ with $\langle f \rangle_{\lambda_m}$ corresponding to $f$. We then apply local optimization procedures to the transformed functions successively, to trace their solutions back to the original function. Since the transformed function with a larger $\lambda$ value is “smoother” with possibly fewer local minimizers, we can start by minimizing $\langle f \rangle_{\lambda_1}$, and next, take its solution as the initial point and minimize $\langle f \rangle_{\lambda_2}$, and so on. Since a transformed function is also a coarse approximation to the original function, its solution should also be a rough estimate for the solution of the original function. So by minimizing the transformed functions successively,
---PAGE_BREAK---
1 Choose
$$ \{\lambda_i : i = 1, \dots, m, \lambda_1 > \dots > \lambda_m = 0\} $$
2 For $i = 1, \dots, m$
$$ \min_{x \in S} \langle f \rangle_{\lambda_i}(x) $$
Figure 3: A global minimization procedure.
the whole process is concentrated in regions where the solution of the original function is most likely to be located.
# 7 Tracing Solutions
The continuation-based global minimization approach contains two major components:
1. Application and computation of the transformation (2),
2. A solution tracing procedure.
Clearly, different algorithms can be implemented if different solution tracing procedures are employed. An efficient solution tracing method is crucial for the algorithm to be numerically effective and efficient.
In principle, tracing solutions means tracing global minimizers: the solution for a global minimization problem is sought for each transformed function. However, in a broader sense, the solutions can actually be either global or local, as long as they form a “path” that can lead to a global minimizer for the original objective function. Under some circumstances, such a “path” exists as a smooth curve, and then tracing solutions simply implies following a smooth solution curve determined by a set of transformed functions.
---PAGE_BREAK---
A random search procedure is an example of a simple solution tracing method, e.g., the simulated annealing random search [1]. This method is easy to implement, and especially robust in the sense that the random search procedure can be designed to converge asymptotically to a global minimizer. However, convergence depends on how thoroughly the search can be conducted. Usually, an unaffordable amount of computation is required even for small problems. Another problem with this method is that the randomness introduces uncertainty.
A more deterministic and efficient alternative is to use a local minimization procedure. This method applies local minimization to the transformed functions successively, and returns a local minimizer as the candidate for the solution to the given problem. The method is relatively inexpensive, and clearly more feasible for large scale problems, e.g., the protein problems. In particular, it can take advantage of well-developed local optimization techniques [6].
The effectiveness of this method can be illustrated by the following simple experiment: Consider the function in (6), and suppose that we want to find its global minimizer. First we transform the function to obtain a class of new functions given in (7). Choose $\lambda_1 = 0.2$, $\lambda_2 = 0.1$ and $\lambda_3 = 0.0$. We then have three transformed functions as shown in Figure 2 (a), (b) and (c). The function in Figure 2 (c) is equivalent to the original function. Then we apply a local minimization procedure to the transformed functions from $\langle f \rangle_{\lambda_1}$ to $\langle f \rangle_{\lambda_3}$. Since $\langle f \rangle_{\lambda_1}$ is “smooth” with only one local minimizer, its solution can be found immediately. Starting from this solution, a local minimizer, which is also a global minimizer, of $\langle f \rangle_{\lambda_2}$ can be found subsequently. Continuing the process, the global minimizer for the original function can be located at the end.
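The experiment can be sketched in a few lines. This is our illustration, not the authors' code: plain gradient descent stands in for the local minimization procedure, and the $\lambda$ schedule is the one from the experiment above.

```python
import math

def f_smooth(x, lam):
    # Closed-form transform (7); lam = 0 recovers the original f in (6).
    t = x - 1.0
    return t*t + lam*lam/2.0 + 0.1*math.exp(-(20.0*lam)**2/4.0)*math.sin(20.0*t)

def grad_smooth(x, lam):
    t = x - 1.0
    return 2.0*t + 2.0*math.exp(-(20.0*lam)**2/4.0)*math.cos(20.0*t)

def local_min(x0, lam, step=0.01, iters=3000):
    # Gradient descent as a stand-in local minimizer; the step size is
    # chosen well below 2/max|f''| so the iteration is stable.
    x = x0
    for _ in range(iters):
        x -= step*grad_smooth(x, lam)
    return x

def continuation_min(x0, lambdas=(0.2, 0.1, 0.0)):
    # Minimize the transformed functions successively, warm-starting
    # each stage from the previous stage's solution (Figure 3).
    x = x0
    for lam in lambdas:
        x = local_min(x, lam)
    return x

x_star = continuation_min(2.0)
print(x_star, f_smooth(x_star, 0.0))
```

Direct descent on the original function from the same starting point stalls in a poor local minimizer near the start, which is precisely what the continuation avoids.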
The example shows that the local minimization skips small local minimizers at the first stages and goes directly to a region of interest, where a global minimizer is very likely to be found subsequently. In general, the method may not always be this fortunate. For example, the early transformed functions may still have more than one local minimizer; the chosen minimizer may not necessarily lead to a global minimizer for the function at the final stage.
To begin with the “right” local minimizer, either a good initial point must be provided based on prior knowledge of the given problem, or a set of local minimizers can be selected and traced, in the hope that one of them leads to a good solution.
---PAGE_BREAK---
# 8 Numerical Continuation
Our recent work [7] shows that the parametrized integral transform in (2) defines for $f$ a homotopy on $[0, \lambda_0]$ for any $\lambda_0 < \infty$. Moreover, under appropriate assumptions, the transformed functions $\{\langle f \rangle_\lambda : \lambda \in [0, \lambda_0]\}$ determine for any given local minimizer $x_0$ of $\langle f \rangle_{\lambda_0}$ a continuous and differentiable curve $x(\lambda)$ so that for all $\lambda \in [0, \lambda_0]$, $x(\lambda)$ is a local minimizer of $\langle f \rangle_\lambda$. In this case, the deterministic trace of the solution, e.g., using local minimization, is equivalent to following a solution curve $x(\lambda)$ (or a set of such curves). This forms the theoretical basis for our method as a special continuation approach to global optimization. Therefore, an initial value problem to determine the solution curve can be derived in a simple and computable form:
$$ x' = -\frac{\lambda}{2} \langle \nabla^2 f \rangle_{\lambda}^{-1}(x) \langle \Delta g \rangle_{\lambda}(x) \quad (17) $$
$$ x_0 = x(\lambda_0) \quad (18) $$
where $\nabla^2 f$ is the Hessian of the function, and $\Delta g$ is the Laplace operator applied to the components of the gradient. This result opens another direction for tracing the solution effectively: solve the initial value problem using standard numerical IVP methods, e.g., the predictor-corrector methods [2]. One simple example is the Euler-Newton method shown in Figure 4. In this method, at each iteration, an Euler predictor is computed to start a Newton local minimization procedure that finds a solution on the curve. The process is continued, and the solution curve is followed to its end.
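A one-dimensional sketch of the Euler-Newton iteration follows. It is not an implementation of (17): the toy family is Gaussian smoothing of $f(x) = x^4 - x^2 + 0.2x$ (a tilted double well), with the predictor $x' = -\langle f \rangle_\lambda''^{-1}\, \partial_\lambda \langle f \rangle_\lambda'$ obtained from the implicit function theorem in place of the $\Delta g$ form; all constants are illustrative.

```python
# Smoothed family of f(x) = x^4 - x^2 + 0.2x under Gaussian smoothing with
# variance lam (an assumed stand-in for the transform in (2)):
#   <f>_lam(x) = x^4 + 6*lam*x^2 + 3*lam^2 - x^2 - lam + 0.2*x
def g1(x, lam):     # gradient d<f>_lam/dx
    return 4 * x**3 + (12 * lam - 2) * x + 0.2

def g2(x, lam):     # Hessian d2<f>_lam/dx2
    return 12 * x**2 + 12 * lam - 2

def g1lam(x, lam):  # mixed derivative d2<f>_lam/(dx dlam)
    return 12 * x

lam, h = 1.0, -0.05
x = -0.02                              # near the unique minimizer of <f>_1
for _ in range(20):                    # put the start exactly on the curve
    x -= g1(x, lam) / g2(x, lam)
while lam > 1e-9:
    xp = -g1lam(x, lam) / g2(x, lam)   # Euler predictor (implicit fn thm)
    lam, x = lam + h, x + xp * h
    for _ in range(20):                # Newton corrector on g1(., lam) = 0
        x -= g1(x, lam) / g2(x, lam)
print(x)   # near -0.75, the global minimizer of f
```

At $\lambda = 1$ the smoothed function has a single minimizer; following the curve down to $\lambda = 0$ lands on the deeper of the two wells of $f$.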
## 9 Parallelism
Different levels of parallelism can be exploited for continuation-based global optimization, e.g., parallel solution tracing, parallel function evaluation, and parallel linear algebra and optimization.
At the solution tracing level, parallelism can be exploited by using multiple processors to generate multiple random searches, or to trace a set of local minimizers in parallel. For the random search technique, increasing the number of processors is equivalent to increasing the number of trials: the more processors that are used, the higher the probability that a solution is found. For tracing multiple local minimizers, using multiple processors reduces the total computation time and increases the potential for finding the best possible local minimizer. In either case, the parallelism is coarse-grained, with little communication required among processors but intensive computation on each, which is well suited to massively parallel computation, especially on machines with high communication to computation ratios.

$$ \lambda = \lambda_0, \quad x = x_0 $$

Repeat

$$ \text{Compute } x' = -\frac{\lambda}{2} \langle \nabla^2 f \rangle_{\lambda}^{-1}(x) \, \langle \Delta g \rangle_{\lambda}(x) $$

$$ \lambda = \lambda + h, \quad x = x + x' h $$

Repeat

$$ \text{Compute } s = - \langle \nabla^2 f \rangle_{\lambda}^{-1}(x) \, \langle g \rangle_{\lambda}(x) $$

$$ x = x + \alpha s $$

End

End

Figure 4: Euler-Newton prediction and correction.
Parallel function evaluation is important for both local and global optimization. For the continuation-based global optimization method, more than half of the total computation involves function evaluation, and each evaluation is costly, requiring numerical integration. However, for molecular conformation and protein folding, the energy functions to be minimized are partially separable, with typically a small number of element functions. So for each element function, we can construct a function value look-up table. The function evaluation can then be conducted with cubic spline interpolation using the function values already calculated in the look-up tables. In this way, the total function evaluation cost can be reduced; moreover, the function value look-up tables, no matter how expensive they are, can be computed in parallel with perfect parallel efficiency. In this sense, we say that the function evaluation can be indirectly parallelized.
Finally, the continuation-based global optimization method is rich in linear algebra, which is good for high performance computing. When the problem is large, say, for a protein with ten thousand atoms, the parallelism at this level can also be exploited by parallelizing the major linear algebra operations, e.g., linear system solves and local minimization. This type of parallelism has been well studied and understood, and can be exploited using standard techniques.
## 10 Numerical Experience
The development of the continuation-based approach to global optimization has been accompanied by a series of computational works [3, 4, 5]. The algorithms have been implemented on parallel machines and tested with a set of molecular conformation problems. The results we obtained support the approach, and show that the algorithms perform much more effectively and efficiently than conventional global optimization methods. They are also very suitable for massively parallel computation. We illustrate in the following some of our numerical experience with two particular algorithms. Both methods are continuation-based, but differ in their solution tracing strategies.
The first method, called the effective energy simulated annealing, uses a random search procedure, the simulated annealing method, to trace the solutions. Recall that in the simulated annealing method, a temperature parameter $T$ is decreased from a positive number to zero as the iteration count increases. For each value of $T$, a number of random trials is applied to the given energy function. For the effective energy simulated annealing method, a function $\lambda = \alpha T$ is first defined, where $\alpha$ is a constant. For each value of $T$, a $\lambda$ value is determined, which, in turn, defines a transformed function, called the effective energy function. A number of random trials is then conducted on this function to locate a solution. The parameter $\lambda$ goes to zero as $T$ decreases, and the transformed function changes to the original function. The process is equivalent to tracing the solutions for a set of transformed functions using Monte Carlo search with a different temperature $T$ for each transformed function. Note that if $\alpha$ is set to zero, $\lambda$ is equal to zero for all $T$. In this case all transformed functions are the same original function, and the algorithm reduces to a standard simulated annealing procedure.
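The coupling $\lambda = \alpha T$ can be sketched in a few lines. This is a toy version, not the paper's implementation: the "effective energy" is the closed-form Gaussian smoothing of $f(x) = x^2/20 - \cos x$, and the trial step size, cooling rate, and $\alpha$ are illustrative.

```python
import math, random

random.seed(0)

# Toy effective energy: Gaussian smoothing of f(x) = x^2/20 - cos(x)
# with smoothing parameter lam: <f>_lam(x) = (x^2+lam)/20 - exp(-lam/2)*cos(x).
def eff(x, lam):
    return (x * x + lam) / 20.0 - math.exp(-lam / 2.0) * math.cos(x)

def effective_energy_sa(alpha, x=8.0, T0=2.0):
    T = T0
    while T > 1e-3:
        lam = alpha * T              # lambda = alpha*T ties smoothing to T
        for _ in range(200):         # Metropolis trials on the effective energy
            y = x + random.uniform(-0.5, 0.5)
            d = eff(y, lam) - eff(x, lam)
            if d < 0 or random.random() < math.exp(-d / T):
                x = y
        T *= 0.9                     # cooling schedule
    return x

x_star = effective_energy_sa(alpha=4.0)
print(x_star)   # near 0, the global minimizer of f
```

With $\alpha = 0$ the same loop is ordinary simulated annealing on $f$ itself; with $\alpha > 0$ the early, high-temperature trials see an almost convex landscape.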
The effective energy simulated annealing algorithm has been implemented on a 32-node Intel iPSC/860 at Cornell. The machine is a distributed memory parallel system with a hypercube interconnection network. Each processor has 8 Mbytes of local memory and achieves a theoretical peak performance of 40 Mflops. The parallelization of the algorithm is straightforward: multiple processors are used at each iteration to generate multiple sequences of trials independently. Little communication is required among processors except for calculating the global acceptance rate at the end of each iteration. The load is also well balanced: the number of trials is the same on each processor. For more implementation details, readers are referred to [3].
The algorithm was tested with a set of small Lennard-Jones microcluster conformation problems, which have been well studied and widely used as model problems for molecular conformation. Typical results for these problems are shown in Figure 5, where three pictures for clusters of $n = 8, 12, 16$ atoms are given. The curves indicate the energy levels of the solutions obtained by the algorithm with different $\alpha$ values. We see that when $\alpha$ is equal to zero, the algorithm, corresponding to a standard simulated annealing procedure, can only find solutions with very high energy levels. However, within the same amount of computation time, the effective energy simulated annealing algorithm with a proper choice of positive $\alpha$ value can find solutions whose energy levels are already very close to the best known values (the bottom lines of the pictures). As a matter of fact, by applying a local minimization procedure started from these solutions, we immediately obtained the best known solutions for all the clusters. These results show how effective the method with the transformation scheme can be for molecular conformation, compared with a conventional global optimization technique.
The parallel performance of the algorithm is illustrated in Figure 6, where two examples are given to show how rapidly the energy levels of the solutions found by the algorithm decrease with increasing numbers of processors.
The second algorithm we want to discuss is the deterministic local tracing algorithm, which uses local minimization as a solution tracing procedure. The algorithm first requires the objective function to be transformed into a class of new functions $\langle f \rangle_{\lambda_1}, \langle f \rangle_{\lambda_2}, \ldots, \langle f \rangle_{\lambda_m}$ for a set of parameter values $\lambda_1 > \lambda_2 > \cdots > \lambda_m = 0$, with $\langle f \rangle_{\lambda_m}$ corresponding to $f$. A set of starting points is sampled randomly so that a group of local minimizers for $\langle f \rangle_{\lambda_1}$ is obtained at the beginning. Then local minimization is applied to the remaining transformed functions successively to trace the changes of these local minimizers, and the one with the lowest function value is selected at the last stage as a candidate for the solution to the given problem.
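The procedure can be sketched as follows, again on the toy Gaussian-smoothed family of $f(x) = x^2/20 - \cos x$ rather than a molecular energy, with plain gradient descent as the local solver; the $\lambda$ schedule, number of starts, and step size are illustrative. Each start maps naturally onto one processor.

```python
import math, random

random.seed(1)

# Toy smoothed family: <f>_lam(x) = (x^2+lam)/20 - exp(-lam/2)*cos(x).
def eff(x, lam):
    return (x * x + lam) / 20.0 - math.exp(-lam / 2.0) * math.cos(x)

def grad(x, lam):
    return x / 10.0 + math.exp(-lam / 2.0) * math.sin(x)

def local_min(x, lam, step=0.1, iters=2000):
    for _ in range(iters):          # gradient descent as the local solver
        x -= step * grad(x, lam)
    return x

lams = [4.0, 2.0, 1.0, 0.5, 0.0]    # lambda_1 > ... > lambda_m = 0
starts = [random.uniform(-10, 10) for _ in range(8)]   # one per "processor"
mins = [local_min(x, lams[0]) for x in starts]         # minimizers of <f>_lam1
for lam in lams[1:]:                # trace each minimizer through the family
    mins = [local_min(x, lam) for x in mins]
best = min(mins, key=lambda x: eff(x, 0.0))
print(best)   # near 0, the global minimizer of f
```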
The deterministic local tracing algorithm has been implemented on a 64-node IBM SP1 at Cornell. The SP1 is a distributed memory parallel system with a high performance switch installed for better interprocessor communication. Each processor is an IBM RS/6000 with 128 Mbytes of memory and a peak performance of 125 Mflops. In this implementation, multiple processors are used to trace multiple local minimizers in parallel, with one local minimizer per processor. Little communication is required. Each processor carries out a sequence of local minimizations. Basically, the more processors used, the more local minimizers traced, and hence the higher the probability of obtaining a good solution. Also, the larger the problem size, the more intensive the computation on each processor. Since problem sizes of practical interest tend to be very large, machines with high communication to computation ratios, such as the IBM SP1, can be very suitable for the algorithm to achieve good performance in practice.
The algorithm has been tested with a set of "perturbed Lennard-Jones microcluster conformation problems". Such a problem is obtained by adding to each pairwise Lennard-Jones potential function a periodically varying term, $\rho \sin(\omega r)/r$, where $\rho$ and $\omega$ are constants, and $r$ is the distance between a given pair of atoms. With properly adjusted $\rho$ and $\omega$, these functions generate a set of even more complicated global optimization test problems. The perturbed functions reduce to pure Lennard-Jones problems when $\rho$ is
Figure 5: Typical numerical results obtained by the effective energy simulated annealing algorithm.
Figure 6: The parallel performance of the effective energy simulated annealing algorithm.
<table>
<caption>Deterministic Local Tracing</caption>
<thead>
<tr>
<th rowspan="2">p</th>
<th colspan="2">n = 16</th>
<th colspan="2">n = 20</th>
<th colspan="2">n = 24</th>
</tr>
<tr>
<th>m = 1</th><th>m = 40</th><th>m = 1</th><th>m = 40</th><th>m = 1</th><th>m = 40</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>-4.2805e1</td><td>-5.7933e1</td><td>-5.2270e1</td><td>-7.6255e1</td><td>-1.0112e2</td><td>-1.0312e2</td></tr>
<tr><td>2</td><td>-5.5878e1</td><td>-5.6551e1</td><td>-7.4508e1</td><td>-8.0626e1</td><td>-1.0129e2</td><td>-1.0048e2</td></tr>
<tr><td>4</td><td>-5.8068e1</td><td>-6.0420e1</td><td>-7.6577e1</td><td>-7.9048e1</td><td>-1.0555e2</td><td>-1.0419e2</td></tr>
<tr><td>8</td><td>-5.8068e1</td><td>-6.1350e1</td><td>-7.7593e1</td><td>-7.9561e1</td><td>-1.0250e2</td><td>-1.0419e2</td></tr>
<tr><td>16</td><td>-5.8068e1</td><td>-6.1350e1</td><td>-8.0518e1</td><td>-8.3793e1</td><td>-1.0411e2</td><td>-1.0604e2</td></tr>
<tr><td>32</td><td>-6.1350e1</td><td>-6.1350e1</td><td>-8.3664e1</td><td>-8.3793e1</td><td>-1.0463e2</td><td>-1.0604e2</td></tr>
</tbody>
</table>
Table 1: Energy values obtained by the deterministic local tracing method for the perturbed Lennard-Jones problems.
set to zero. In this test, $\rho$ is set to 1, and $\omega$ to 10.
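The perturbed pair term is easy to write down. The $\rho \sin(\omega r)/r$ term and the test values $\rho = 1$, $\omega = 10$ are from the text; the Lennard-Jones normalization $v(r) = r^{-12} - 2r^{-6}$ (minimum $-1$ at $r = 1$) is an assumption.

```python
import math

# Perturbed Lennard-Jones pair potential: the standard pair term plus the
# periodic perturbation rho*sin(omega*r)/r described in the text.
def perturbed_lj(r, rho=1.0, omega=10.0):
    return r**-12 - 2.0 * r**-6 + rho * math.sin(omega * r) / r

# With rho = 0 the perturbation vanishes and the pure LJ term remains.
print(perturbed_lj(1.0, rho=0.0))   # -1.0
```

The oscillatory term adds many shallow local minima to each pair interaction, which is what makes the perturbed problems harder test cases.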
Table 1 lists the results for some example problems ($n = 16, 20, 24$), obtained by the algorithm using different numbers of processors ($p$). The data in the table are the energy values of the solutions obtained by the algorithm. To transform the function, a set of values $\{\lambda_i : i = 1, \ldots, m\}$ is used with $\lambda_i = (i - 1)h$, $h = 0.01$. So $m = 1$ simply implies that no transformation is used, and the algorithm is just a local minimization sampling procedure. The comparison between the two cases, $m = 1$ and $m = 40$, shows that with transformation, the algorithm performs much more effectively than directly doing local minimization on the given function. In the table, we can also see that as the number of processors increases, the energy values of the solutions obtained by the algorithm decrease rapidly.
## 11 Software Development
Based on this work, we are currently developing a parallel continuation-based global optimization software system, called Cglop (Figure 7), for molecular conformation and protein folding. An initial version of the system has just been completed (see [5] for more details).
The system transforms the objective function into a sequence of gradually deformed functions. There are three subsystems corresponding to three different solution tracing procedures, namely, the global simulated annealing random search (GLOBAL), Newton's local minimization method (LOCAL), and the Euler-Newton predictor-corrector method (PC). As we have discussed in this paper, the random search method is more robust but also costly. The deterministic local tracing is efficient, but may not guarantee a global minimizer. The predictor-corrector method provides a more accurate way to trace the solution. Overall, each of these methods has advantages and disadvantages, but their combination provides a robust set of numerical tools for both effective and efficient tracing of the solutions. The system also provides transformation routines (TRANSFORMATION) to both transform user-supplied functions (USER FUNCTIONS) using numerical integration (INTEGRAL) and construct the corresponding function value look-up tables. The function evaluations in the solution tracing process are conducted by cubic spline interpolation (SPLINE) using the function values in the look-up tables.
The system is written in C and developed on the IBM SP1 with PVM used for parallel message passing extensions. It is easy to port to a variety of parallel architectures including a cluster of local workstations. The system is meant to be used as a computational platform for basic interdisciplinary studies on molecular conformation and protein folding.
## Acknowledgements
This research was supported partially by the Cornell Theory Center, which receives funding from members of its Corporate Research Institute, the National Science Foundation (NSF), the Advanced Research Projects Agency (ARPA), the National Institutes of Health (NIH), New York State, and IBM Corporation.
## References
[1] Emile Aarts, and Jan Korst [1989]. *Simulated Annealing and Boltzmann Machines*. John Wiley & Sons, New York, NY.
[2] Eugene L. Allgower and Kurt Georg [1990]. *Numerical Continuation Methods*. Springer-Verlag, New York, NY.
[3] Thomas F. Coleman, David Shalloway and Zhijun Wu [1993]. *Isotropic Effective Energy Simulated Annealing Searches for Low Energy Molecular Cluster States*. Computational Optimization and Applications, 2, 145–170.
[4] Thomas F. Coleman, David Shalloway and Zhijun Wu [1994]. *A Parallel Build-Up Algorithm for Global Energy Minimizations of Molecular Clusters Using Effective Energy Simulated Annealing*. Journal of Global Optimization, 4, 171–185.
[5] Thomas F. Coleman and Zhijun Wu [1994]. *Cglop – A Parallel Continuation-Based Global Optimization Package for Molecular Conformation*. Advanced Computing Research Institute, Cornell University, Ithaca, NY, to be submitted to ACM Transactions on Mathematical Software.
[6] J. E. Dennis, Jr. and R. B. Schnabel [1983]. *Numerical Methods for Unconstrained Optimization and Nonlinear Equations*. Prentice-Hall, Englewood Cliffs, NJ.
[7] Zhijun Wu [1993]. *The Effective Energy Transformation Scheme as a General Continuation Approach to Global Optimization with Application to Molecular Conformation*. Technical Report CTC93TR143, Advanced Computing Research Institute, Cornell University, Ithaca, NY, submitted to SIAM Journal on Optimization.
Figure 7: The Cglop system structure.
samples_new/texts_merged/6724971.md
# THE EXISTENCE OF FIXED POINTS FOR THE $·/GI/1$ QUEUE
BY JEAN MAIRESSE AND BALAJI PRABHAKAR
CNRS-Université Paris 7 and Stanford University
A celebrated theorem of Burke's asserts that the Poisson process is a fixed point for a stable exponential single server queue; that is, when the arrival process is Poisson, the equilibrium departure process is Poisson of the same rate. This paper considers the following question: Do fixed points exist for queues which dispense i.i.d. services of finite mean, but otherwise of arbitrary distribution (i.e., the so-called $·/GI/1/∞$/FCFS queues)? We show that if the service time $S$ is nonconstant and satisfies $\int P\{S \ge u\}^{1/2} du < \infty$, then there is an unbounded set $\mathcal{S} \subset (E[S], \infty)$ such that for each $\alpha \in \mathcal{S}$ there exists a unique ergodic fixed point with mean inter-arrival time equal to $\alpha$. We conjecture that in fact $\mathcal{S} = (E[S], \infty)$.
## 1. Introduction.

Consider a single server First-Come-First-Served queue with infinite waiting room, at which the service times are i.i.d. (a $·/GI/1/∞$/FCFS queue). We are interested in the question of whether such queues possess fixed points: an inter-arrival process which has the same distribution as the corresponding inter-departure process.
The question of the existence of fixed points is intimately related to the limiting behavior of the distribution of departure processes from a tandem of queues. Specifically, consider an infinite tandem of $·/GI/1/∞$/FCFS queues. The queues are indexed by $k \in \mathbb{N}$ and the customers are indexed by $n \in \mathbb{Z}$. The numbering of each customer is fixed at the first queue and remains the same as he/she passes through the tandem. Each customer leaving queue $k$ immediately enters queue $k+1$. At queue $k$, write $S(n, k)$ for the service time of customer $n$ and $A(n, k)$ for the inter-arrival time between customers $n$ and $n+1$. We assume that the initial inter-arrival process, $A^0 = (A(n, 0), n \in \mathbb{Z})$, is ergodic and independent of $(S(n, k), n \in \mathbb{Z}, k \in \mathbb{N})$. We also assume that the service variables $(S(n, k), n, k)$ are i.i.d. and that $E[S(0, 0)] < E[A(0, 0)] < \infty$. To avoid trivialities we assume that the service times are nonconstant, that is, $P\{S(0, 0) \neq E[S(0, 0)]\} > 0$.
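The tandem dynamics can be simulated with the standard departure-time recursion $D(n,k) = \max(D(n-1,k),\, D(n,k-1)) + S(n,k)$, where $D(n,0)$ is the arrival time of customer $n$ at the first queue. The sketch below is a minimal finite-index simulation (customers $0,\ldots,N-1$ rather than the paper's index set $\mathbb{Z}$), with exponential inter-arrivals of mean 2 and exponential services of mean 1; the distributions are illustrative.

```python
import random

random.seed(0)

N, K = 2000, 5   # customers, queues in tandem
inter_arr = [random.expovariate(1.0 / 2.0) for _ in range(N)]      # mean 2.0
S = [[random.expovariate(1.0) for _ in range(K)] for _ in range(N)]  # mean 1.0

# D[n][k]: departure time of customer n from queue k (k = 0 is arrival).
D = [[0.0] * (K + 1) for _ in range(N)]
t = 0.0
for n in range(N):
    t += inter_arr[n]
    D[n][0] = t
for n in range(N):
    for k in range(1, K + 1):
        # service starts when the customer arrives AND the server is free
        start = max(D[n][k - 1], D[n - 1][k] if n > 0 else 0.0)
        D[n][k] = start + S[n][k - 1]

# Rate conservation in a stable tandem: the mean inter-departure time from
# the last queue matches the mean inter-arrival time (about 2.0 here).
mean_dep = (D[N - 1][K] - D[0][K]) / (N - 1)
print(mean_dep)
```

The fixed point questions concern the *distribution* of the inter-departure process $A^k$, not just its mean, which the recursion preserves automatically.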
By Loynes' results [15], each of the equilibrium departure processes $A^k = (A(n, k), n \in \mathbb{Z})$ for $k \ge 1$ is ergodic of mean $E[A(0, 0)]$. The following are natural fixed point problems:
Received February 2001; revised January 2003.
AMS 2000 subject classifications. 60K25, 60K35, 68M20, 90B15, 90B22.
Key words and phrases. Queue, tandem queueing networks, general independent services, stability, Loynes theorem, Burke theorem.
*Existence.* For a given service distribution, does there exist a mean $\alpha$ ergodic inter-arrival process such that the corresponding inter-departure process has the same distribution? If yes, call such a distribution an *ergodic fixed point* of mean $\alpha$.
*Uniqueness.* If an ergodic fixed point of mean $\alpha$ exists, is it unique?
*Convergence.* Assume there is a unique ergodic fixed point of mean $\alpha$. If the inter-arrival process to the first queue, $A^0$, is ergodic of mean $\alpha$, then does the law of $A^k$ converge weakly to the ergodic fixed point of mean $\alpha$ as $k \to \infty$? If yes, call the fixed point an *attractor*.
A strand of research in stochastic network theory has pursued these questions for some time. Perhaps the earliest and best-known result is Burke's theorem [7], which shows that the Poisson process of rate $1/\alpha$ is a fixed point for exponential server queues with mean service time $\beta < \alpha$. Anantharam [1] established its uniqueness, and Mountford and Prabhakar [18] established that it is an attractor.
For $·/GI/1/∞/FCFS$ queues, the subject of this paper, Chang [8] established the uniqueness of an ergodic fixed point, should it exist, assuming that the services have a finite mean and an unbounded support. Prabhakar [19] provides a complete solution to the problems of uniqueness and convergence assuming only a finite mean for the service time and the existence of an ergodic fixed point. However, the existence of such fixed points was only known for exponential and geometric service times.
This paper establishes the existence of fixed points for a large class of service time distributions. We obtain the following result: if the service time $S$ has mean $\beta$ and if $\int P\{S \ge u\}^{1/2} du < \infty$, then there is a set $\mathcal{S}$ closed in $(\beta, \infty)$, with $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = \infty$ and such that:
(a) For $\alpha \in \mathcal{S}$, there exists a mean $\alpha$ ergodic fixed point for the queue. Given this, [19] implies the attractiveness of the fixed point.
(b) For $\alpha \notin \mathcal{S}$, consider the stationary (but not ergodic) process $F$ of mean $\alpha$ obtained as the convex combination of the ergodic fixed points of means $\underline{\alpha}$ and $\bar{\alpha}$ where $\underline{\alpha} = \sup\{u \in \mathcal{S}, u \le \alpha\}$ and $\bar{\alpha} = \inf\{u \in \mathcal{S}, \alpha \le u\}$. (Since $\mathcal{S}$ is closed, $\underline{\alpha}$ and $\bar{\alpha}$ belong to $\mathcal{S}$ and $F$ is a fixed point for the queue.) If the inter-arrival times of the input process have a mean $\alpha$, then the Cesaro average of the laws of the first $k$ inter-departure processes converges weakly to $F$ as $k \to \infty$.
These results rely heavily on a strong law of large numbers for the total time spent by a customer in a tandem of queues proved in [2]. We conjecture that our results are suboptimal and that in fact $\mathcal{S} = (\beta, \infty)$.
## 2. Preliminaries.

The presence of an underlying probability space $(\Omega, \mathcal{F}, P)$ on which all the r.v.'s are defined is assumed all along. Given a measurable space $(K, \mathcal{K})$, we denote by $\mathcal{L}(K)$ the set of $K$-valued random variables, and by $\mathcal{M}(K)$ the set of probability measures on $(K, \mathcal{K})$. Throughout the paper, we consider random variables valued in $\mathbb{R}_+^Z$. Equipped with the product topology, or topology of coordinate-wise convergence, $\mathbb{R}_+^Z$ is a Polish space. We shall work on the measurable space $(\mathbb{R}_+^Z, \mathcal{B})$ where $\mathcal{B}$ is the corresponding Borel $\sigma$-algebra. With the topology of weak convergence, the space $\mathcal{M}(\mathbb{R}_+^Z)$ is a Polish space. For details see, for instance, [3], [10] or [11]. The weak convergence of $(\mu_n)_n$ to $\mu$ is denoted by $\mu_n \xrightarrow{w} \mu$. Furthermore, for $X_n, X \in \mathcal{L}(\mathbb{R}_+^Z)$, we say that $X_n$ converges weakly to $X$ (and we write $X_n \xrightarrow{w} X$) if the law of $X_n$ converges weakly to the law of $X$. A process $X \in \mathcal{L}(\mathbb{R}_+^Z)$ is *constant* if $X = (c)^Z$ a.s. for some $c \in \mathbb{R}_+$.

We write $\mathcal{M}_s(\mathbb{R}_+^Z)$ for the set of stationary probability measures with finite one-dimensional mean, and $\mathcal{M}_e(\mathbb{R}_+^Z)$ for the set of ergodic probability measures with finite one-dimensional mean. For $\alpha \in \mathbb{R}_+$, we denote by $\mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ and $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$ the sets of stationary and ergodic probability measures with one-dimensional mean $\alpha$.
The strong order on $\mathcal{M}(\mathbb{R}_+^Z)$, or $\mathcal{L}(\mathbb{R}_+^Z)$, is defined as follows (see [21] for more on strong orders). Consider $A, B \in \mathcal{L}(\mathbb{R}_+^Z)$ with respective distributions $\mu$ and $\nu$. We say that $A$ (resp. $\mu$) is strongly dominated by $B$ (resp. $\nu$), denoted $A \le_{\text{st}} B$ (resp. $\mu \le_{\text{st}} \nu$), if

$$E[f(A)] \le E[f(B)] \quad \Bigl(\text{resp. } \int f \, d\mu \le \int f \, d\nu \Bigr),$$

for any measurable $f: \mathbb{R}_+^Z \to \mathbb{R}$ which is increasing and such that the expectations are well defined. Here we consider the usual component-wise ordering of $\mathbb{R}_+^Z$.
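The definition can be illustrated by a coupling, the standard way strong dominance is verified in practice: if $A \le B$ holds coordinate-wise under some coupling, then $E[f(A)] \le E[f(B)]$ for every increasing $f$. The sketch below couples finite windows of two i.i.d. sequences through common uniforms; the window length, sample size, and test functional are illustrative.

```python
import random

random.seed(2)

# Couple A and B through common uniforms so that A <= B coordinate-wise:
# A has Uniform(0, 2) coordinates, B has Uniform(0, 3) coordinates.
n = 10000
U = [[random.random() for _ in range(5)] for _ in range(n)]
A = [[2.0 * u for u in row] for row in U]
B = [[3.0 * u for u in row] for row in U]   # B dominates A pathwise

def f(x):            # an increasing functional of the window
    return max(x) + sum(x)

Ef_A = sum(f(a) for a in A) / n
Ef_B = sum(f(b) for b in B) / n
print(Ef_A <= Ef_B)   # True: pathwise domination forces E[f(A)] <= E[f(B)]
```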
PROPOSITION 2.1 ([22]). For $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$, $\mu \le_{\text{st}} \nu$ iff $\int f \, d\mu \le \int f \, d\nu$ for any increasing and continuous real function $f$ such that the expectations are well defined. For $\mu_n, \nu_n, n \in \mathbb{N}$, $\mu$ and $\nu$ belonging to $\mathcal{M}(\mathbb{R}_+^Z)$, suppose that $\mu_n \xrightarrow{w} \mu$, $\nu_n \xrightarrow{w} \nu$ and that $\mu_n \le_{\text{st}} \nu_n$. Then $\mu \le_{\text{st}} \nu$.
|
| 74 |
+
|
| 75 |
+
We shall use the following fact a couple of times. Consider two random processes on $\mathbb{R}_+^Z$: $A$, which is ergodic, and $B$, which is stationary. Assume that $A \le_{\text{st}} B$. Let $B$ be compatible with a $P$-stationary shift $\theta: \Omega \to \Omega$ and denote by $\tilde{\mathfrak{T}}$ the invariant $\sigma$-algebra. Then we have

$$ (1) \qquad E[A(0)] \le E[B(0)|\tilde{\mathfrak{T}}] \qquad \text{a.s.} $$

Furthermore, if $A$ is independent of $B$, then the conditional law of $B$ on the event $\{E[B(0)|\tilde{\mathfrak{T}}] = E[A(0)]\}$ is equal to the law of $A$. To prove this, the two ingredients are a representation theorem such as Theorem 1 in [14] and Birkhoff's ergodic theorem.

The symbols $\sim$ and $\perp$ stand for "is distributed as" and "is independent of," respectively. We use the notation $\mathbb{N}^* = \mathbb{N} \setminus \{0\}$, $\mathbb{R}_+^* = \mathbb{R}_+ \setminus \{0\}$, and $x^+ = \max(x, 0) = x \vee 0$. For $u, v$ in $\mathbb{R}^{\mathbb{N}}$ or $\mathbb{R}^Z$, $u \le v$ denotes $u(n) \le v(n)$ for all $n$.

**3. The model.** We introduce successively the $\cdot/\cdot/1/\infty/\text{FCFS}$ queue (Section 3.1), the $G/G/1/\infty/\text{FCFS}$ queue (Section 3.2), and the infinite tandem $G/G/1/\infty/\text{FCFS} \to \cdot/GI/1/\infty/\text{FCFS} \to \cdots$ (Section 3.3). The presentation is made in an abstract and functional way. However, to help intuition, we use queueing terminology and notation.

**3.1. The single queue.** Define the mapping

$$ (2) \qquad \begin{aligned} \Psi : \mathbb{R}_+^Z \times \mathbb{R}_+^Z &\rightarrow \mathbb{R}_+^Z \cup \{(+\infty)^Z\}, \\ (a,s) &\mapsto w = \Psi(a,s), \end{aligned} $$

with

$$ (3) \qquad w(n) = \Psi(a, s)(n) = \left[ \sup_{j \le n-1} \sum_{i=j}^{n-1} \big(s(i) - a(i)\big) \right]^+. $$

A priori, $\Psi$ is valued in $[0, \infty]^Z$, but it is easily checked using (5) below that $\Psi$ actually takes values in $\mathbb{R}_+^Z \cup \{(+\infty)^Z\}$. The map $\Psi$ computes the workloads ($w$) from the inter-arrivals ($a$) and the services ($s$). Observe that we have, for $m < n$ (Lindley's equations),

$$ (4) \qquad w(n) = [w(n-1) + s(n-1) - a(n-1)]^+, $$

$$ (5) \qquad w(n) = \left[ \max_{m<j \le n-1} \sum_{i=j}^{n-1} \big(s(i) - a(i)\big) \right] \vee \left[ w(m) + \sum_{i=m}^{n-1} \big(s(i) - a(i)\big) \right] \vee 0. $$

Define the mapping

$$ (6) \qquad \begin{aligned} \Phi : \mathbb{R}_+^Z \times \mathbb{R}_+^Z &\rightarrow \mathbb{R}_+^Z, \\ (a,s) &\mapsto d = \Phi(a,s), \end{aligned} $$

with

$$ (7) \qquad d(n) = \Phi(a,s)(n) = [a(n) - s(n) - \Psi(a,s)(n)]^+ + s(n+1). $$

Let $L: \mathbb{R}_+^Z \rightarrow \mathbb{R}_+^Z$ denote the translation shift: $Lu(n) = u(n+1)$. Equation (7) can be rewritten as $d = [a - s - \Psi(a,s)]^+ + Ls$. Observe that $d = Ls$ when $\Psi(a,s) = (+\infty)^Z$. In particular, $d$ is always finite. The function $\Phi$ maps the ordered pair formed by the inter-arrival and service processes into the inter-departure process.

When $w \in \mathbb{R}_+^Z$, the above equations yield

$$ (8) \qquad \forall n, \quad d(n) = a(n) + w(n+1) - w(n) + s(n+1) - s(n), $$

or equivalently: $\Phi(a,s) = a + L\Psi(a,s) - \Psi(a,s) + Ls - s$.
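
As an illustration (ours, not part of the original development), the finite-horizon analogues of the maps $\Psi$ and $\Phi$ are straightforward to code, and the identity (8) can then be checked numerically. The function names and sample data below are our own; the recursion is started from $w(0) = 0$, which amounts to restricting the supremum in (3) to $j \ge 0$.

```python
def workloads(a, s):
    """w[n] = [w[n-1] + s[n-1] - a[n-1]]^+, Lindley's recursion (4), with w[0] = 0."""
    w = [0.0]
    for an, sn in zip(a, s):
        w.append(max(w[-1] + sn - an, 0.0))
    return w  # length len(a) + 1

def departures(a, s):
    """d[n] = [a(n) - s(n) - w(n)]^+ + s(n+1), equation (7)."""
    w = workloads(a, s)
    return [max(a[n] - s[n] - w[n], 0.0) + s[n + 1] for n in range(len(a) - 1)]

# Arbitrary sample data (hypothetical inter-arrival and service times).
a = [1.0, 2.0, 0.5, 3.0, 1.0]
s = [0.8, 1.5, 1.2, 0.4, 0.9]
w = workloads(a, s)
d = departures(a, s)
# Identity (8): d(n) = a(n) + w(n+1) - w(n) + s(n+1) - s(n), valid when w is finite.
assert all(abs(d[n] - (a[n] + w[n + 1] - w[n] + s[n + 1] - s[n])) < 1e-12
           for n in range(len(d)))
```

The final assertion holds for any finite input, since (8) follows algebraically from (4) and (7) via $[x]^+ - [-x]^+ = x$.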

The functions $\Psi$ and $\Phi$ are, respectively, decreasing and increasing with respect to the first variable:

$$ (9) \qquad \forall a, b \in \mathbb{R}_+^Z, \forall s \in \mathbb{R}_+^Z, \quad a \le b \implies \Psi(a,s) \ge \Psi(b,s), \; \Phi(a,s) \le \Phi(b,s). $$

In words, increasing the inter-arrival times increases the inter-departure times and decreases the workloads.

**3.2. The stationary queue and Loynes' results.** Consider a measurable and $P$-stationary shift $\theta: \Omega \to \Omega$ and denote by $\mathfrak{T}$ the invariant $\sigma$-algebra. Consider the random processes $A: \Omega \to \mathbb{R}_+^Z$ and $S: \Omega \to \mathbb{R}_+^Z$. Assume that $A$ and $S$ are compatible with $\theta$ (hence stationary), and have a finite and nonzero (one-dimensional) mean. Set $W = \Psi(A, S)$ and $D = \Phi(A, S)$.

This model is called a *stationary queue*. When the shift $\theta$ is ergodic, the model is an *ergodic queue*. Lastly, when the service process $S$ is i.i.d. and nonconstant, the model is called an *i.i.d. queue*. The case of a queue with a constant service process is left out, since the fixed point problems are trivial in this case.

The following results are standard and due to Loynes [15]. The processes $W$ and $D$ are clearly compatible with $\theta$, hence stationary. We distinguish three cases.

*The stable case.* On the event $\{E[S(0)|\mathfrak{T}] < E[A(0)|\mathfrak{T}]\}$, we have $W \in \mathbb{R}_+^Z$ and $E[D(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]$. On this event, the queue preserves pathwise means.

*The unstable case.* On the event $\{E[S(0)|\mathfrak{T}] > E[A(0)|\mathfrak{T}]\}$, we have $W = (\infty)^Z$ and $D = LS$ [i.e., $\forall n, D(n) = S(n+1)$].

*The critical case.* On the event $\{E[S(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\}$, we have $D=LS$ and anything may happen for $W$. For instance, if $A=S=(c)^Z$ for $c \in \mathbb{R}_+$, then $W=(0)^Z$. If $S$ is i.i.d. and nonconstant and $A \perp S$, then $W=(\infty)^Z$.

Observe that a consequence of the above is that

$$ \{E[D(0)|\mathfrak{T}] = E[A(0)|\mathfrak{T}]\} = \{E[S(0)|\mathfrak{T}] \le E[A(0)|\mathfrak{T}]\} $$

(more rigorously, the symmetric difference of the two events has 0 probability).

When the shift $\theta$ is ergodic, we are a.s. in the stable case when $E[S(0)] < E[A(0)]$, in the unstable case when $E[S(0)] > E[A(0)]$, and in the critical case when $E[S(0)] = E[A(0)]$.
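
The stable/unstable dichotomy is easy to observe in a quick simulation. The sketch below is our illustration (exponential inter-arrivals of mean $\alpha$ and exponential services of mean $\beta$, with arbitrary parameters), iterating Lindley's recursion (4): in the stable regime the workload remains of order one, while in the unstable regime it grows at rate $\beta - \alpha$.

```python
import random

def final_workload(alpha, beta, n, seed):
    """Iterate Lindley's recursion (4) for n steps, starting from w = 0."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(n):
        a = rng.expovariate(1.0 / alpha)  # inter-arrival time, mean alpha
        s = rng.expovariate(1.0 / beta)   # service time, mean beta
        w = max(w + s - a, 0.0)           # Lindley's recursion (4)
    return w

n = 100_000
stable = final_workload(alpha=2.0, beta=1.0, n=n, seed=0)    # E[S] < E[A]
unstable = final_workload(alpha=1.0, beta=2.0, n=n, seed=0)  # E[S] > E[A]
# stable stays O(1); unstable / n is close to the drift beta - alpha = 1.
```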

Let $\sigma$ be the law of $S$. Define

$$ (10) \qquad \begin{aligned} \Phi_{\sigma}: \mathcal{M}_s(\mathbb{R}_+^Z) &\to \mathcal{M}_s(\mathbb{R}_+^Z), \\ \mu &\mapsto \Phi_{\sigma}(\mu), \end{aligned} $$

where $\Phi_{\sigma}(\mu)$ is the law of $\Phi(A, S)$, where $A \sim \mu$, $S \sim \sigma$ and $A \perp S$. The map $\Phi_{\sigma}$ is called the *queueing map*. A distribution $\mu$ such that $\Phi_{\sigma}(\mu) = \mu$ is called a *fixed point* for the queue. If the inter-arrival process is distributed as a fixed point $\mu$, then so is the inter-departure process. Consider now an ergodic queue. Rephrasing Loynes' results, with $\beta = E[S(0)]$, we get

$$ \begin{aligned} \forall \alpha > \beta, \quad &\Phi_{\sigma}: \mathcal{M}_{e}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) \rightarrow \mathcal{M}_{e}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}), \\ \forall \alpha \leq \beta, \quad &\Phi_{\sigma}: \mathcal{M}_{e}^{\alpha}(\mathbb{R}_{+}^{\mathbb{Z}}) \rightarrow \{\sigma\}. \end{aligned} $$

In particular, we have $\Phi_{\sigma}(\sigma) = \sigma$. We say that $\sigma$ is a *trivial* fixed point for the ergodic queue.

Below, the main objective is to get nontrivial fixed points for $\Phi_{\sigma}$ in the special case of an i.i.d. queue. More precisely, we want to address the following question: for any $\alpha > \beta$, does there exist a fixed point which is ergodic and of mean $\alpha$?

3.3. *Stable i.i.d. queues in tandem.* Consider a family $\{S(n, k), n \in \mathbb{Z}, k \in \mathbb{N}\}$ of i.i.d. random variables valued in $\mathbb{R}_+$ with $E[S(0, 0)] = \beta \in \mathbb{R}_+^*$. Assume that $S(0, 0)$ is nonconstant, that is, $P\{S(0, 0) = \beta\} < 1$. For $k$ in $\mathbb{N}$, define $S^k: \Omega \to \mathbb{R}_+^\mathbb{Z}$ by $S^k = (S(n,k))_{n \in \mathbb{Z}}$. Let $\sigma$ be the distribution of $S^k$. Consider $A^0 = (A(n,0))_{n \in \mathbb{Z}}: \Omega \to \mathbb{R}_+^\mathbb{Z}$ and assume that $A^0$ is stationary, independent of $S^k$ for all $k$, and satisfies $E[A(0,0)] = \alpha \in \mathbb{R}_+^*$. Let $\theta$ be a $P$-stationary shift such that $A^0$ and $S^k$ for all $k$ are compatible with $\theta$. Let $\mathfrak{T}$ be the corresponding invariant $\sigma$-algebra. We assume that the stability condition $\beta < E[A(0,0)|\mathfrak{T}]$ holds a.s.

Define recursively, for all $k \in \mathbb{N}$,

$$ (11) \qquad W^k = (W(n,k))_{n \in \mathbb{Z}} = \Psi(A^k, S^k), $$

$$ (12) \qquad A^{k+1} = (A(n, k+1))_{n \in \mathbb{Z}} = \Phi(A^k, S^k). $$

The random processes $A^k$, $S^k$ and $W^k$ are, respectively, the inter-arrival, service and workload processes at queue $k$. The random process $A^{k+1}$ is the inter-departure process at queue $k$ and the inter-arrival process at queue $k+1$. Each $(A^k, S^k)$ defines a stable i.i.d. queue according to the terminology of Section 3.2. Globally, this model is called a *tandem of stable i.i.d. queues*.

The sequence $(A^k)_k$ is a Markov chain on the state space $\mathbb{R}_+^\mathbb{Z}$. Clearly, $\mu$ is a stationary distribution of $(A^k)_k$ if and only if $\mu$ is a fixed point for the queue, that is, iff $\Phi_\sigma(\mu) = \mu$. Hence, the problem to be solved can be rephrased as: does the Markov chain $(A^k)_k$ admit nontrivial stationary distributions?
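
For intuition, a few stages of the tandem can be simulated directly (our sketch, with arbitrary exponential distributions, not taken from the paper): the inter-departures of each stable queue feed the next one, as in (11)-(12), and the empirical mean inter-departure time stays close to $\alpha$ at every stage, as the mean preservation property predicts.

```python
import random

def queue_stage(a, s):
    """Finite-horizon version of Phi in (7), started from workload w = 0."""
    w, d = 0.0, []
    for n in range(len(a) - 1):
        d.append(max(a[n] - s[n] - w, 0.0) + s[n + 1])  # equation (7)
        w = max(w + s[n] - a[n], 0.0)                   # Lindley's recursion (4)
    return d

rng = random.Random(1)
alpha, beta, n = 2.0, 1.0, 200_000
a = [rng.expovariate(1.0 / alpha) for _ in range(n)]
for _ in range(3):                                  # three queues in tandem
    s = [rng.expovariate(1.0 / beta) for _ in range(len(a))]
    a = queue_stage(a, s)                           # departures feed the next queue
mean_out = sum(a) / len(a)                          # close to alpha = 2.0
```

Each stage shortens the stream by one term, so after three stages the stream has length $n - 3$.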

**4. Uniqueness of fixed points and convergence.** In this section, we recall several results about the uniqueness of fixed points, as well as convergence results. Together with the existence results to be proved in Section 5, the results recalled here complete the picture concerning fixed point theorems. More importantly, they will be instrumental in several of the later proofs.

THEOREM 4.1 ([2, 17]). Consider the stable i.i.d. tandem model defined in Section 3.3 with an ergodic inter-arrival process of mean $\alpha > \beta$. Assume that

$$ (13) \qquad \int_{\mathbb{R}_+} P\{S(0, 0) \ge u\}^{1/2} \, du < \infty. $$

Then there exists $M(\alpha) \in \mathbb{R}_+$ such that almost surely $\lim_{n \to +\infty} n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$, where $M(\alpha) = \sup_{x \ge 0} (\gamma(x) - \alpha x)$ and the function $\gamma: \mathbb{R}_+ \to \mathbb{R}_+$ depends only on the service process. If we further assume that the initial inter-arrival process satisfies

$$ (14) \qquad \exists c, \quad E[S(0, 0)] < c < E[A(0, 0)], \quad E\left[\sup_{n \in \mathbb{N}^*}\left[\sum_{i=-n}^{-1} \big(c - A(i, 0)\big)\right]^+\right] < \infty, $$

then the convergence to $M(\alpha)$ also holds in $L_1$.

Observe that $M(\alpha)$ depends on the inter-arrival process only via its mean. The function $\gamma$ in Theorem 4.1 is continuous, strictly increasing, concave and satisfies $\gamma(0) = 0$. For details on $\gamma$, refer to [2, 12].

Theorem 4.1 is proved in [2] under the condition $E[S(0, 0)^{3+a}] < \infty$ for some $a > 0$. The above version is proved in [17] (using methods similar to those of [2]) and is better, since we have

$$ \big[\exists a > 0,\ E[S(0, 0)^{2+a}] < \infty\big] \implies \int P\{S(0, 0) \ge u\}^{1/2} \, du < \infty \implies E[S(0, 0)^2] < \infty. $$
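
For instance (our numerical check, not from the paper), exponential services of mean $\beta$ satisfy (13): $P\{S(0,0) \ge u\} = e^{-u/\beta}$, so the integrand is $e^{-u/(2\beta)}$ and the integral equals $2\beta < \infty$. A crude Riemann sum confirms this:

```python
import math

# Left Riemann sum of exp(-u / (2 * beta)) on [0, 50]; the neglected tail is
# below exp(-25), and the discretization error is of order du.
beta = 1.0
du = 1e-4
upper = 50.0
integral = sum(math.exp(-i * du / (2.0 * beta)) * du
               for i in range(int(upper / du)))
# integral is close to 2 * beta = 2.0
```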

Condition (14) is slightly stronger than $E[W(0, 0)] < \infty$. Indeed, recall the following results from [9]. If $E[S(0, 0)^2] < \infty$, then, setting $\beta = E[S(0, 0)]$,

$$ (15) \qquad \begin{aligned} \exists c > \beta, \; E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} \big(c - A(i, 0)\big)\right]^+\right] < \infty &\implies E[W(0, 0)] < \infty, \\ E[W(0, 0)] < \infty &\implies E\left[\sup_{n \ge 1}\left[\sum_{i=-n}^{-1} \big(\beta - A(i, 0)\big)\right]^+\right] < \infty. \end{aligned} $$

Condition (14) is satisfied, for example, by the deterministic process $A^0$ with $P\{A^0 = (\alpha)^{\mathbb{Z}}\} = 1$.

The next result requires some preparation. Let $\mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z)$ be the set of random processes $(X(n), Y(n))_{n \in \mathbb{Z}}$ which are stationary in $n$. Consider $\mu$ and $\nu$ in $\mathcal{M}_s(\mathbb{R}_+^Z)$ and let $\mathcal{D}(\mu, \nu) = \{(X, Y) \in \mathcal{L}_s(\mathbb{R}_+^Z \times \mathbb{R}_+^Z) \mid X \sim \mu, Y \sim \nu\}$. That is, $\mathcal{D}(\mu, \nu)$ is the set of jointly stationary processes whose marginals are distributed as $\mu$ and $\nu$. The $\bar{\rho}$ distance between $\mu$ and $\nu$ is given by

$$ (16) \qquad \bar{\rho}(\mu, \nu) = \inf_{(X,Y) \in \mathcal{D}(\mu,\nu)} E[|X(0) - Y(0)|]. $$
See Gray [13], Chapter 8, for a proof that $\bar{\rho}$ is indeed a distance. Given two r.v.'s $A$ and $B$ with respective laws $\mu$ and $\nu$, set $\bar{\rho}(A, B) = \bar{\rho}(\mu, \nu)$. We recall a well-known fact (see also Section 7): convergence in the $\bar{\rho}$ distance implies weak convergence, but not conversely.
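
To make (16) concrete, here is our illustration (not from the paper): for two i.i.d. processes, the comonotone coupling $X(n) = F^{-1}(U_n)$, $Y(n) = G^{-1}(U_n)$, built from the same i.i.d. uniforms $U_n$, is jointly stationary, so the resulting $E[|X(0) - Y(0)|]$ upper-bounds $\bar{\rho}$. For exponential marginals of means 1 and 2 it gives the value 1.

```python
import math
import random

# Monte Carlo estimate of E|X(0) - Y(0)| under the comonotone coupling of two
# i.i.d. exponential processes with means 1 and 2. Both coordinates use the
# same uniform via the inverse cdf, so X(0) <= Y(0) always and the expected
# gap is E[Y(0)] - E[X(0)] = 1.
rng = random.Random(7)
n = 200_000
est = 0.0
for _ in range(n):
    u = rng.random()
    x = -math.log(1.0 - u)        # Exp(mean 1), inverse-cdf sampling
    y = -2.0 * math.log(1.0 - u)  # Exp(mean 2), same uniform: comonotone
    est += (y - x) / n
```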

**THEOREM 4.2 ([8, 19]).** Consider a stationary queue as in Section 3.2 with service process $S$ and two inter-arrival processes $A$ and $B$, possibly of different means. Assume that $A \perp S$ and $B \perp S$. Then,

$$ (17) \qquad \bar{\rho}(\Phi(A, S), \Phi(B, S)) \le \bar{\rho}(A, B). $$

Consider now a stable i.i.d. tandem model as in Section 3.3 with inter-arrival processes $A^0$ and $B^0$ with different laws but such that $E[A(0,0)|\mathfrak{T}] = E[B(0,0)|\mathfrak{T}]$ a.s. Recall that $(A^n)_n$ and $(B^n)_n$ are defined recursively by $A^{n+1} = \Phi(A^n, S^n)$ and $B^{n+1} = \Phi(B^n, S^n)$. Then there exists $k \in \mathbb{N}^*$ such that

$$ (18) \qquad \bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0). $$

If we further assume that $B^1 = \Phi(B^0, S^0) \sim B^0$, then

$$ (19) \qquad \lim_{n \to +\infty} \bar{\rho}(A^n, B^0) = 0, \quad \text{and hence} \quad A^n \xrightarrow{w} B^0. $$

Chang [8] gives an elegant proof of (17). He also proves (18) for bounded services. Prabhakar [19] removes this restriction and also establishes (19). As opposed to Theorem 4.1, observe that the convergence result in (19) is proved under the a priori assumption of the existence of a fixed point.

Define ("$p{:}\alpha$" stands for "pathwise means are equal to $\alpha$")

$$ (20) \qquad \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) = \left\{ \mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z) \,\Big|\, X \sim \mu \Rightarrow \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} X(i) = \alpha \text{ a.s.} \right\}. $$

Obviously, $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z) \subset \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z) \subset \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$. The ergodic components of $\chi \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ all have one-dimensional mean $\alpha$. An important consequence of (18) is the following uniqueness result.

**COROLLARY 4.3.** Consider an i.i.d. queue as in Section 3.2. The corresponding queueing map $\Phi_\sigma$ has at most one fixed point in $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ for $\alpha > E[S(0)]$.

In particular, there is at most one fixed point in $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. In fact, we have the following stronger result.

PROPOSITION 4.4. Consider an i.i.d. queue as in Section 3.2 and $\alpha > E[S(0)]$. If $\zeta \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$ is a fixed point, then it is necessarily ergodic; that is, $\zeta \in \mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$.

PROOF. Suppose that the ergodic decomposition of $\zeta$ is given by $\zeta = \int \mu\,\Gamma(d\mu)$, where $\Gamma$ is a probability measure on $\mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. Denote the support of $\Gamma$ by $\text{supp}(\Gamma) \subset \mathcal{M}_e^\alpha(\mathbb{R}_+^Z)$. Assume that $\zeta$ is nonergodic, meaning that $\text{supp}(\Gamma)$ is not a singleton. Let $S$ be a subset of $\text{supp}(\Gamma)$ such that $0 < \Gamma\{S\} < 1$.

Consider a stable i.i.d. tandem model as in Section 3.3. Let $A^0$ and $B^0$ be two inter-arrival processes, independent of the services, and such that $A^0 \sim \zeta$, $B^0 \sim \zeta$, $A^0 \perp B^0$. Define $(A^k)_k$ and $(B^k)_k$ as in (12). Let $C_b(\mathbb{R}_+^Z)$ be the set of continuous and bounded functions from $\mathbb{R}_+^Z$ to $\mathbb{R}$. Recall that $L$ is the left translation shift of $\mathbb{R}_+^Z$ and define recursively $L^{i+1} = L \circ L^i$. Define the $\theta$-invariant events

$$ A = \left\{ \exists \mu \in S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i A^0) = \int f \, d\mu \right\}, $$

$$ B = \left\{ \exists \mu \in \text{supp}(\Gamma) \setminus S, \forall f \in C_b(\mathbb{R}_+^Z), \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} f(L^i B^0) = \int f \, d\mu \right\}. $$

Roughly speaking, on the event $A \cap B$, the processes $A^0$ and $B^0$ are distributed according to different components of the ergodic decomposition of $\zeta$. Using the independence of $A^0$ and $B^0$, we have

$$ P\{A \cap B\} = P\{A\}P\{B\} = \Gamma\{S\}(1 - \Gamma\{S\}) > 0. $$

Define the processes

$$ \tilde{A}^0 = A^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}, \quad \tilde{B}^0 = B^0 1_{A \cap B} + (\alpha)^{\mathbb{Z}} 1_{(A \cap B)^c}. $$

By construction, the laws of $\tilde{A}^0$ and $\tilde{B}^0$ are different and we have $E[\tilde{A}(0, 0)|\mathfrak{T}] = E[\tilde{B}(0, 0)|\mathfrak{T}] = \alpha$ almost surely. Hence we can apply (18) in Theorem 4.2: there exists $k \in \mathbb{N}^*$ such that $\bar{\rho}(\tilde{A}^k, \tilde{B}^k) < \bar{\rho}(\tilde{A}^0, \tilde{B}^0)$. We deduce easily that $\bar{\rho}(A^k, B^k) < \bar{\rho}(A^0, B^0)$. This is in obvious contradiction with $\bar{\rho}(A^0, B^0) = 0$, which follows from $A^0 \sim B^0$. We conclude that the support of $\Gamma$ is a singleton. $\square$

**5. Existence of fixed points.** Consider the stable i.i.d. tandem model of Section 3.3. The objective is to prove Theorem 5.1, that is, to obtain nontrivial stationary distributions for $(A^k)_k$, or equivalently nontrivial fixed points for $\Phi_\sigma$.

The first step is classical and consists of considering Cesàro averages of the laws of $A^k$. Consider the quadruple $(A^k, S^k, W^k, A^{k+1})$ and denote its law by $\nu_k \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$. For $n \in \mathbb{N}^*$, define $\mu_n \in \mathcal{M}(\mathbb{R}_+^Z \times \mathbb{R}_+^Z \times [0, \infty]^Z \times \mathbb{R}_+^Z)$ by

$$\mu_n = \frac{1}{n} \sum_{k=0}^{n-1} \nu_k.$$

The following interpretation may be useful: $\mu_n$ is the law of $(A^N, S^N, W^N, A^{N+1})$ where $N$ is a r.v. uniformly distributed over $\{0, \dots, n-1\}$ and independent of all the other r.v.'s of the problem.

For all $n \in \mathbb{N}^*$, consider a quadruple of random processes $(\hat{A}^n, \hat{S}^n, \hat{W}^n, \hat{D}^n)$ distributed according to $\mu_n$. We have

$$ (21) \qquad \hat{S}^n \sim \sigma, \quad \hat{S}^n \perp \hat{A}^n, \quad \hat{W}^n = \Psi(\hat{A}^n, \hat{S}^n), \quad \hat{D}^n = \Phi(\hat{A}^n, \hat{S}^n). $$

First of all, we argue that the sequence $(\mu_n)_n$ is tight. Denote by $\mu_n^{(1)}$, $\mu_n^{(2)}$, $\mu_n^{(3)}$ and $\mu_n^{(4)}$ the marginals of $\mu_n$ corresponding respectively to the laws of $\hat{A}^n$, $\hat{S}^n$, $\hat{W}^n$ and $\hat{D}^n$. Since $\mu_n^{(3)}$ is defined on the compact space $[0, \infty]^Z$ and since $\mu_n^{(2)} = \sigma$, the only point to be argued is that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight. According to Loynes' results, we have $\mu_n^{(1)}, \mu_n^{(4)} \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$ [we even have $\mu_n^{(1)}, \mu_n^{(4)} \in \mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^Z)$]. For $\varepsilon > 0$, the set $K = \prod_{i \in \mathbb{Z}}[0, 2^{|i|+2}/\varepsilon]$ is compact in the product topology according to Tychonoff's theorem. It is immediate to check, using Markov's inequality, that for $\mu \in \mathcal{M}_s^\alpha(\mathbb{R}_+^Z)$, we have $\mu\{K\} \ge 1 - \alpha\varepsilon$. We conclude that $(\mu_n^{(1)})_n$ and $(\mu_n^{(4)})_n$ are tight.
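
The bound behind this tightness argument can be sanity-checked on a finite window (our illustration, with i.i.d. exponential coordinates of mean $\alpha$; any process with one-dimensional mean $\alpha$ obeys the same bound): Markov's inequality plus a union bound give $P\{\exists i : X(i) > 2^{|i|+2}/\varepsilon\} \le \sum_i \alpha\varepsilon\, 2^{-|i|-2} \le \alpha\varepsilon$.

```python
import random

# Empirical frequency of escaping the truncated box, compared with the
# union-plus-Markov bound alpha * eps.
rng = random.Random(3)
alpha, eps, window, trials = 1.0, 0.5, 10, 20_000
escapes = 0
for _ in range(trials):
    if any(rng.expovariate(1.0 / alpha) > 2 ** (abs(i) + 2) / eps
           for i in range(-window, window + 1)):
        escapes += 1
# escapes / trials is far below the bound alpha * eps = 0.5 here
```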

Consequently, by Prohorov's theorem, $(\mu_n)_n$ admits weakly converging subsequences. Let $\mu$ be a subsequential limit of $(\mu_n)_n$. Consider a quadruple of random processes

$$ (22) \qquad (\hat{A}, \hat{S}, \tilde{W}, \tilde{D}) \sim \mu. $$

It follows immediately from (21) that

$$ (23) \qquad \hat{S} \sim \sigma, \quad \hat{S} \perp \hat{A}. $$

Recall that we have $\hat{D}^n = [\hat{A}^n - \hat{S}^n - \hat{W}^n]^+ + L\hat{S}^n$. By the continuous mapping theorem, we deduce that

$$ (24) \qquad \tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}. $$

On the other hand, it is not a priori true that $\tilde{W} = \Psi(\hat{A}, \hat{S})$ and $\tilde{D} = \Phi(\hat{A}, \hat{S})$ (which is the reason for the notation $\hat{A}, \hat{S}$ on the one side and $\tilde{W}, \tilde{D}$ on the other). Using (5), we have, for all $k < l-1$,

$$ \begin{aligned} &\left[ \max_{k<j \le l-1} \sum_{i=j}^{l-1} \big(\hat{S}^n(i) - \hat{A}^n(i)\big) \right]^+ \\ &\qquad \le \hat{W}^n(l) = \left[ \max_{k<j \le l-1} \sum_{i=j}^{l-1} \big(\hat{S}^n(i) - \hat{A}^n(i)\big) \right]^+ \vee \left[ \hat{W}^n(k) + \sum_{i=k}^{l-1} \big(\hat{S}^n(i) - \hat{A}^n(i)\big) \right]. \end{aligned} $$

By the continuous mapping theorem, we get

$$ (25) \qquad \begin{aligned} &\left[ \max_{k<j \le l-1} \sum_{i=j}^{l-1} \big(\hat{S}(i) - \hat{A}(i)\big) \right]^{+} \\ &\qquad \le \tilde{W}(l) = \left[ \max_{k<j \le l-1} \sum_{i=j}^{l-1} \big(\hat{S}(i) - \hat{A}(i)\big) \right]^{+} \vee \left[ \tilde{W}(k) + \sum_{i=k}^{l-1} \big(\hat{S}(i) - \hat{A}(i)\big) \right]. \end{aligned} $$

By letting $k$ go to $-\infty$, and using (24), we conclude that

$$ (26) \qquad \Psi(\hat{A}, \hat{S}) \le \tilde{W}, \qquad L\hat{S} \le \tilde{D} \le \Phi(\hat{A}, \hat{S}). $$

The right-hand side equality in (25) also shows that $\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}} \cup \{(\infty)^{\mathbb{Z}}\}$ (a priori the definition only implied $\tilde{W} \in [0, \infty]^{\mathbb{Z}}$).

The next argument, which uses properties of Cesàro averages to show that $\hat{A} \sim \tilde{D}$, is standard. Let $\zeta$ be the distribution of $A^0$. We have by definition $A^n \sim \Phi_\sigma^n(\zeta)$ and $\hat{A}^n \sim n^{-1} \sum_{i=0}^{n-1} \Phi_\sigma^i(\zeta) = \zeta_n$. We have

$$ \hat{D}^n = \Phi(\hat{A}^n, \hat{S}^n) \sim \Phi_\sigma(\zeta_n) = \zeta_n + \frac{1}{n}\big(\Phi_\sigma^n(\zeta) - \zeta\big), $$

where the right-hand side equality makes sense as an equality between signed measures. We deduce that $\Phi_\sigma(\zeta_n) - \zeta_n$ converges in total variation, hence also weakly, to the zero measure (here we consider weak and total variation convergence of signed measures). There is a subsequence along which $\zeta_n$, respectively $\Phi_\sigma(\zeta_n)$, converges to the law of $\hat{A}$, respectively $\tilde{D}$. We conclude that

$$ (27) \qquad \hat{A} \sim \tilde{D}. $$

Now if we manage to prove that $\tilde{D} = \Phi(\hat{A}, \hat{S})$, we can conclude that the law of $\hat{A}$ is a fixed point for the queue. We now turn our attention to proving this last and tricky point.

Stationarity is preserved by weak convergence. Hence the law of $(\hat{A}(n), \hat{S}(n), \tilde{W}(n), \tilde{D}(n))_n$ is stationary in $n$. Let $\theta$ be a stationary shift on the underlying probability space such that $(\hat{A}, \hat{S}, \tilde{W}, \tilde{D})$ is compatible with $\theta$. Let $\mathfrak{T}$ be the corresponding invariant $\sigma$-algebra.

Using (26) and (27), we deduce that $\hat{A} \ge_{\text{st}} \hat{S}$. In particular, $E[\hat{A}(0)|\mathfrak{T}] \ge \beta$ a.s., using (1). Define the events

$$ \mathcal{A} = \{ E[\hat{A}(0)|\mathfrak{T}] = \beta \}, \qquad \mathcal{A}^c = \{ E[\hat{A}(0)|\mathfrak{T}] > \beta \}. $$

Using Loynes' results for the critical case, we have $\Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$. Now using (26), we deduce that $\tilde{D} = \Phi(\hat{A}, \hat{S}) = L\hat{S}$ on the event $\mathcal{A}$.

Since $\hat{A} \ge_{\text{st}} \hat{S}$ and $\hat{A} \perp \hat{S}$, we have, according to (1),

$$ \hat{A} = \bar{S}\, 1_{\mathcal{A}} + \hat{A}\, 1_{\mathcal{A}^c}, $$

where $\bar{S} \sim \hat{S}$. Furthermore, we have just proved that

$$ \tilde{D} = L\hat{S}\, 1_{\mathcal{A}} + \tilde{D}\, 1_{\mathcal{A}^c}. $$

Since $\hat{A} \sim \tilde{D}$, we deduce readily that $\hat{A} 1_{\mathcal{A}^c} \sim \tilde{D} 1_{\mathcal{A}^c}$. On the event $\mathcal{A}^c$, we have, using Birkhoff's ergodic theorem,

$$ \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \hat{A}(i) = E[\hat{A}(0)|\mathfrak{T}] > \beta \implies \lim_{n \to \infty} \frac{1}{n} \sum_{i=-n}^{-1} \tilde{D}(i) > \beta. $$

In view of $\tilde{D} = [\hat{A} - \hat{S} - \tilde{W}]^+ + L\hat{S}$, we deduce that on $\mathcal{A}^c$, we have $\tilde{W} \in \mathbb{R}_+^Z$ a.s. For $k < l-1$, set $Z_k = \tilde{W}(k) + \sum_{i=k}^{l-1} \big(\hat{S}(i) - \hat{A}(i)\big)$. Using Birkhoff's ergodic theorem, on the event $\mathcal{A}^c$, $Z_k$ converges in probability to $-\infty$ as $k$ goes to $-\infty$. Going back to the inequalities in (25), it follows that on the event $\mathcal{A}^c$,

$$ \tilde{W}(l) = \left[ \sup_{j \le l-1} \sum_{i=j}^{l-1} \big(\hat{S}(i) - \hat{A}(i)\big) \right]^+ = \Psi(\hat{A}, \hat{S})(l). $$

This implies that on the event $\mathcal{A}^c$, we have $\tilde{D} = \Phi(\hat{A}, \hat{S})$. Summarizing all of the above, we have proved that

$$ (28) \qquad \tilde{D} = \Phi(\hat{A}, \hat{S}) \quad \text{a.s.} $$

Let $\zeta$ be the law of $\hat{A}$ and $\tilde{D}$. We have just proved that $\Phi_\sigma(\zeta) = \zeta$. The only point left is to find conditions ensuring that $\zeta$ is not equal to the trivial fixed point $\sigma$.

**REMARK (Ergodic queues in tandem).** Up to this point in the proof, the assumption that the service processes are i.i.d. has not been used. All of the above remains valid if we assume only that $\sigma \in \mathcal{M}_e^\beta(\mathbb{R}_+^Z)$ (still assuming that the service processes $S^k \sim \sigma$ are independent of one another and independent of $A^0$). From now on, the i.i.d. assumption becomes central.

First, we need to show that

$$ (29) \qquad \tilde{W} = \Psi(\hat{A}, \hat{S}) \quad \text{a.s.} $$

We have just proved the equality on $\mathcal{A}^c$; it remains to prove it on $\mathcal{A}$. Denote by $\mathcal{T}_{\hat{A}}$ and $\mathcal{T}_{\hat{S}}$ the $\sigma$-algebras generated respectively by $\hat{A}$ and $\hat{S}$. Clearly $\mathcal{A} \in \mathcal{T}_{\hat{A}}$, which implies that $\hat{A}1_{\mathcal{A}} = \bar{S}1_{\mathcal{A}}$ is measurable with respect to $\mathcal{T}_{\hat{A}}$. We conclude that we have: $\hat{A} \perp \hat{S} \Rightarrow \mathcal{T}_{\hat{A}} \perp \mathcal{T}_{\hat{S}} \Rightarrow \bar{S}1_{\mathcal{A}} \perp \hat{S}$. On the event $\mathcal{A}$, we have, for all $n$,

$$ \tilde{W}(n) \geq \Psi(\hat{A}, \hat{S})(n) = \left[ \sup_{j \le n-1} \sum_{i=j}^{n-1} \big(\hat{S}(i) - \bar{S}(i)\big) \right]^{+}. $$

Using that the services are i.i.d. and nonconstant, and that $\bar{S}$ and $\hat{S}$ are independent, we have on the event $\mathcal{A}$: $\tilde{W} = \Psi(\hat{A}, \hat{S}) = (\infty)^{\mathbb{Z}}$. In addition to (29), we have proved the following:

$$ (30) \qquad \begin{aligned} \tilde{W} &= (\infty)^{\mathbb{Z}} & &\text{on } \{E[\hat{A}(0)|\mathfrak{T}] = \beta\}, \\ \tilde{W} &\in \mathbb{R}_+^{\mathbb{Z}} & &\text{on } \{E[\hat{A}(0)|\mathfrak{T}] > \beta\}. \end{aligned} $$

Consequently, if $\tilde{W} = (\infty)^{\mathbb{Z}}$ a.s. then $\zeta = \sigma$, and if $P\{\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}}\} > 0$ then $\zeta$ is a nontrivial fixed point for the queue.

Assume now that the moment condition $\int P\{S(0, 0) \ge u\}^{1/2} \, du < \infty$ is satisfied. This is the condition needed in Theorem 4.1 to obtain that $\lim_n n^{-1} \sum_{i=0}^{n-1} W(0, i) = M(\alpha)$ a.s. for a finite constant $M(\alpha)$. Let us prove that

$$ (31) \qquad \lim_{n \to +\infty} \frac{1}{n} \sum_{i=0}^{n-1} W(0, i) = M(\alpha) \text{ a.s.} \implies \tilde{W}(0) \in \mathbb{R}_{+} \text{ a.s.} $$

We argue by contradiction; hence, suppose that $P\{\tilde{W}(0) = +\infty\} = a > 0$. Fix $K > 0$. Let $f$ be a strictly increasing function from $\mathbb{N}$ to $\mathbb{N}$ such that $\mu_{f(n)} \xrightarrow{w} \mu$. We have $\hat{W}^{f(n)}(0) \xrightarrow{w} \tilde{W}(0)$. Recall that $P\{\hat{W}^n(0) \ge K\} = n^{-1}\sum_{i=0}^{n-1} P\{W(0, i) \ge K\}$. We deduce that

$$ \forall b \in (0, a), \exists N, \forall n = f(k) \ge N, \quad \frac{1}{n} \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \ge b. $$

Fix $b \in (0, a)$, $c \in (0, b)$ and $n = f(k) \ge N$. Define the event $\mathcal{E} = \{n^{-1} \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} \ge c\}$ and set $q = P\{\mathcal{E}\}$. We have

$$ \sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}} = \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}} + \left(\sum_{i=0}^{n-1} 1_{\{W(0,i)\ge K\}}\right) 1_{\mathcal{E}^c} \le n 1_{\mathcal{E}} + nc\, 1_{\mathcal{E}^c}. $$

Taking expectations, we get
|
| 405 |
+
|
| 406 |
+
$$ nb \le \sum_{i=0}^{n-1} P\{W(0, i) \ge K\} \le nq + n(1-q)c. $$
|
| 407 |
+
|
| 408 |
+
We conclude that $q \ge (b-c)/(1-c) > 0$. Since this last inequality is valid for any $K$, we clearly have a contradiction with the a.s. convergence of $n^{-1}\sum_{i=0}^{n-1} W(0, i)$ to a finite constant.
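The last step is pure arithmetic: whenever $b \le q + (1-q)c$ with $0 < c < b < 1$, one gets $q \ge (b-c)/(1-c)$. A brute-force grid check of this implication (illustrative only, not part of the proof):

```python
# Sanity check (illustrative only, not part of the proof) of the counting
# bound used above: whenever b <= q + (1 - q) * c with 0 < c < b < 1,
# it follows that q >= (b - c) / (1 - c).
def bound_holds(b, c, q):
    if b <= q + (1 - q) * c:           # the averaged inequality
        return q >= (b - c) / (1 - c) - 1e-12
    return True                        # hypothesis not met: nothing to check

grid = [i / 50 for i in range(51)]
checks = all(
    bound_holds(b, c, q)
    for b in grid for c in grid for q in grid
    if 0 < c < b < 1
)
```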
We conclude that under the assumptions of Theorem 4.1, the fixed point $\zeta$ is nontrivial. Summarizing all of the above, we obtain the following result.

**THEOREM 5.1.** Consider a single server infinite buffer FCFS queue with an i.i.d. service process $S$ satisfying: $E[S(0)] \in \mathbb{R}_+^*$, $P\{S(0) = E[S(0)]\} < 1$ and $\int P\{S(0) \ge u\}^{1/2} \, du < \infty$. Then there exists an ergodic inter-arrival process $A$ with $A \perp S$ and $E[S(0)] < E[A(0)] < \infty$, and such that the corresponding inter-departure process $D$ has the same distribution as $A$.

PROOF. Consider a tandem of queues as in Section 3.3 where the service processes $S^k$ are distributed as $S$ with law $\sigma$. Consider the process $\hat{A}$ with law $\zeta$ as defined in (22). By the ergodic decomposition theorem and the linearity of $\Phi_\sigma$, we have

$$ \zeta = \int_{\mathcal{M}_c(\mathbb{R}_+^{\mathbb{Z}})} \chi \, \Gamma(d\chi), \qquad \Phi_\sigma(\zeta) = \int_{\mathcal{M}_c(\mathbb{R}_+^{\mathbb{Z}})} \Phi_\sigma(\chi) \, \Gamma(d\chi). $$

But $\zeta = \Phi_\sigma(\zeta)$. Therefore, the uniqueness of ergodic decompositions and the mean preservation property of stable queues imply that

$$ \zeta_\alpha = \int_{\mathcal{M}_c^\alpha(\mathbb{R}_+^{\mathbb{Z}})} \chi \, \Gamma(d\chi) = \int_{\mathcal{M}_c^\alpha(\mathbb{R}_+^{\mathbb{Z}})} \Phi_\sigma(\chi) \, \Gamma(d\chi) = \Phi_\sigma(\zeta_\alpha) $$

for every $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$. By Proposition 4.4, the distributions $\zeta_\alpha$ are ergodic. According to (31), which holds since $\int P\{S(0) \ge u\}^{1/2} \, du < \infty$, we have $P\{\tilde{W} \in \mathbb{R}_+^{\mathbb{Z}}\} = 1$, and $E[\hat{A}(0)|\mathfrak{T}] > E[S(0)]$ according to (30). Hence any $\alpha$ in the support of $E[\hat{A}(0)|\mathfrak{T}]$ is such that $\alpha > E[S(0)]$, and we conclude that the corresponding distribution $\zeta_\alpha \in \mathcal{M}_c^\alpha(\mathbb{R}_+^{\mathbb{Z}})$ satisfies $\Phi_\sigma(\zeta_\alpha) = \zeta_\alpha$. $\square$

To the best of our knowledge, this provides the first positive answer (apart from the cases of exponential and geometric service times) to the intriguing question of the existence of nontrivial ergodic fixed points for a $\cdot/GI/1/\infty$/FCFS queue.
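The exponential case alluded to here is Burke's theorem [7]: for i.i.d. exponential services, a Poisson arrival process is a fixed point. A short Monte Carlo sketch of this (illustrative only; the rates $\lambda = 1$, $\mu = 2$ are arbitrary choices, not from the paper) using the FCFS departure recursion:

```python
import random

# Monte Carlo sketch (not from the paper; rates lam = 1, mu = 2 are
# arbitrary) of the exponential fixed point, i.e. Burke's theorem [7]:
# Poisson arrivals into an M/M/1 FCFS queue yield departures whose
# inter-departure times again look exponential with the arrival rate.
random.seed(0)
lam, mu, n = 1.0, 2.0, 200_000

arrival, depart = 0.0, 0.0
inter_departures = []
for _ in range(n):
    arrival += random.expovariate(lam)
    # FCFS recursion: service starts once the customer has arrived
    # and the previous departure has occurred
    new_depart = max(depart, arrival) + random.expovariate(mu)
    inter_departures.append(new_depart - depart)
    depart = new_depart

mean_out = sum(inter_departures) / n                     # close to 1/lam = 1.0
frac_gt_1 = sum(t > 1.0 for t in inter_departures) / n   # exp(1): e^{-1} ~ 0.368
```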
**6. Values of the means for which a fixed point exists.** Consider a tandem of stable i.i.d. queues as in Section 3.3 and let $\Phi_\sigma$ be the corresponding queueing operator. Assume also that the condition (13) holds. Define

$$ (32) \qquad \mathcal{S} = \{\alpha \in (\beta, +\infty) \mid \exists \mu \in \mathcal{M}_c^\alpha(\mathbb{R}_+^{\mathbb{Z}}), \Phi_\sigma(\mu) = \mu\}. $$

According to Theorem 5.1, the set $\mathcal{S}$ is nonempty. We establish in Theorem 6.4 that $\mathcal{S}$ is unbounded and closed in $(\beta, \infty)$. We believe that $\mathcal{S} = (\beta, +\infty)$, but we have not been able to prove this last point (see Conjecture 6.6). Proposition 6.5 also describes the limiting behavior resulting from inputting in the tandem an ergodic inter-arrival process whose mean $\alpha$ does not belong to $\mathcal{S}$ (the case $\alpha \in \mathcal{S}$ is settled by Theorem 4.2).

From now on, for $\alpha \in \mathcal{S}$, denote by $\zeta_\alpha$ the unique ergodic fixed point of mean $\alpha$ and by $A_\alpha$ an inter-arrival process distributed as $\zeta_\alpha$. Let $S$ be distributed as $\sigma$ and independent of all other r.v.'s. Also it is convenient to denote by $\mathcal{L}(A)$ the law of a r.v. $A$, and by $\operatorname{supp} A$ its support.

The following argument is used several times. Consider $\alpha \in \mathcal{S}$ and let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. According to (19), we have $A^n \xrightarrow{w} A_\alpha$. It implies that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(A^i) \xrightarrow{w} \mathcal{L}(A_\alpha)$. According to (28), we have

$$ (33) \qquad \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)). $$

We now prove a series of preliminary lemmas.

LEMMA 6.1. For any $\alpha > \beta$, $\mathcal{S} \cap (\beta, \alpha) \neq \emptyset$.

PROOF. Fix $\alpha > \beta$. Let $(A^n)_n$ be defined as in (12) starting from an ergodic process $A^0$ of mean $\alpha$. Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. Recall from the proof of Theorem 5.1 that

$$ (34) \qquad \operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S} \subset (\beta, \infty). $$

By Fatou's lemma, $E[\hat{A}(0)] \le \alpha$. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]]$, we conclude that $\mathcal{S} \cap (\beta, \alpha] \ne \emptyset$. Applying this with any $\alpha' \in (\beta, \alpha)$ in place of $\alpha$ yields $\mathcal{S} \cap (\beta, \alpha) \ne \emptyset$. $\square$

LEMMA 6.2. Consider an ergodic inter-arrival process $A^0$ of mean $\alpha > \beta$. Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. Consider $\delta \in \mathcal{S} \cap (\beta, \alpha]$ (resp. $\delta \in \mathcal{S} \cap [\alpha, \infty)$, assuming $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$); then $A_\delta \le_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \ge_{\text{st}} \Psi(A_\delta, S)$ [resp., $A_\delta \ge_{\text{st}} \hat{A}$ and $\Psi(\hat{A}, S) \le_{\text{st}} \Psi(A_\delta, S)$]. Further, if $\mathcal{S} \cap [\alpha, \infty) \ne \emptyset$, then $E[\hat{A}(0)] = \alpha$.

PROOF. Consider the case $\delta \in \mathcal{S} \cap [\alpha, \infty)$. The other case can be treated similarly. Define the process $B^0 = \delta\alpha^{-1}A^0$, that is,

$$ \forall n, \quad B(n, 0) = \frac{\delta}{\alpha} A(n, 0). $$

The process $B^0$ is ergodic and of mean $\delta$. At mean $\delta$, $\Phi_\sigma$ admits the fixed point $\zeta_\delta$. By (19), we have $B^k \xrightarrow{w} A_\delta$. By construction, we have $A^0 \le B^0$ almost surely. Using the monotonicity property (9), we get that, for all $k \in \mathbb{N}$,

$$ A^k \le B^k \quad \text{and} \quad \Psi(A^k, S^k) \ge \Psi(B^k, S^k). $$

It implies that for all $k \in \mathbb{N}^*$,

$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \le_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(B^i) $$

and

$$ \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(A^i, S^i)) \geq_{\text{st}} \frac{1}{k} \sum_{i=0}^{k-1} \mathcal{L}(\Psi(B^i, S^i)). $$

Going to the limit along an appropriate subsequence and applying (33), we obtain

$$ \hat{A} \leq_{\text{st}} A_{\delta} \quad \text{and} \quad \Psi(\hat{A}, S) \geq_{\text{st}} \Psi(A_{\delta}, S). $$

We are left with having to show that $E[\hat{A}(0)] = \alpha$. Observe that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i) \xrightarrow{w} \zeta_\delta$, and that the one-dimensional marginals converge in expectation since $k^{-1} \sum_{i=0}^{k-1} E[B(0, i)] = \delta = E[A_\delta(0)]$. It follows by Theorem 5.4 of [3] that the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(B^i))_k$ is uniformly integrable. It implies that the dominated sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i))_k$ is also uniformly integrable. Along an appropriate subsequence, this last sequence converges weakly to the law of $\hat{A}$ and we conclude (Theorem 5.4 of [3]) that it also converges in expectation. Since $k^{-1} \sum_{i=0}^{k-1} E[A(0, i)] = \alpha$ for all $k$, we deduce that $E[\hat{A}(0)] = \alpha$. $\square$
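The coupling at the heart of this proof, the same service sequence fed with scaled inter-arrival times, can be illustrated with the standard Lindley recursion for FCFS waiting times (a minimal sketch with arbitrary rates, not code from the paper):

```python
import random

# Coupling sketch (illustrative; arbitrary rates, not from the paper) of
# the monotonicity step: with a common service sequence, enlarging all
# inter-arrival times, B = (delta/alpha) * A with delta > alpha, can only
# decrease the Lindley waiting times pointwise.
def waiting_times(A, S):
    # Lindley recursion: W_{n+1} = max(W_n + S_n - A_n, 0)
    W = [0.0]
    for a, s in zip(A, S):
        W.append(max(W[-1] + s - a, 0.0))
    return W

random.seed(1)
n = 10_000
A = [random.expovariate(1.0) for _ in range(n)]   # inter-arrivals, mean 1
S = [random.expovariate(2.0) for _ in range(n)]   # services, mean 0.5
B = [1.5 * a for a in A]                          # delta / alpha = 1.5

WA, WB = waiting_times(A, S), waiting_times(B, S)
dominated = all(wb <= wa + 1e-12 for wa, wb in zip(WA, WB))
```

The pointwise domination follows by induction on the recursion, exactly as the monotonicity property (9) is used above.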
LEMMA 6.3. *The following statements are true:*

(a) for $\alpha, \delta \in \mathcal{S}$ and $\alpha < \delta$, $A_\alpha \leq_{\text{st}} A_\delta$ and $\Psi(A_\alpha, S) \geq_{\text{st}} \Psi(A_\delta, S)$;

(b) for $\alpha \in \mathcal{S}$, $E[\Psi(A_\alpha, S)(0)] = M(\alpha)$, where $M(\alpha)$ is defined in Theorem 4.1.

PROOF. Part (a) is a direct consequence of Lemma 6.2. Consider part (b). Fix $\alpha \in \mathcal{S}$. Consider $A^0$ an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). From Theorem 4.1, we have

$$ \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$

Starting from (33) and applying Fatou's lemma, we get

$$ E[\Psi(A_\alpha, S)(0)] \leq \lim_{n} \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$

Now let us prove that $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. By Lemma 6.1, there exists $\delta \in \mathcal{S} \cap (\beta, \alpha)$. Define the process $B^0 = \alpha\delta^{-1}A_\delta$ and let $(B^n)_n$ be defined as in (12). The process $B^0$ is ergodic of mean $\alpha$. We also have $B^0 \geq A_\delta$ a.s. Using (9), this implies

$$ \frac{1}{n} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \leq_{\text{st}} \mathcal{L}(\Psi(A_\delta, S)(0)) \quad \text{for all } n. $$

Since $E[\Psi(A_\delta, S)(0)] \leq M(\delta) < \infty$, the sequence $\{n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)), n \in \mathbb{N}^*\}$ is uniformly integrable. Furthermore, we have from (33) that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(B^i, S^i)(0)) \xrightarrow{w} \mathcal{L}(\Psi(A_\alpha, S)(0))$. Applying Theorem 5.4 of [3], weak convergence plus uniform integrability implies convergence in expectation:

$$ \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)] = E[\Psi(A_\alpha, S)(0)]. $$

Now recall from Theorem 4.1 that we have $n^{-1} \sum_{i=0}^{n-1} \Psi(B^i, S^i)(0) \to M(\alpha)$ almost surely. Applying Fatou's lemma, we get

$$ M(\alpha) \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(B^i, S^i)(0)]. $$

Summarizing, we have $M(\alpha) \leq E[\Psi(A_\alpha, S)(0)]$. This completes the proof. $\square$
**THEOREM 6.4.** *The set $\mathcal{S}$ is closed in $(\beta, \infty)$ and $\inf\{u \in \mathcal{S}\} = \beta$, $\sup\{u \in \mathcal{S}\} = +\infty$.*

**PROOF.** A direct consequence of Lemma 6.1 is that $\inf\{u \in \mathcal{S}\} = \beta$. We prove that $\sup\{u \in \mathcal{S}\} = +\infty$ by contradiction. Thus, suppose $\sup\{u \in \mathcal{S}\} < \infty$ and consider $\alpha > \sup\{u \in \mathcal{S}\}$. Let $A^0$ be an ergodic inter-arrival process of mean $\alpha$ satisfying condition (14). Let $\hat{A}$ be distributed as a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, $A_\delta \le_{\text{st}} \hat{A}$ for any $\delta \in \mathcal{S}$. According to (1), this implies that $\delta \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Since $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$, see (34), we conclude that almost surely

$$ E[\hat{A}(0)|\mathfrak{T}] = \sup\{u \in \mathcal{S}\} \in \mathcal{S}. $$

Set $\eta = \sup\{u \in \mathcal{S}\}$. Since $\hat{A}$ is a fixed point, we must have $\hat{A} \sim A_\eta$. In particular, along an appropriate subsequence, we have that $n^{-1} \sum_{i=0}^{n-1} \mathcal{L}(\Psi(A^i, S^i))$ converges weakly to $\mathcal{L}(\Psi(A_\eta, S))$. Now, a sequential use of Lemma 6.3, Fatou's lemma and Theorem 4.1 gives us

$$ M(\eta) = E[\Psi(A_\eta, S)(0)] \leq \lim_n \frac{1}{n} \sum_{i=0}^{n-1} E[\Psi(A^i, S^i)(0)] = M(\alpha). $$

It follows from the properties of $\gamma$ recalled after the statement of Theorem 4.1 that $M$ is a positive and decreasing function that is strictly decreasing on the interval $\{x \mid M(x) > 0\}$. Since $\eta < \alpha$ and $M(\eta) \le M(\alpha)$, we conclude that $M(\alpha) = M(\eta) = 0$. Thus, $E[\Psi(A_\eta, S)(0)] = 0$, that is, $P\{\Psi(A_\eta, S) = (0)^{\mathbb{Z}}\} = 1$. Let us input the process $A_\eta$ into the tandem of queues. Using (8) recursively, we obtain

$$
\begin{aligned}
A_{\eta}^{k}(0) &= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)] + \sum_{i=0}^{k-1}[\Psi(A_{\eta}^{i}, S^{i})(1) - \Psi(A_{\eta}^{i}, S^{i})(0)] \\
&= A_{\eta}(0) + \sum_{i=0}^{k-1}[S(1, i) - S(0, i)].
\end{aligned}
$$

Since the service times are i.i.d. and nonconstant, the partial sums $\sum_{i=0}^{k-1}[S(1, i) - S(0, i)]$ form a null-recurrent random walk. Thus there is a $k$ for which $A_{\eta}^{k}(0) < 0$ with strictly positive probability, which is impossible. So we cannot have $M(\eta) = 0$; in turn, this implies $\sup\{u \in \mathcal{S}\} = \infty$, and via Lemma 6.2 we get that $E[\hat{A}(0)] = \alpha$.
We now prove that $\mathcal{S}$ is closed in $(\beta, \infty)$. Consider a sequence $\alpha_k$ of elements of $\mathcal{S}$ that increases to $\alpha \in (\beta, \infty)$. Let $A^0$ and $\hat{A}$ be defined as above (for the mean $\alpha$). Using Lemma 6.2, we have $A_{\alpha_k} \le_{\text{st}} \hat{A}$ and using (1), we have $\alpha_k \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Passing to the limit, we get $\alpha \le E[\hat{A}(0)|\mathfrak{T}]$ a.s. Since $E[\hat{A}(0)] = E[E[\hat{A}(0)|\mathfrak{T}]] = \alpha$, we conclude that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\alpha\}$. It implies that $\alpha \in \mathcal{S}$. The proof works similarly when $\alpha_k$ is a decreasing sequence. $\square$

**PROPOSITION 6.5.** *Consider an ergodic inter-arrival process $A^0$ of mean $\alpha$. There are two possibilities:*

1. if $\alpha \in \mathcal{S}$, then $\bar{\rho}(A^k, A_\alpha) \xrightarrow{k} 0$ and hence $A^k \xrightarrow{w} A_\alpha$;

2. if $\alpha \notin \mathcal{S}$, then $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$, where

$$ (35) \qquad \underline{\alpha} = \sup\{u \in \mathcal{S}; u \le \alpha\}, \qquad \overline{\alpha} = \inf\{u \in \mathcal{S}; u \ge \alpha\} \quad \text{and} \quad p = \frac{\overline{\alpha} - \alpha}{\overline{\alpha} - \underline{\alpha}}. $$

In words, the weak Cesaro limit is a linear combination of the largest ergodic fixed point of mean less than $\alpha$ and of the smallest ergodic fixed point of mean more than $\alpha$. The weak Cesaro limit always has mean $\alpha$.

**PROOF.** The case $\alpha \in \mathcal{S}$ is a restatement of (19). Consider $\alpha \notin \mathcal{S}$. Denote by $\hat{A}$ a process whose law is a weak subsequential limit of the Cesaro averages of the laws of $(A^k)_k$. By Lemma 6.2, we have $A_u \le_{\text{st}} \hat{A} \le_{\text{st}} A_v$ for any $u, v \in \mathcal{S}$ such that $u < \alpha < v$. Therefore, using (1), we get that $u \le E[\hat{A}(0)|\mathfrak{T}] \le v$ a.s. Since $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] \subset \mathcal{S}$ [see (34)] and $E[\hat{A}(0)] = \alpha$ (Lemma 6.2), we conclude that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, where $\underline{\alpha}$ and $\overline{\alpha}$ are defined as in (35).

We know from Section 5 that the law of $\hat{A}$ is a fixed point. Given that $\operatorname{supp} E[\hat{A}(0)|\mathfrak{T}] = \{\underline{\alpha}, \overline{\alpha}\}$, Proposition 4.4 tells us that $\hat{A} \sim pA_{\underline{\alpha}} + (1-p)A_{\overline{\alpha}}$ for some $p$. Therefore $E[\hat{A}(0)] = p\underline{\alpha} + (1-p)\overline{\alpha}$ and from $E[\hat{A}(0)] = \alpha$, we conclude that $p = (\overline{\alpha} - \alpha)/(\overline{\alpha} - \underline{\alpha})$.

A consequence of the above argument is that any convergent subsequence of $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i)$ must converge weakly to $p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. Recalling an argument of Section 5, the sequence $(k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i), k \in \mathbb{N}^*)$ is tight, hence sequentially compact. This implies that $k^{-1} \sum_{i=0}^{k-1} \mathcal{L}(A^i) \xrightarrow{w} p\mathcal{L}(A_{\underline{\alpha}}) + (1-p)\mathcal{L}(A_{\overline{\alpha}})$. $\square$

The previous results characterize $\mathcal{S}$ to a certain extent. We believe that more is true.

CONJECTURE 6.6. For any $\alpha > \beta = E[S(0, 0)]$, there exists an ergodic fixed point of mean $\alpha$. That is, $\mathcal{S} = (\beta, +\infty)$.

It is possible to show that $\mathcal{S}$ is equal to the image of the derivative of $\gamma$ defined in Theorem 4.1. (Since $\gamma$ is concave, its derivative $\gamma'$ is continuous except at a countable number of points. At the points of discontinuity, we consider that both the left and the right-hand limits belong to the image.) Hence the conjecture is true if the function $\gamma$ has a continuous derivative. However, we have not been able to prove this. The function $\gamma$ defines the limit shape of an oriented last-passage percolation model on $\mathbb{N}^2$ with weights $(S(i, j))_{i,j}$ on the lattice points; see [2, 12, 17]. Establishing the smoothness of the limit shape in percolation models is usually a difficult question.
**7. Complements.** In proving Theorem 5.1, an essential step was to establish the identity (28): $\tilde{D} = \Phi(\hat{A}, \hat{S})$. This can be rephrased as the weak continuity of the operator $\Phi_\sigma$ of an i.i.d. queue on the converging subsequences of the Cesaro averages of the laws of $A^k$. In fact a much stronger result holds:

**THEOREM 7.1.** For a stationary queue defined as in Section 3.2, the operator $\Phi_\sigma$ is weakly continuous on $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$.

Theorem 7.1 is a generalization of a result due to Borovkov ([4], Chapter 11, or [5], Chapter 4); see also [6]. Borovkov proves that for an ergodic queue, $\Phi_\sigma$ is weakly continuous on $\bigcup_{\beta < x} \mathcal{M}_c^x(\mathbb{R}_+^{\mathbb{Z}})$. The proof of Theorem 7.1, which follows closely the arguments in [4, 5], appears in the preprint version [16] of the present article.

We have quoted Theorem 7.1 since we believe it to be of independent interest. However, we have not included the proof since Theorem 7.1 does not provide any shortcut to the proof of Theorem 5.1. Let us explain this last point in more detail.

Considering Theorem 7.1, a natural approach to the existence of fixed points for $\Phi_\sigma$ is the following. Consider the $\mathbb{R}$-vector space $\mathcal{M}$ of finite signed measures on $\mathbb{R}^{\mathbb{Z}}$, and observe that $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$ is a convex subset of $\mathcal{M}$. Equipped with the topology of weak convergence, $\mathcal{M}$ is a locally convex Hausdorff space, and $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$ is closed in $\mathcal{M}$. Now, find a convex and compact subset $\mathcal{C}$ of $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$ such that $\Phi_\sigma$ maps $\mathcal{C}$ into itself. Since $\Phi_\sigma$ is continuous, the existence of a fixed point in $\mathcal{C}$ then follows from the Schauder-Tychonoff fixed point theorem ([20], Chapter 5).

A suitable candidate for the set $\mathcal{C}$ is dictated by Loynes' results. Indeed, assume that $\sigma$ is ergodic and consider $\alpha > E[S(0)]$. The set $\mathcal{M}_c^\alpha(\mathbb{R}_+^{\mathbb{Z}})$ is mapped into itself by $\Phi_\sigma$. However, it is not convex. Its convexification is the set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^{\mathbb{Z}})$ defined in (20). The set $\mathcal{M}_s^{p:\alpha}(\mathbb{R}_+^{\mathbb{Z}})$ is not weakly closed [as can be seen by considering $(\xi_n)_n$ defined in (36)]. Its closure is the set $\bigcup_{x \le \alpha} \mathcal{M}_s^x(\mathbb{R}_+^{\mathbb{Z}})$.

Since $\Phi_{\sigma}(\mu) \geq_{\text{st}} \sigma$ for all $\mu$, we deduce the following natural and "minimal" candidate for $\mathcal{C}$:

$$ \mathcal{C} = \bigcup_{x \leq \alpha} \mathcal{M}_s^x(\mathbb{R}_+^{\mathbb{Z}}) \cap \{\mu \mid \mu \geq_{\text{st}} \sigma\}. $$

It is easily checked that $\mathcal{C}$ is compact, convex, and mapped into itself by $\Phi_{\sigma}$. We therefore conclude that there exists a fixed point in $\mathcal{C}$. The problem is that $\mathcal{C}$ is too large: it contains the trivial fixed point $\sigma$, and we have no way to assert the existence of a nontrivial fixed point.

Building on the above idea, one could try the same approach with another topology on $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$: the one induced by the $\bar{\rho}$ distance defined in (16). According to Theorem 4.2, the map $\Phi_{\sigma}$ is 1-Lipschitz on $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$, hence continuous. However, there is no clear way to build a compact and convex set on which to work. Indeed, let $\xi_n \in \mathcal{M}_e^1(\mathbb{R}_+^{\mathbb{Z}})$ be the distribution of the periodic process whose period is given by

$$ (36) \qquad (\underbrace{0, \dots, 0}_{n}, \underbrace{2, \dots, 2}_{n}). $$

It is easy to see that $(\xi_n)_n$ is not sequentially compact in $\mathcal{M}_s(\mathbb{R}_+^{\mathbb{Z}})$ for the $\bar{\rho}$ topology. Indeed, we have $\xi_n \xrightarrow{w} \xi$, where $\xi$ is defined by $P\{\xi = (0)^{\mathbb{Z}}\} = P\{\xi = (2)^{\mathbb{Z}}\} = 1/2$. Since convergence in the $\bar{\rho}$ topology implies weak convergence, if $(\xi_n)_n$ admits a subsequential limit in the $\bar{\rho}$ topology, then it has to be $\xi$. However, it is easy to check that $\bar{\rho}(\xi_n, \xi) = 1$ for all $n$.
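The weak convergence $\xi_n \xrightarrow{w} \xi$ can be seen concretely: a window of fixed length $m$, taken at a uniformly random phase of the period-$2n$ sequence in (36), is constant unless it straddles a block boundary. A small computation (illustrative only, not from the paper):

```python
# Illustration (not from the paper) of the weak convergence xi_n -> xi:
# a length-m window of the period-2n sequence (0,...,0,2,...,2), taken at
# a uniformly random phase, is constant unless it straddles a block
# boundary, so the constant-window probability tends to 1 as n grows.
def constant_window_fraction(n, m):
    period = [0] * n + [2] * n
    hits = 0
    for start in range(2 * n):        # uniformly random phase
        window = [period[(start + j) % (2 * n)] for j in range(m)]
        hits += len(set(window)) == 1
    return hits / (2 * n)

fractions = [constant_window_fraction(n, 5) for n in (10, 100, 1000)]
# each block of length n contributes n - m + 1 constant windows out of
# 2n phases, so the fraction is 1 - (m - 1)/n, which tends to 1
```

By contrast, the long-run frequency of $0$'s under $\xi_n$ is always $1/2$, while under each constant sequence it is $0$ or $1$, which is the source of the fixed $\bar{\rho}$-distance.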
**Acknowledgment.** The authors would like to thank Tom Kurtz for a very careful reading and in particular for suggesting a simplification of the original proof of Theorem 5.1. This has led to an important shortening and overall improvement of the paper.

## REFERENCES

[1] ANANTHARAM, V. (1993). Uniqueness of stationary ergodic fixed point for a $\cdot/M/K$ node. *Ann. Appl. Probab.* **3** 154–172. [Correction (1994) *Ann. Appl. Probab.* **4** 607.]

[2] BACCELLI, F., BOROVKOV, A. and MAIRESSE, J. (2000). Asymptotic results on infinite tandem queueing networks. *Probab. Theory Related Fields* **118** 365–405.

[3] BILLINGSLEY, P. (1968). *Convergence of Probability Measures*. Wiley, New York.

[4] BOROVKOV, A. (1976). *Stochastic Processes in Queueing Theory*. Springer, Berlin. [Russian edition (1972), Nauka, Moscow.]

[5] BOROVKOV, A. (1984). *Asymptotic Methods in Queueing Theory*. Wiley, New York. [Russian edition (1980), Nauka, Moscow.]

[6] BRANDT, A., FRANKEN, P. and LISEK, B. (1990). *Stationary Stochastic Models*. Wiley, New York.

[7] BURKE, P. (1956). The output of a queueing system. *Oper. Res.* **4** 699–704.

[8] CHANG, C. S. (1994). On the input-output map of a $G/G/1$ queue. *J. Appl. Probab.* **31** 1128–1133.

[9] DALEY, D. and ROLSKI, T. (1992). Finiteness of waiting-time moments in general stationary single-server queues. *Ann. Appl. Probab.* **2** 987–1008.

[10] DUDLEY, R. (1989). *Real Analysis and Probability*. Wadsworth & Brooks/Cole, Belmont, CA.

[11] ETHIER, S. and KURTZ, T. (1986). *Markov Processes: Characterization and Convergence*. Wiley, New York.

[12] GLYNN, P. and WHITT, W. (1991). Departures from many queues in series. *Ann. Appl. Probab.* **1** 546–572.

[13] GRAY, R. (1988). *Probability, Random Processes, and Ergodic Properties*. Springer, Berlin.

[14] KAMAE, T., KRENGEL, U. and O'BRIEN, G. L. (1977). Stochastic inequalities on partially ordered spaces. *Ann. Probab.* **5** 899–912.

[15] LOYNES, R. (1962). The stability of a queue with non-independent interarrival and service times. *Proc. Cambridge Philos. Soc.* **58** 497–520.

[16] MAIRESSE, J. and PRABHAKAR, B. (1999). On the existence of fixed points for the $\cdot/GI/1$ queue. LIAFA Research Report 99/25, Université Paris 7.

[17] MARTIN, J. (2002). Large tandem queueing networks with blocking. *Queueing Systems Theory Appl.* **41** 45–72.

[18] MOUNTFORD, T. and PRABHAKAR, B. (1995). On the weak convergence of departures from an infinite sequence of $\cdot/M/1$ queues. *Ann. Appl. Probab.* **5** 121–127.

[19] PRABHAKAR, B. (2003). The attractiveness of the fixed points of a $\cdot/GI/1$ queue. *Ann. Probab.* **31** 2237–2269.

[20] RUDIN, W. (1991). *Functional Analysis*, 2nd ed. McGraw-Hill, New York.

[21] STOYAN, D. (1984). *Comparison Methods for Queues and Other Stochastic Models*. Wiley, New York.

[22] WHITT, W. (1980). Uniform conditional stochastic order. *J. Appl. Probab.* **17** 112–123.

LIAFA
UNIVERSITY DENIS DIDEROT
CASE 7014
2 PLACE JUSSIEU
F-75251 PARIS CEDEX 05
FRANCE
E-MAIL: jean.mairesse@liafa.jussieu.fr

DEPARTMENTS OF ELECTRICAL ENGINEERING
AND COMPUTER SCIENCE
STANFORD UNIVERSITY
STANFORD, CALIFORNIA 94305-9510
E-MAIL: balaji@stanford.edu
samples_new/texts_merged/6772016.md (ADDED)
GEOMETRIC EVOLUTION PROBLEMS AND ACTION-MEASURES

M. BULIGA

*Key words and phrases.* geometric evolution problems, viscosity solutions, brittle fracture mechanics, mean curvature flow.

1. INTRODUCTION

Geometric evolution problems are connected to many interesting phenomena, such as ice melting, metal solidification, explosions, and damage mechanics. Any such problem numbers among its unknowns a geometric object. The canonical example of a geometric evolution problem is the mean curvature flow of a surface. A more complex situation arises in the study of brittle crack propagation. The state of a brittle body is described by a displacement-crack pair, so the crack propagation problem has two unknowns. We have to suppose that, at any moment, the displacement has no discontinuities away from the crack. Moreover, the displacement is connected with the crack by the boundary conditions: these contain conditions such as unilateral contact of the lips of the crack.

In most studies the fracture propagation is not recognized to have a geometrical nature. It is the purpose of this paper to formulate a general geometric evolution problem based on the notion of action-measure, introduced here. For particular choices of the action-measure we obtain formulations of the mean curvature flow or the brittle fracture propagation problems.

2. ACTION MEASURES AND VISCOSITY SOLUTIONS

$(L, \le, \tau)$ is a sequential topological ordered set (or t.o.s.) if $(L, \le)$ is an ordered set and, for any sequence $(\beta_h)_h$ in $L$ converging to some $\beta \in L$, if there exists $\alpha \in L$ such that $\beta_h \le \alpha$ for every $h$, then $\beta \le \alpha$.

Let us consider $F : X \to L$, where $X$ is a topological space and $L$ is a sequential t.o.s. A minimal element of $F$ is any $x \in X$ such that for any $y \in X$, if $F(y) \le F(x)$ then $F(y) = F(x)$. Remark however that, due to the lack of total ordering, a minimal element may not be a minimizer, i.e. even if $x \in X$ is a minimal element of $F$, it is not true that $F(x) \leq F(y)$ for any $y \in X$.
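The gap between minimal elements and minimizers is easy to see in a finite toy model. The following sketch (an assumed example for illustration, not from the paper) uses the componentwise order on $\mathbb{R}^2$ in the role of the ordered set $L$:

```python
# Toy example (assumed for illustration, not from the paper): with the
# componentwise partial order on R^2 in the role of the ordered set L,
# minimal elements of F need not be minimizers.
values = {               # F : X -> R^2 on a four-point space X
    "a": (1, 3),
    "b": (3, 1),
    "c": (2, 2),
    "d": (3, 3),
}

def leq(u, v):
    # componentwise partial order on R^2
    return all(x <= y for x, y in zip(u, v))

def is_minimal(x):
    # no y with F(y) <= F(x) and F(y) != F(x)
    return all(not (leq(values[y], values[x]) and values[y] != values[x])
               for y in values)

def is_minimizer(x):
    # F(x) <= F(y) for every y
    return all(leq(values[x], values[y]) for y in values)

minimal = {x for x in values if is_minimal(x)}
minimizers = {x for x in values if is_minimizer(x)}
# "a", "b", "c" are pairwise incomparable minimal elements; no point of X
# is a minimizer
```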
|
| 43 |
+
|
| 44 |
+
A particular case of t.o.s. is any space of measures. An action measure
|
| 45 |
+
is a function defined over a topological space with values in a space of mea-
|
| 46 |
+
sures. The direct method in the calculus of variations can be reformulated in
|
| 47 |
+
this frame. In particular, if the space of measures is a topological dual of a
|
| 48 |
+
space of functions then the direct method can be written in a particular form.
|
| 49 |
+
We leave to the reader the formulation of the general direct method and the
|
| 50 |
+
reformulation of the theorem in this case.
Action measures are related to (first order) viscosity solutions (see [4], [5], [6]). Indeed, take a function

$$
H : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R} ,
$$

$C^1$ in the first argument and positively one-homogeneous in the second. (Weaker assumptions may be made.) Consider now $L$, the polar of $H$,

$$
L(x, p) = \sup \{ \langle p, q \rangle - H(x, q) : q \in \mathbb{R}^n \} .
$$
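The polar can be approximated numerically on a bounded grid. A minimal sketch, assuming (my choice, not from the text) the one-dimensional, positively one-homogeneous Hamiltonian $H(x,q) = |q|$, for which the exact polar is $L(x,p) = 0$ when $|p| \le 1$ and $+\infty$ otherwise:

```python
import numpy as np

# Grid approximation of the polar L(x, p) = sup_q { <p, q> - H(x, q) }
# for the assumed Hamiltonian H(x, q) = |q|.  The "+infinity" region
# shows up as a value growing linearly with the grid radius R.

def H(x, q):
    return np.abs(q)

def polar_L(x, p, R=50.0, n=100001):
    q = np.linspace(-R, R, n)        # bounded 1d grid of momenta
    return np.max(p * q - H(x, q))   # sup over the grid

print(polar_L(0.0, 0.5))   # 0.0 : |p| <= 1, finite polar
print(polar_L(0.0, 2.0))   # 50.0 = (2 - 1) * R, diverges as R grows
```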
For any fixed $T > 0$ we define the set

$$
\begin{align*}
\Lambda_{T} = \{ c : \bar{\Omega} \times [0, T] \to \bar{\Omega} \ : \ & c(x, \cdot) \in C^1([0, T]) \quad \forall x \in \Omega, \\
& c(\cdot, 0) = id, \ c(x, T) \in \partial\Omega \quad \forall x \in \Omega \}
\end{align*}
$$
and the function $F: \Lambda_T \to M(\Omega)$,

$$
F(c)(B) = \int_B g(c(x,T)) \, dx + \int_B \int_0^T L(c(x,t), \dot{c}(x,t)) \, dt \, dx .
$$
Here $g$ is a positive function defined on $\partial\Omega$. This action measure has minimal elements; moreover, it has minimizing elements. Let $c_0$ be any one of them. Then

$$
(1) \qquad F(c_0)(B) = \int_B u(x) \, dx \quad \forall B \in \mathcal{B}(\Omega)
$$
where $u$ is the viscosity solution of the problem

$$
(2) \qquad H(x, \nabla u) = 0 , \quad u = g \text{ on } \partial\Omega .
$$
Notice that in this setting of problem (2) the primary unknown is the map $c_0$. The viscosity solution $u$ of (2) is the Lebesgue density of the measure $F(c_0)$.

Any function $c \in \Lambda_T$ can be identified with a path of deformations of $\Omega$, $t \mapsto c_t := c(\cdot, t) : \bar{\Omega} \to \bar{\Omega}$. This fact makes us formulate the following general problem:
---PAGE_BREAK---

Consider a space $M$ of curves $t \mapsto \phi_t : \Omega \to \Omega$ and an action measure $\Lambda: M \to \mathrm{Meas}(\Omega)$, where $\mathrm{Meas}(\Omega)$ is a space of scalar measures over $\Omega$. Find and describe, under suitable conditions over $M$ and $\Lambda$, the minimal elements of the action measure $\Lambda$.
### 3. EVOLUTION DRIVEN BY DIFFEOMORPHISMS

$\mathrm{Diff}_0(\Omega)$ denotes the space of $C^\infty$ diffeomorphisms of $\Omega$ with compact support, that is, the set of all $C^\infty$ functions $\phi: \mathbb{R}^n \to \mathbb{R}^n$ such that $\phi^{-1} \in C^\infty$ and $\mathrm{supp}(\phi - id) \subset\subset \Omega$. It is well known that any vector field $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ (i.e. with compact support in $\Omega$) generates a one-parameter flow $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, solution of the problem $\dot{\phi}_t = \eta \circ \phi_t$, $\phi_0 = id$, where "$\circ$" denotes function composition.
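As an illustration, the flow generated by a compactly supported vector field can be integrated numerically. This is a sketch under my own simplifying assumptions (a one-dimensional domain $\Omega = (-1,1)$ and a smooth bump field $\eta$); since $\eta$ vanishes near the boundary, boundary points stay fixed:

```python
import numpy as np

# Sketch: integrate the one-parameter flow  d/dt phi_t = eta o phi_t,
# phi_0 = id, for a C^infty bump vector field supported in (-1, 1).

def eta(x):
    # smooth bump, compactly supported in (-1, 1); the clip only
    # guards against division by zero where the branch is discarded
    inner = np.exp(-1.0 / (1.0 - np.clip(x, -0.999999, 0.999999) ** 2))
    return np.where(np.abs(x) < 1.0, inner, 0.0)

def flow(x0, t, steps=1000):
    """Explicit Euler integration of phi_t(x0)."""
    x = np.array(x0, dtype=float)
    dt = t / steps
    for _ in range(steps):
        x = x + dt * eta(x)
    return x

x0 = np.array([-1.0, 0.0, 0.5, 1.0])
xt = flow(x0, t=1.0)
# endpoints of Omega are fixed, interior points are transported to the right
print(xt)
```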
Consider a sufficiently regular set $B \subset \Omega$. Let $\xi_B$ be the characteristic function of $B$. For any $\phi \in \mathrm{Diff}(\Omega)$ we have the equality $\xi_{\phi(B)} = \xi_B \circ \phi^{-1}$.
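This identity is easy to verify on a toy example of my own (with $\phi$ a translation of the line, a diffeomorphism, though not compactly supported):

```python
# Check chi_{phi(B)} = chi_B o phi^{-1} for B = [0, 1], phi(x) = x + 1,
# so that phi(B) = [1, 2] and phi^{-1}(x) = x - 1.

def chi_B(x):
    return 1 if 0.0 <= x <= 1.0 else 0

def phi_inv(x):
    return x - 1.0

def chi_phiB(x):
    # characteristic function of phi(B) = [1, 2]
    return 1 if 1.0 <= x <= 2.0 else 0

samples = [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
assert all(chi_phiB(x) == chi_B(phi_inv(x)) for x in samples)
print("identity verified on samples")
```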
A geometric evolution of the set $B$ is any curve $t \mapsto B(t)$ such that $B(0) = B$. A particular case of geometric evolution of $B$ is when $B(t)$ is isotopically equivalent to $B$. Such an evolution (which we call isotopic) can be obtained by considering a curve $t \mapsto \phi_t \in \mathrm{Diff}_0(\Omega)$, $\phi_0 = id$. Any such curve induces a geometric evolution of $B$ by $B(t) = \phi_t(B)$. Therefore, this kind of geometric evolution of the set $B$ is equivalent to a curve in $\mathrm{Diff}_0(\Omega)$ with origin at $id$.

We can make weaker assumptions upon the geometric evolution of $B$. In this paper we introduce the notion of geometric evolution driven by diffeomorphisms. The advantage of this notion is that potentially complex evolutions of $B$ are locally approximated by isotopic evolutions. We describe below what an evolution driven by diffeomorphisms is.
The regularity assumptions upon the initial set $B$ are described first. $\mathcal{H}^k$ denotes the $k$-dimensional Hausdorff measure. We shall suppose that $B$ has Hausdorff dimension $k$. We suppose also that for any vector field $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ the derivative with respect to $t$ of the function $t \mapsto \xi_{\phi_t(B)}\mathcal{H}^k$ exists, where $\phi_t$ is the one-parameter flow generated by $\eta$. Moreover, this derivative is supposed to be absolutely continuous with respect to the measure $\mathcal{H}^{k-1}$.
An evolution of $B$ driven by diffeomorphisms is a curve $t \mapsto B(t)$, $B(0) = B$, such that:

i) $\frac{d}{dt} \xi_{B(t)} \mathcal{H}^k$ is absolutely continuous with respect to $\mathcal{H}^{k-1}$. The support of this measure is denoted by $\partial^* B(t)$ and is called the border of $B(t)$.
ii) there is a curve $t \mapsto \eta(t) \in C_0^\infty(\Omega, \mathbb{R}^n)$ such that for almost any $t$ we have the inequality of measures:

$$ \frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta(t)}^{-1} \mathcal{H}^k $$

---PAGE_BREAK---

where $s \mapsto \phi_{s,\eta(t)}$ is the one-parameter flow generated by $\eta(t)$ and the derivative with respect to $s$ is taken at $s=0$.
iii) the function $t \mapsto \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta(t)}^{-1} \mathcal{H}^k(\Omega)$ is measurable.

iv) for any $t < t'$ we have $B(t) \subset B(t')$.
Let us denote by $Bar^+(t, Q)$ the set of all $\eta \in C_0^\infty(\Omega, \mathbb{R}^n)$ with compact support in $Q \subset \Omega$ which satisfy $\frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta}^{-1} \mathcal{H}^k$. Obviously, the set $Bar^+(t, Q)$ depends on the evolution $t \mapsto B(t)$.
We have the following result: for almost any $t$ there is a positive function $v(t)$, supported on $\partial^* B(t)$, called the normal velocity, such that for any $\eta \in Bar^+(t, \Omega)$ we have

$$ \frac{d}{dt} \xi_{B(t)} \mathcal{H}^k \leq v(t) \mathcal{H}^{k-1} \leq \frac{d}{ds} \xi_{B(t)} \circ \phi_{s,\eta}^{-1} \mathcal{H}^k . $$
### 4. A GENERAL GEOMETRIC EVOLUTION PROBLEM

Consider now a set $C \subset P(\Omega)$, which contains only regular closed sets $B \subset \Omega$, and let $M$ be a family of evolutions of an initial set $B_0 \in C$ driven by diffeomorphisms, such that for any $t$ and any curve $t \mapsto B(t) \in M$ we have $B(t) \in C$. Let us consider also a functional $E: C \to \mathbb{R}$ such that $E(B) \geq E(B')$ if $B \subset B'$. $E$ is smooth in the following sense: for any $B \in C$ and any one-parameter flow $t \mapsto \phi_{t,\eta}$ the function $t \mapsto E(\phi_{t,\eta}(B))$ is differentiable at $t=0$. This derivative is denoted by $dE(B, \eta)$. Given a geometric evolution $t \mapsto B(t) \in M$, for any Borel set $Q \in \mathcal{B}(\Omega)$, the variation of $E$ at $B(t) \in C$ inside $Q$ is defined by the formula:
$$ dE(B(t))(Q) = \sup \left\{ dE(B(t), \eta) : \exists \lambda > 0, \lambda\eta \in Bar^+(t, Q), d(\partial^* B, \phi_{1,\eta}(\partial^* B)) \leq 1 \right\}. $$
Under suitable assumptions, $-dE(B(t))$ is a positive measure.
We introduce now the action measure defined for any geometric evolution $t \mapsto B(t) \in M$ by the expression:

$$ A(t \mapsto B(t))(Q) = \int_0^T \int_{\partial^* B(t) \cap Q} v(t) \, d\mathcal{H}^{k-1} \, dt + \int_0^T dE(B(t))(Q) \, dt . $$
Notice that the first term of $A$ can be written as the variation of $\mathcal{H}^k(B(t))$ from $0$ to $T$. Remark also that we can consider time-dependent functionals $E = E(B, t)$, such that $E(B, t) \geq E(B', t)$ if $B \subset B'$.
**Example 1.** Mean curvature flow (see [1]). Let us take $k=n$ in the regularity assumptions, that is, $B_0$ $n$-dimensional, and $E(B) = -\mathcal{H}^{n-1}(\partial^* B)$. Then any minimal element of the action measure $A$ defined above is a super-solution of the mean curvature flow problem, that is, for almost any $t$ and almost any $x \in \partial^{\ast}B(t)$ we have $v(t) \geq k(x,t)$, where $k(x,t)$ is the mean curvature of $\partial^{\ast}B(t)$ at $x$ (with the convention of positive curvature for spheres).

---PAGE_BREAK---
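For intuition, recall the standard example of a shrinking circle in the plane (a textbook fact, not a computation from this paper): the normal velocity equals the curvature $1/r$, so the radius obeys $\dot r = -1/r$, with exact solution $r(t) = \sqrt{r_0^2 - 2t}$. A numerical sketch:

```python
import numpy as np

# Mean curvature flow of a circle in the plane: dr/dt = -1/r,
# exact solution r(t) = sqrt(r0^2 - 2 t).

def radius(r0, t):
    return np.sqrt(r0 ** 2 - 2.0 * t)

# compare the exact solution with an explicit Euler integration
r0, T, steps = 1.0, 0.25, 10000
r = r0
dt = T / steps
for _ in range(steps):
    r -= dt / r
print(r, radius(r0, T))   # both close to sqrt(0.5) ~ 0.7071
```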
**Example 2.** Brittle crack propagation. By a crack set in $\Omega$ we mean a closed, finite rectifiable set $B$. $\Omega$ represents the reference configuration and $\mathbf{u}: \bar{\Omega} \to \mathbb{R}^n$ is the deformation of a hyper-elastic body. The free energy density is $w(\nabla \mathbf{u})$; in the case of infinitesimal deformations $\mathbf{u}$ represents the displacement of the body and $w$ is a quadratic function of the symmetric gradient of $\mathbf{u}$.
A path $t \mapsto \mathbf{v}(t)$ of deformations (or displacements) is given on $\partial\Omega$. The evolution of the body is supposed to be quasi-static. An initial crack set $B_0$ is present in the body. We are interested in the propagation of this crack under the path of imposed deformations. We introduce for this the following functional, defined for any crack set $B$ and any moment $t$:

$$E(B, t) = \inf \left\{ \int_{\Omega} w(\nabla \mathbf{u}) \, dx : \mathbf{u} \in C^1(\bar{\Omega} \setminus B), \ \mathbf{u} = \mathbf{v}(t) \text{ on } \partial\Omega \setminus B \right\}.$$
Our principle of brittle crack propagation states that the evolution of the initial crack $B_0$ is a minimal element of the action measure:

$$\Lambda(t \mapsto B(t))(Q) = G\, \mathcal{H}^{n-1}(B(T) \cap Q) + \int_{0}^{T} dE(B(t), t)(Q) \, dt .$$
The physical meaning of this principle is: choose the crack propagation $t \mapsto B(t)$ such that the energy consumed by the body in order to produce in $Q$ the crack growth $t \mapsto B(t) \cap Q$ is less than the energy released in $Q$ due only to crack propagation.

In the particular case of infinitesimal deformations, if we take the constant curve $t \mapsto B_0(t) = B_0$, we see that $\Lambda(B_0(\cdot))(Q) = 0$ for any $Q$; therefore $\Lambda(B(\cdot))$ is a negative measure. Hence, in this case, a generalization of the Griffith criterion holds.
In [2], [3] we have proposed a minimizing movement model of brittle crack propagation in infinitesimal deformations ([3], definitions 4.1 and 5.1). The model is presented here in a condensed form. Let us consider the set $M$ of all pairs $(\mathbf{u}, K)$ such that $K \subset \bar{\Omega}$ is a crack set, $\mathbf{u} \in C^1(\bar{\Omega} \setminus K, \mathbb{R}^n)$, and for $\mathcal{H}^{n-1}$-almost any $x \in K$ the normal $\mathbf{n}(x)$ to $K$ at $x$ and the lateral limits $\mathbf{u}^+(x), \mathbf{u}^-(x)$ exist.
We define the functions

$$J: M \times M \to \mathbb{R},$$

$$J((\mathbf{u}, K), (\mathbf{v}, L)) = \int_{\Omega} w(\nabla \mathbf{v}) \, dx + G\, \mathcal{H}^{n-1}(L \setminus K),$$

$$\Psi: [0, \infty) \times M \to \{0, +\infty\},$$

---PAGE_BREAK---

$$ \Psi(\lambda, (v, K)) = \begin{cases} 0 & \text{if } v = u_0(\lambda) \text{ on } \partial\Omega \setminus K \\ +\infty & \text{otherwise.} \end{cases} $$
We consider initial data $(u_0, K) \in M$ such that $u_0 = u(u_0(0), K)$. For any $s \ge 1$ we define the sequences

$$ k \in \mathbb{N} \mapsto u^s(k), L^s(k), K^s(k), $$

with $(u^s(k), L^s(k)) \in M$ and $(u^s(k), K^s(k)) \in M$, recursively:
i) $(u^s, K^s)(0) = (u_0, K)$, $L^s(0) = K$;

ii) for any $k \in \mathbb{N}$, $(u^s, L^s)(k+1) \in M$ minimizes the functional

$$ (v, L) \in M \mapsto J((u^s, K^s)(k), (v, L)) + \Psi((k+1)/s, (v, L)) $$
over $M$. $K^s(k+1)$ is defined by the formula:

$$ K^s(k+1) = K^s(k) \cup L^s(k+1). $$
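To make the scheme concrete, here is a one-variable caricature (entirely my own toy, not the paper's model): the "crack" is a length $\ell \ge 0$, the stored elastic energy decreases with $\ell$ and grows with a time-dependent load, and each step minimizes stored energy plus the Griffith cost $G(\ell' - \ell)$ of new crack over $\ell' \ge \ell$, the constraint mirroring the irreversibility built into $K^s(k+1) = K^s(k) \cup L^s(k+1)$:

```python
import numpy as np

# Toy minimizing movement for a 1d "crack length" ell >= 0.
# Assumed elastic energy E(ell, t) = t**2 / (2 * (1 + ell))
# (compliance grows with the crack); load u0(t) = t; surface
# energy of new crack G * (ell' - ell).

G, s, T = 1.0, 200, 2.0
grid = np.linspace(0.0, 5.0, 2001)          # candidate crack lengths
ell = 0.0
history = [ell]
for k in range(int(s * T)):
    t = (k + 1) / s                         # current load level
    admissible = grid[grid >= ell]          # irreversibility constraint
    J = t ** 2 / (2.0 * (1.0 + admissible)) + G * (admissible - ell)
    ell = admissible[np.argmin(J)]          # step of the scheme
    history.append(ell)

# the crack length is nondecreasing, and grows once the load is large enough
assert all(b >= a for a, b in zip(history, history[1:]))
print(history[0], history[-1])
```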
$(u, L, K): [0, +\infty) \to M$ is an energy minimizing movement associated to $J$ with the constraint $\Psi$ and initial data $(u_0, K)$ if there is a diverging sequence $(s_i)$ such that for any $t > 0$ we have $u^{s_i}([s_i t]) \to u(t)$ in $L^2(\Omega, \mathbb{R}^n)$. $L(t)$ is called the active crack at the moment $t$ and

$$ K(t) = \bigcup_{s \in [0, t]} L(s) $$

is the total damaged region at the same moment.
We have the following result, which connects the two models of brittle crack propagation presented here.
**Theorem.** Let us consider an energy minimizing brittle crack propagation $t \mapsto (u(t), L(t), K(t))$. Suppose that $t \mapsto K(t)$ is driven by diffeomorphisms. Then the curve $t \mapsto K(t)$ is a minimal element of the action measure $A$ defined above, in the case of infinitesimal deformations.
REFERENCES

[1] L. Ambrosio, Geometric evolution problems, distance function and viscosity solutions, *Università di Pisa Preprint* 2.245.986, 1996

[2] M. Buliga, Variational Formulations in Brittle Fracture Mechanics, PhD Thesis, Institute of Mathematics of the Romanian Academy, 1997

[3] M. Buliga, Energy minimizing brittle crack propagation, *Journal of Elasticity* (to appear), 1998

[4] M.G. Crandall, P.L. Lions, Viscosity solutions of Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **277**, 1983, 1-43

[5] M.G. Crandall, L.C. Evans, P.L. Lions, Some properties of viscosity solutions of Hamilton-Jacobi equations, *Trans. Amer. Math. Soc.*, **282**, 1984, 487-502

[6] P.L. Lions, Generalized solutions of Hamilton-Jacobi equations, *Research Notes in Mathematics*, **69**, Pitman, 1982
samples_new/texts_merged/6838080.md
ADDED
The diff for this file is too large to render. See raw diff

samples_new/texts_merged/7100604.md
ADDED
The diff for this file is too large to render. See raw diff

samples_new/texts_merged/7604074.md
ADDED
@@ -0,0 +1,475 @@
---PAGE_BREAK---

Cosmology of a polynomial model for de Sitter gauge theory sourced by a fluid

Jia-An Lu¹

School of Physics, Sun Yat-sen University, Guangzhou 510275, China
**Abstract**

In the de Sitter gauge theory (DGT), the fundamental variables are the de Sitter (dS) connection and the gravitational Higgs/Goldstone field $\xi^A$. Previously, a model for DGT was analyzed, which generalizes the MacDowell–Mansouri gravity to have a variable cosmological constant $\Lambda = 3/l^2$, where $l$ is related to $\xi^A$ by $\xi^A\xi_A = l^2$. It was shown that the model sourced by a perfect fluid does not support a radiation epoch or the accelerated expansion of the parity-invariant universe. In this work, I consider a similar model, namely, the Stelle–West gravity, and couple it to a modified perfect fluid, such that the total Lagrangian 4-form is polynomial in the gravitational variables. The Lagrangian of the modified fluid has a nontrivial variational derivative with respect to $l$, and as a result, the problems encountered in the previous work no longer appear. Moreover, to explore the elegance of the general theory, as well as to write down the basic framework, I perform the Lagrange–Noether analysis for DGT sourced by a matter field, yielding the field equations and the identities with respect to the symmetries of the system. The resulting formulas are dS covariant and do not rely on the existence of the metric field.
PACS numbers: 04.50.Kd, 98.80.Jk, 04.20.Cv

Key words: Stelle–West gravity, gauge theory of gravity, cosmic acceleration
# 1 Introduction

The gauge theories of gravity (GTG) aim at treating gravity as a gauge field, in particular, constructing a Yang–Mills-type Lagrangian which reduces to GR in some limiting case, while providing some novel falsifiable predictions. A well-founded subclass of GTG is the Poincaré gauge theory (PGT) [1-5], in which the gravitational field consists of the Lorentz connection and the co-tetrad field. Moreover, the PGT can be reformulated as de Sitter gauge theory (DGT), in which the Lorentz connection and the co-tetrad field are united into a de Sitter (dS) connection [6, 7]. In fact, before the idea of DGT was realized, a related Yang–Mills-type Lagrangian for gravity was proposed by MacDowell and Mansouri [8], and reformulated into a dS-invariant form by West [9], which reads
$$
\begin{aligned}
\mathcal{L}^{\text{MM}} &= \epsilon_{ABCDE} \xi^E \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
&= \epsilon_{\alpha\beta\gamma\delta} (l R^{\alpha\beta} \wedge R^{\gamma\delta} - 2l^{-1} R^{\alpha\beta} \wedge e^{\gamma} \wedge e^{\delta} + l^{-3} e^{\alpha} \wedge e^{\beta} \wedge e^{\gamma} \wedge e^{\delta}),
\end{aligned}
\quad (1)
$$
where $\epsilon_{ABCDE}$ and $\epsilon_{\alpha\beta\gamma\delta}$ are the 5d and 4d Levi-Civita symbols, $\xi^A$ is a dS vector constrained by $\xi^A\xi_A = l^2$, $l$ is a positive constant, $\mathcal{F}^{AB}$ is the dS curvature, $R^{\alpha\beta}$ is the

¹Email: ljagdgz@163.com
---PAGE_BREAK---

Lorentz curvature, and $e^\alpha$ is the orthonormal co-tetrad field. This theory is equivalent to the Einstein–Cartan (EC) theory with a cosmological constant $\Lambda = 3/l^2$ and a Gauss–Bonnet (GB) topological term, as seen in Eq. (1).
Note that some special gauges with residual Lorentz symmetry can be defined by $\xi^A = \delta^A_4 l$. Hence, $\xi^A$ is akin to an unphysical Goldstone field. To make $\xi^A$ physical, so that it becomes the gravitational Higgs field, one may replace the constant $l$ by a dynamical one, resulting in the Stelle–West (SW) theory [7]. The theory is further explored by Refs. [10, 11] (see also the review [12]), in which the constraint $\xi^A\xi_A = l^2$ is completely removed; in other words, $\xi^A\xi_A$ need not be positive. Suppose that $\xi^A\xi_A = \sigma l^2$, where $\sigma = \pm 1$. When $l \neq 0$, the metric field can be defined by $g_{\mu\nu} = (\tilde{D}_\mu\xi^A)(\tilde{D}_\nu\xi_A)$, where $\tilde{D}_\mu\xi^A = \tilde{\delta}^A{}_B D_\mu\xi^B$, $\tilde{\delta}^A{}_B = \delta^A{}_B - \xi^A\xi_B/\sigma l^2$, $D_\mu\xi^A = d_\mu\xi^A + \Omega^A{}_{B\mu}\xi^B$, and $\Omega^A{}_{B\mu}$ is the dS connection. It was shown that $\sigma = \pm 1$ corresponds to the Lorentzian/Euclidean signature of the metric field, and that the signature changes when $\xi^A\xi_A$ changes its sign [11].
On the other hand, it remains to check whether the SW gravity is viable. Although the SW Lagrangian reduces to the MM Lagrangian when $l$ is a constant, the field equations do not. In the SW theory, there is an additional field equation coming from the variation with respect to $l$, which is nontrivial even when $l$ is a constant. Actually, a recent work [13] presents some negative results for a related model, whose Lagrangian is equal to the SW one times $(-l/2)$. For a homogeneous and isotropic universe with parity-invariant torsion, it is found that $l$ being a constant implies the energy density of the material fluid being a constant, and so $l$ should not be a constant in the general case. Moreover, in the radiation epoch, the $l$ equation forces the energy density to be zero, while in the matter epoch, a dynamical $l$ only works to renormalize the gravitational constant by some constant factor, and hence the cosmic expansion decelerates as in GR.
In this work, it is shown that the SW gravity suffers from problems similar to those encountered in the model considered by Ref. [13]. I then try to solve these problems by using a new fluid whose Lagrangian is polynomial in the gravitational variables. The merits of a Lagrangian polynomial in some variables are that it is simple and nonsingular with respect to those variables. In Refs. [14, 15], polynomial Lagrangians for gravitation and other fundamental fields were proposed, while in this paper, a polynomial Lagrangian for a perfect fluid is proposed, which reduces to the Lagrangian of a usual perfect fluid when $l$ is a constant. It turns out that, in contrast to the case with an ordinary fluid, the SW gravity coupled with the new fluid supports the radiation epoch and naturally drives the cosmic acceleration. In addition, when writing down the basic framework of DGT, a Lagrangian–Noether analysis is performed, which generalizes the results of Ref. [16] to the cases with an arbitrary matter field and arbitrary $\xi^A$.
The article is organized as follows. In Sec. 2.1, a Lagrangian–Noether analysis is done for the general DGT sourced by a matter field. In Sec. 2.2, I reduce the analysis of Sec. 2.1 in the Lorentz gauges, and show how the two Noether identities in PGT can be elegantly unified into one identity in DGT. In Sec. 3.1, the SW model of DGT is introduced, with the field equations derived both in the general gauge and in the Lorentz gauges. Further, the matter source is discussed in Sec. 3.2, where a modified perfect fluid with a Lagrangian polynomial in the gravitational variables is constructed, and a general class of perfect fluids is defined, which contains both the usual and the modified perfect fluids. Then I couple the SW gravity with the class of fluids and study the coupled system in the homogeneous, isotropic and parity-invariant universe. The field equations are deduced in Sec. 4.1 and solved in Sec. 4.2, and the results are compared with observations in Sec.
---PAGE_BREAK---

4.3. In Sec. 5, I give some conclusions, and discuss the remaining problems and possible solutions.
# 2 de Sitter gauge theory

## 2.1 Lagrangian–Noether machinery

The DGT sourced by a matter field is described by the Lagrangian 4-form
$$ \mathcal{L} = \mathcal{L}(\psi, D\psi, \xi^A, D\xi^A, \mathcal{F}^{AB}), \quad (2) $$

where $\psi$ is a $p$-form valued at some representation space of the dS group $SO(1, 4)$, $D\psi = d\psi + \Omega^{AB}T_{AB} \wedge \psi$ is the covariant exterior derivative, $T_{AB}$ are representations of the dS generators, $\xi^A$ is a dS vector, $D\xi^A = d\xi^A + \Omega^A{}_B\xi^B$, $\Omega^A{}_B$ is the dS connection 1-form, and $\mathcal{F}^A{}_B = d\Omega^A{}_B + \Omega^A{}_C \wedge \Omega^C{}_B$ is the dS curvature 2-form. The variation of $\mathcal{L}$ resulting from the variations of the explicit variables reads
$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge \partial\mathcal{L}/\partial\psi + \delta D\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial\xi^A + \delta D\xi^A \wedge \partial\mathcal{L}/\partial D\xi^A \\ & + \delta\mathcal{F}^{AB} \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \end{aligned} \quad (3) $$
where $(\partial\mathcal{L}/\partial\psi)_{\mu_{p+1}\cdots\mu_4} \equiv \partial\mathcal{L}_{\mu_1\cdots\mu_p\mu_{p+1}\cdots\mu_4}/\partial\psi_{\mu_1\cdots\mu_p}$, and the other partial derivatives are similarly defined. The variations of $D\psi$, $D\xi^A$ and $\mathcal{F}^{AB}$ can be transformed into variations of the fundamental variables $\psi$, $\xi^A$, and $\Omega^{AB}$, leading to

$$ \begin{aligned} \delta \mathcal{L} = & \delta\psi \wedge V_{\psi} + \delta\xi^A \cdot V_A + \delta\Omega^{AB} \wedge V_{AB} \\ & + d(\delta\psi \wedge \partial\mathcal{L}/\partial D\psi + \delta\xi^A \cdot \partial\mathcal{L}/\partial D\xi^A + \delta\Omega^{AB} \wedge \partial\mathcal{L}/\partial \mathcal{F}^{AB}), \end{aligned} \quad (4) $$
where

$$ V_{\psi} = \delta \mathcal{L} / \delta \psi = \partial \mathcal{L} / \partial \psi - (-1)^p D\, \partial \mathcal{L} / \partial D \psi, \quad (5) $$

$$ V_A = \delta \mathcal{L} / \delta\xi^A = \partial \mathcal{L} / \partial\xi^A - D\, \partial \mathcal{L} / \partial D\xi^A, \quad (6) $$

$$ V_{AB} = \delta \mathcal{L} / \delta\Omega^{AB} = T_{AB}\psi \wedge \partial \mathcal{L} / \partial D\psi + \partial \mathcal{L} / \partial D\xi^{[A} \cdot \xi_{B]} + D\, \partial \mathcal{L} / \partial \mathcal{F}^{AB}. \quad (7) $$
The symmetry transformations in DGT consist of the diffeomorphism transformations and the dS transformations. The diffeomorphism transformations can be promoted to a gauge-invariant version [16, 17], namely, the parallel transports in the fiber bundle with the gauge group as the structure group. The action of an infinitesimal parallel transport on a variable is a gauge-covariant Lie derivative² $L_v = v\rfloor D + D\, v\rfloor$, where $v$ is the vector field which generates the infinitesimal parallel transport, and $\rfloor$ denotes a contraction, for example, $(v\rfloor\psi)_{\mu_2\cdots\mu_p} = v^{\mu_1}\psi_{\mu_1\mu_2\cdots\mu_p}$. Put $\delta = L_v$ in Eq. (3) and utilize the arbitrariness of $v$; one obtains the chain rule

$$ v\rfloor\mathcal{L} = (v\rfloor\psi) \wedge \partial\mathcal{L}/\partial\psi + (v\rfloor D\psi) \wedge \partial\mathcal{L}/\partial D\psi + (v\rfloor D\xi^A) \cdot \partial\mathcal{L}/\partial D\xi^A + (v\rfloor\mathcal{F}^{AB}) \wedge \partial\mathcal{L}/\partial\mathcal{F}^{AB}, \quad (8) $$
and the first Noether identity

$$ (v\rfloor D\psi) \wedge V_{\psi} + (-1)^p(v\rfloor\psi) \wedge DV_{\psi} + (v\rfloor D\xi^A) \cdot V_A + (v\rfloor\mathcal{F}^{AB}) \wedge V_{AB} = 0. \quad (9) $$

²The gauge-covariant Lie derivative has been used in the metric-affine gauge theory of gravity [18].
---PAGE_BREAK---

On the other hand, the dS transformations are defined as vertical isomorphisms on the fiber bundle. The actions of an infinitesimal dS transformation on the fundamental variables are as follows:

$$ \delta\psi = B^{AB}T_{AB}\psi, \quad \delta\xi^A = B^{AB}\xi_B, \quad \delta\Omega^{AB} = -DB^{AB}, \qquad (10) $$
where $B^A{}_B$ is a dS algebra-valued function which generates the infinitesimal dS transformation. Substitute Eq. (10) and $\delta\mathcal{L} = 0$ into Eq. (4), and make use of Eq. (7) and the arbitrariness of $B^{AB}$; one arrives at the second Noether identity

$$ DV_{AB} = -T_{AB}\psi \wedge V_{\psi} - V_{[A} \cdot \xi_{B]}. \qquad (11) $$
The above analyses are so general that they do not require the existence of a metric field. In the special case with a metric field being defined, $\xi^A \xi_A$ equal to a positive constant, and $p=0$, the above analyses coincide with those in Ref. [16].
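As a quick consistency check of the group-theoretic setting (my own numerical illustration, not part of the paper's derivation), the dS generators in the vector representation, $(T_{AB})^C{}_D = \delta^C_A \eta_{BD} - \delta^C_B \eta_{AD}$, can be verified to close under the $so(1,4)$ commutation relations $[T_{AB}, T_{CD}] = \eta_{BC} T_{AD} - \eta_{AC} T_{BD} - \eta_{BD} T_{AC} + \eta_{AD} T_{BC}$:

```python
import numpy as np

# Numerical check of the so(1,4) commutation relations in the
# vector representation (T_AB)^C_D = delta^C_A eta_BD - delta^C_B eta_AD.

eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])   # 5d Minkowski metric

def T(A, B):
    t = np.zeros((5, 5))
    t[A, :] += eta[B, :]    # delta^C_A eta_BD contribution
    t[B, :] -= eta[A, :]    # -delta^C_B eta_AD contribution
    return t

for A in range(5):
    for B in range(5):
        for C in range(5):
            for D in range(5):
                lhs = T(A, B) @ T(C, D) - T(C, D) @ T(A, B)
                rhs = (eta[B, C] * T(A, D) - eta[A, C] * T(B, D)
                       - eta[B, D] * T(A, C) + eta[A, D] * T(B, C))
                assert np.allclose(lhs, rhs)
print("so(1,4) commutation relations verified")
```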
## 2.2 Reduction in the Lorentz gauges

Consider the case with $\xi^A \xi_A = l^2$, where $l$ is a positive function. Then we may define the projector $\tilde{\delta}^A{}_B = \delta^A{}_B - \xi^A \xi_B / l^2$, the generalized tetrad $\tilde{D} \xi^A = \tilde{\delta}^A{}_B D \xi^B$, and a symmetric rank-2 tensor³

$$ g_{\mu\nu} = \eta_{AB}(\tilde{D}_{\mu}\xi^{A})(\tilde{D}_{\nu}\xi^{B}), \qquad (12) $$
which is a localization of the dS metric $\hat{g}_{\mu\nu} = \eta_{AB}(d_{\mu}\hat{\xi}^{A})(d_{\nu}\hat{\xi}^{B})$, where $\hat{\xi}^{A}$ are the 5d Minkowski coordinates on the 4d dS space. Though Eq. (12) seems less natural than the choice $g^{*}_{\mu\nu} = \eta_{AB}(D_{\mu}\xi^{A})(D_{\nu}\xi^{B})$, it coincides with another natural identification (15) (the relation between Eqs. (12) and (15) will be discussed later). If $g_{\mu\nu}$ is non-degenerate, it is a metric field with Lorentzian signature, and one may define $\tilde{D}^{\mu}\xi_A \equiv g^{\mu\nu}\tilde{D}_{\nu}\xi_A$. Put $v^\mu = \tilde{D}^\mu\xi_A$ in Eq. (9) and utilize $(\tilde{D}_\mu\xi^A)(\tilde{D}^\mu\xi_B) = \tilde{\delta}^A{}_B$; we get

$$ \begin{aligned} \tilde{V}_A = &-(\tilde{D}\xi_A \rfloor D\psi) \wedge V_\psi - (-1)^p(\tilde{D}\xi_A \rfloor \psi) \wedge DV_\psi - (\tilde{D}\xi_A \rfloor d\ln l) \cdot V_C\xi^C \\ &-(\tilde{D}\xi_A \rfloor \mathcal{F}^{CD}) \wedge V_{CD}, \end{aligned} \qquad (13) $$
|
| 104 |
+
|
| 105 |
+
where $\tilde{V}_A = \tilde{\delta}^B{}_AV_B$. When $l$ is a constant, Eq. (13) implies that the $\xi^A$ equation ($\tilde{V}_A = 0$ for this case) can be deduced from the other field equations ($V_\psi = 0$ and $V_{CD} = 0$), as pointed out by Ref. [19]. Substitute Eq. (13) into Eq. (11), and make use of $\tilde{V}_{[A} \cdot \xi_{B]} = V_{[A} \cdot \xi_{B]}$ and $\tilde{D}\xi_{[A} \cdot \xi_{B]} = D\xi_{[A} \cdot \xi_{B]}$, one attains
$$ \begin{aligned} DV_{AB} = &-T_{AB}\psi \wedge V_{\psi} + ((D\xi_{[A} \cdot \xi_{B]}) \rfloor D\psi) \wedge V_{\psi} + (-1)^p((D\xi_{[A} \cdot \xi_{B]}) \rfloor \psi) \wedge DV_{\psi} \\ &+((D\xi_{[A} \cdot \xi_{B]}) \rfloor d \ln l) \cdot V_C\xi^C + ((D\xi_{[A} \cdot \xi_{B]}) \rfloor \mathcal{F}^{CD}) \wedge V_{CD}. \end{aligned} \qquad (14) $$
When $l$ is a constant, Eq. (14) coincides with the corresponding result in Ref. [16]. As will be shown later, Eq. (14) unifies the two Noether identities in PGT.
To see this, let us define the Lorentz gauges by the condition $\xi^A = \delta^A{}_4l$ [7]. If $h^A{}_B \in SO(1, 4)$ preserves these gauges, then $h^A{}_B = \text{diag}(h^\alpha_\beta, 1)$, where $h^\alpha_\beta$ belongs to the Lorentz group $SO(1, 3)$. In the Lorentz gauges, $\Omega^\alpha_\beta$ transforms as a Lorentz connection,
³This formula has been given by Refs. [11, 19], and is different from that originally proposed by Stelle and West [7] by a factor $(l_0/l)^2$, where $l_0$ is the vacuum expectation value of $l$.
and $\Omega^{\alpha}_4$ transforms as a co-tetrad field. Therefore, one may identify $\Omega^{\alpha}_{\beta}$ with the spacetime connection $\Gamma^{\alpha}_{\beta}$, and $\Omega^{\alpha}_4$ with the co-tetrad field $e^{\alpha}$ divided by some quantity with the dimension of length, a natural choice for which is $l$. As a result, $\Omega^{AB}$ is identified with a combination of geometric quantities as follows:
$$ \Omega^{AB} = \begin{pmatrix} \Gamma^{\alpha\beta} & l^{-1}e^{\alpha} \\ -l^{-1}e^{\beta} & 0 \end{pmatrix}. \qquad (15) $$
In the case with constant $l$, this formula has been given by Refs. [7,20], and, in the case with varying $l$, it has been given by Refs. [10, 19]. In the Lorentz gauges, $\tilde{D}\xi^4 = 0$, $\tilde{D}\xi^{\alpha} = \Omega^{\alpha}_4 l = e^{\alpha}$ (where Eq. (15) is used), and so $g_{\mu\nu}$ defined by Eq. (12) satisfies $g_{\mu\nu} = \eta_{\alpha\beta}e^{\alpha}_{\mu}e^{\beta}_{\nu}$, implying that Eq. (12) coincides with Eq. (15). Moreover, according to Eq. (15), one finds the expression for $\mathcal{F}^{AB}$ in the Lorentz gauges as follows [19]:
$$ \mathcal{F}^{AB} = \begin{pmatrix} R^{\alpha\beta} - l^{-2}e^{\alpha} \wedge e^{\beta} & l^{-1}[S^{\alpha} - d \ln l \wedge e^{\alpha}] \\ -l^{-1}[S^{\beta} - d \ln l \wedge e^{\beta}] & 0 \end{pmatrix}, \qquad (16) $$
where $R^{\alpha}_{\beta} = d\Gamma^{\alpha}_{\beta} + \Gamma^{\alpha}_{\gamma} \wedge \Gamma^{\gamma}_{\beta}$ is the spacetime curvature, and $S^{\alpha} = de^{\alpha} + \Gamma^{\alpha}_{\beta} \wedge e^{\beta}$ is the spacetime torsion.
Now we are ready to interpret the results of Sec. 2.1 in the Lorentz gauges. In those gauges, $D\psi = D^{\Gamma}\psi + 2l^{-1}e^{\alpha}T_{\alpha4} \wedge \psi$, $D\xi^{\alpha} = e^{\alpha}$, $D\xi^4 = dl$, and so Eq. (2) becomes
$$ \mathcal{L} = \mathcal{L}^L(\psi, D^\Gamma \psi, l, dl, e^\alpha, R^{\alpha\beta}, S^\alpha), \qquad (17) $$
where $D^{\Gamma}\psi = d\psi + \Gamma^{\alpha\beta}T_{\alpha\beta} \wedge \psi$. It is the same as a Lagrangian 4-form in PGT [21], with the fundamental variables being $\psi, l, \Gamma^{\alpha\beta}$ and $e^{\alpha}$. The relations between the variational derivatives with respect to the PGT variables and those with respect to the DGT variables can be deduced from the following equality:
$$ \delta\xi^A \cdot V_A + 2\delta\Omega^{\alpha4} \wedge V_{\alpha4} = \delta l \cdot \Sigma_l + \delta e^\alpha \wedge \Sigma_\alpha, \qquad (18) $$
where $\Sigma_l \equiv \delta\mathcal{L}^L/\delta l$ and $\Sigma_\alpha \equiv \delta\mathcal{L}^L/\delta e^\alpha$. Explicitly, the relations are:
$$ \Sigma_{\psi} \equiv \delta \mathcal{L}^L / \delta \psi = V_{\psi}, \qquad (19) $$
$$ \Sigma_l = V_4 - 2l^{-2}e^\alpha \wedge V_{\alpha 4}, \qquad (20) $$
$$ \Sigma_{\alpha\beta} = \delta\mathcal{L}^L/\delta\Gamma^{\alpha\beta} = V_{\alpha\beta}, \qquad (21) $$
$$ \Sigma_\alpha = 2l^{-1}V_{\alpha 4}. \qquad (22) $$
It is remarkable that the DGT variational derivative $V_{AB}$ unifies the two PGT variational derivatives $\Sigma_{\alpha\beta}$ and $\Sigma_{\alpha}$. With the help of Eqs. (19)–(22), the $\alpha\beta$ components and $\alpha 4$ components of Eq. (14) are found to be
$$ D^\Gamma \Sigma_{\alpha\beta} = -T_{\alpha\beta} \psi \wedge \Sigma_\psi + e_{[\alpha} \wedge \Sigma_{\beta]}, \qquad (23) $$
$$ \begin{aligned} D^\Gamma \Sigma_\alpha = \; &D_\alpha^\Gamma \psi \wedge \Sigma_\psi + (-1)^p (e_\alpha \rfloor \psi) \wedge D^\Gamma \Sigma_\psi + \partial_\alpha l \cdot \Sigma_l \\ &+ (e_\alpha \rfloor R^{\beta\gamma}) \wedge \Sigma_{\beta\gamma} + (e_\alpha \rfloor S^\beta) \wedge \Sigma_\beta, \end{aligned} \qquad (24) $$
which are just the two Noether identities in PGT [21], with both $\psi$ and $l$ as the matter fields. This completes our proof for the earlier statement that the DGT identity (14) unifies the two Noether identities in PGT.
# 3 Polynomial models for DGT
## 3.1 Stelle-West gravity
It is natural to require that the Lagrangian for DGT is regular with respect to the fundamental variables. The simplest regular Lagrangians are polynomials in the variables, and, in order to recover the EC theory, the polynomial Lagrangian should be at least linear in the gauge curvature. Moreover, to ensure that $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution, the polynomial Lagrangian should be at least quadratic in $\mathcal{F}^{AB}$.⁴ The general Lagrangian quadratic in $\mathcal{F}^{AB}$ reads:
$$
\begin{aligned}
\mathcal{L}^G &= (\kappa_1 \epsilon_{ABCDE} \xi^E + \kappa_2 \eta_{AC} \xi_B \xi_D + \kappa_3 \eta_{AC} \eta_{BD}) \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} \\
&= \kappa_1 \mathcal{L}^{\text{SW}} + \kappa_2 (S^\alpha \wedge S_\alpha - 2S^\alpha \wedge d \ln l \wedge e_\alpha) \\
&\quad + \kappa_3 [R^{\alpha\beta} \wedge R_{\alpha\beta} + d(2l^{-2} S^\alpha \wedge e_\alpha)],
\end{aligned}
\quad (25)
$$
where the $\kappa_1$ term is the SW Lagrangian, the $\kappa_2$ and $\kappa_3$ terms are parity odd, and the $\kappa_3$ term is a sum of the Pontryagin and modified Nieh-Yan topological terms. This quadratic Lagrangian is a special case of the at most quadratic Lagrangian proposed in Refs. [10,22], and one should note that the quadratic Lagrangian satisfies the requirement mentioned above about the vacuum solution, while the at most quadratic Lagrangian does not always satisfy that requirement.
Among the three terms in Eq. (25), the SW term is the only one that can be reduced to the EC Lagrangian in the case with positive and constant $\xi^A\xi_A$. Thus the SW Lagrangian is the simplest choice for the gravitational Lagrangian which (i) is regular with respect to the fundamental variables; (ii) can be reduced to the EC Lagrangian; (iii) ensures $\mathcal{F}^{AB} = 0$ is naturally a vacuum solution.
The SW Lagrangian 4-form $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the first line of Eq. (1), while $\xi^A$ is not constrained by any condition. Substituting Eq. (1) into Eqs. (6)–(7), and making use of $\partial\mathcal{L}^{\text{SW}}/\partial\mathcal{F}^{AB} = \epsilon_{ABCDE} \xi^E \mathcal{F}^{CD}$ and the Bianchi identity $D\mathcal{F}^{AB} = 0$, one immediately gets the gravitational field equations
$$ -\kappa \epsilon_{ABCDE} \mathcal{F}^{AB} \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \xi^E, \quad (26) $$
$$ -\kappa \epsilon_{ABCDE} D\xi^E \wedge \mathcal{F}^{CD} = \delta \mathcal{L}^m / \delta \Omega^{AB}, \quad (27) $$
where $\mathcal{L}^m$ is the Lagrangian of the matter field coupled to the SW gravity, with $\kappa$ as the coupling constant. In the vacuum case, Eq. (27) has been given by Ref. [22] by direct computation, while here, Eq. (27) is obtained from the general formula (7).
In the Lorentz gauges, $\mathcal{L}^{\text{SW}}$ takes the same form as $\mathcal{L}^{\text{MM}}$ in the second line of Eq. (1), while $l$ becomes a dynamical field. The gravitational field equations read
$$ -(\kappa/4)\epsilon_{\alpha\beta\gamma\delta}\epsilon^{\mu\nu\sigma\rho}e^{-1}R^{\alpha\beta}_{\quad\mu\nu}R^{\gamma\delta}_{\quad\sigma\rho} - 4\kappa l^{-2}R + 72\kappa l^{-4} = \delta S_m/\delta l, \quad (28) $$
$$ -\kappa \epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\quad\sigma\rho} + 8\kappa e_{[\alpha}^{\mu} e_{\beta]}^{\nu} \partial_{\nu} l^{-1} + 4\kappa l^{-1} T^{\mu}_{\alpha\beta} = \delta S_m / \delta \Gamma^{\alpha\beta}_{\quad\mu}, \quad (29) $$
$$ -8\kappa l^{-1}(G^{\mu}_{\alpha} + \Lambda e_{\alpha}^{\mu}) = \delta S_m / \delta e^{\alpha}_{\mu}, \quad (30) $$
where $e = \det(e^{\alpha}_{\mu})$, $R$ is the scalar curvature, $G^{\mu}_{\alpha}$ is the Einstein tensor, $T^{\mu}_{\alpha\beta} = S^{\mu}_{\alpha\beta} + 2e_{[\alpha}^{\mu} S^{\nu}_{\beta]\nu}$, and $S_m$ is the action of the matter field.
⁴When the Lagrangian is linear in $\mathcal{F}^{AB}$, we may add some ‘constant term’ (independent of $\mathcal{F}^{AB}$) to ensure $\mathcal{F}^{AB}=0$ is a vacuum solution, but this way is not so natural.
## 3.2 Polynomial dS fluid
For the same reason that we chose a polynomial Lagrangian for DGT, we intend to use matter sources with polynomial Lagrangians. It has been shown that the Lagrangians of fundamental fields can be reformulated into polynomial forms [14, 15]. However, when describing the universe, it is more adequate to use a fluid as the matter source. The Lagrangian of an ordinary perfect fluid [23] can be written in a Lorentz-invariant form:
$$ \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = -\epsilon_{\alpha\beta\gamma\delta} e_{\mu}^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \rho + \epsilon_{\alpha\beta\gamma\delta} J^{\alpha} e_{\nu}^{\beta} e_{\rho}^{\gamma} e_{\sigma}^{\delta} \wedge \partial_{\mu}\phi, \quad (31) $$
where $\phi$ is a scalar field, $J^\alpha$ is the particle number current which is Lorentz covariant and satisfies $J^\alpha J_\alpha < 0$, $\rho = \rho(n)$ is the energy density, and $n \equiv \sqrt{-J^\alpha J_\alpha}$ is the particle number density. The Lagrangian (31) is polynomial in the PGT variable $e^\alpha_\mu$, but it is not polynomial in the DGT variables when it is reformulated into a dS-invariant form, in which case the Lagrangian reads
$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{PF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)(\xi^E/l)\rho \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge (\xi^E/l)\partial_\mu\phi, \end{aligned} \quad (32) $$
where $J^A$ is a dS-covariant particle number current, which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$, $\rho = \rho(n)$, and $n \equiv \sqrt{-J^AJ_A}$. Because $l^{-1}$ appears in Eq. (32), the Lagrangian is not polynomial in $\xi^A$.
A straightforward way to modify Eq. (32) into a polynomial Lagrangian is to multiply it by $l$. In the Lorentz gauges, $J^4 = 0$, and we may define the invariant $J^\mu \equiv J^\alpha e_\alpha^\mu$. The modified Lagrangian is then $\mathcal{L}_{\mu\nu\rho\sigma}^{\prime\text{PF}} = -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge l \cdot \partial_\mu\phi$. It can be verified that this Lagrangian violates the particle number conservation law $\nabla_\mu J^\mu = 0$, where $\nabla_\mu$ is the linear, metric-compatible and torsion-free covariant derivative. To preserve particle number conservation, we may replace $l \cdot \partial_\mu\phi$ by $\partial_\mu(l\phi)$; the corresponding dS-invariant Lagrangian is
$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = & -\epsilon_{ABCDE}(D_\mu\xi^A)(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D)\xi^E\rho(n) \\ & +\epsilon_{ABCDE}J^A(D_\nu\xi^B)(D_\rho\xi^C)(D_\sigma\xi^D) \wedge \left(\frac{1}{4}D_\mu\xi^E \cdot \phi + \xi^E \partial_\mu\phi\right). \end{aligned} \quad (33) $$
The perfect fluid described by the above Lagrangian is called the polynomial dS fluid, or dS fluid for short. In the Lorentz gauges,
$$ \begin{aligned} \mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + \epsilon_{\alpha\beta\gamma\delta}J^\alpha e^\beta_\nu e^\gamma_\rho e^\delta_\sigma \wedge (\partial_\mu l \cdot \phi + l \cdot \partial_\mu \phi) \\ &= -e\epsilon_{\mu\nu\rho\sigma}\rho l + e\epsilon_{\mu'\nu\rho\sigma}J^{\mu'} \wedge \partial_\mu(l\phi), \end{aligned} \quad (34) $$
which is equivalent to Eq. (31) when $l$ is a constant.
Define the Lagrangian function $\mathcal{L}_{\text{DF}}$ by $\mathcal{L}_{\mu\nu\rho\sigma}^{\text{DF}} = \mathcal{L}_{\text{DF}} e\epsilon_{\mu\nu\rho\sigma}$, then $\mathcal{L}_{\text{DF}} = -\rho l + J^\mu \partial_\mu(l\phi)$. To compare the polynomial dS fluid with the ordinary perfect fluid, let us consider a general model with the Lagrangian function
$$ \mathcal{L}_m = -\rho l^k + J^\mu \partial_\mu (l^k \phi), \quad (35) $$
where $k \in \mathbb{R}$. When $k=0$, it describes the ordinary perfect fluid; when $k=1$, it describes the polynomial dS fluid. The variation of $S_m = \int d^4x \, e \, \mathcal{L}_m$ with respect to $\phi$ gives the particle number conservation law $\nabla_{\mu}J^{\mu} = 0$. The variation with respect to $J^{\alpha}$ yields $\partial_{\mu}(l^{k}\phi) = -\mu U_{\mu}l^{k}$, where $\mu \equiv d\rho/dn = (\rho+p)/n$ is the chemical potential, $p = p(n)$ is the pressure, and $U^{\mu} \equiv J^{\mu}/n$ is the 4-velocity of the fluid particles. Making use of these results, one may check that the on-shell Lagrangian function is equal to $pl^{k}$, and that the variational derivatives are
$$
\delta S_m / \delta l = -k \rho l^{k-1}, \tag{36}
$$
$$
\delta S_m / \delta \Gamma^{\alpha\beta}_{\mu} = 0, \quad (37)
$$
$$
\delta S_m / \delta e^\alpha_\mu = (\rho + p) l^k U^\mu U_\alpha + p l^k e_\alpha^\mu . \quad (38)
$$
It is seen that $\delta S_m / \delta l = 0$ for the ordinary perfect fluid, while $\delta S_m / \delta l = -\rho$ for the polynomial dS fluid.
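As a consistency check of the relations used above, note that $\mu \equiv d\rho/dn = (\rho+p)/n$ is equivalent to the barotropic relation $p = n\,d\rho/dn - \rho$ (a standard thermodynamic identity, implicit in the text rather than stated there). A minimal symbolic sketch:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
rho = sp.Function('rho')(n)  # energy density rho(n)

# Pressure implied by mu = d(rho)/dn = (rho + p)/n:
p = n * sp.diff(rho, n) - rho

# (rho + p)/n should reproduce the chemical potential d(rho)/dn
residual = sp.simplify((rho + p) / n - sp.diff(rho, n))
print(residual)  # 0
```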
Finally, it should be noted that the polynomial dS fluid does not support a signature change corresponding to $\xi^A\xi_A$ varying from negative to positive. The reason is that when $\xi^A\xi_A < 0$, there exists no $J^A$ which satisfies $J^AJ_A < 0$ and $J^A\xi_A = 0$.
# 4 Cosmological solutions
## 4.1 Field equations for the universe
In this section, the coupled system of the SW gravity and the fluid model (35) will be analyzed in the homogeneous, isotropic, parity-invariant and spatially flat universe characterized by the following ansatz [13]:
$$
e^0_\mu = d_\mu t, \quad e^i_\mu = a \, d_\mu x^i, \tag{39}
$$
$$
S^0_{\mu\nu} = 0, \quad S^i_{\mu\nu} = b e^0_\mu \wedge e^i_\nu, \tag{40}
$$
where $a$ and $b$ are functions of the cosmic time $t$, and $i = 1, 2, 3$. On account of Eqs. (39)–(40), the Lorentz connection $\Gamma^{\alpha\beta}{}_{\mu}$ and curvature $R^{\alpha\beta}{}_{\mu\nu}$ can be calculated [13]. Further, assume that $U^{\mu} = e_{0}{}^{\mu}$; then $U_{\mu} = -e_{\mu}{}^{0}$, and so $U_{\alpha} = -\delta^{0}{}_{\alpha}$. Now the reduced form of each term of Eqs. (28)–(30) can be attained. In particular,
$$
\epsilon_{\alpha\beta\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} R^{\alpha\beta}_{\mu\nu} R^{\gamma\delta}_{\sigma\rho} = 96(ha)^{\cdot} a^{-1} h^2, \quad (41)
$$
$$
R = 6[(ha)^{\cdot}a^{-1} + h^2], \tag{42}
$$
$$
\epsilon_{0i\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\sigma\rho} = -4h^2 \dot{l} e_i{}^\mu, \quad (43)
$$
$$
\epsilon_{ij\gamma\delta} \epsilon^{\mu\nu\sigma\rho} e^{-1} \partial_{\nu} l \cdot R^{\gamma\delta}_{\sigma\rho} = 0, \quad (44)
$$
$$
T^{\mu}_{0i} = -2b e_i{}^{\mu}, \quad T^{\mu}_{ij} = 0, \tag{45}
$$
$$
G^{\mu}_{0} = -3h^{2}e_{0}^{\mu}, \qquad (46)
$$
$$
G^{\mu}_i = -[2(ha)^{\cdot} a^{-1} + h^2] e_i^{\mu}, \quad (47)
$$
$$
\delta S_m / \delta e^0_\mu = -\rho l^k e_0^\mu, \quad (48)
$$
$$
\delta S_m / \delta e^i_\mu = p l^k e_i^\mu, \quad (49)
$$
where a dot on top of a quantity, or as a superscript, denotes differentiation with respect to $t$, and $h = \dot{a}/a - b$. Substituting the above equations into Eqs. (28)–(30) leads to
$$
(ha)^{\cdot} a^{-1} (h^2 + l^{-2}) + l^{-2} (h^2 - \Lambda) = k \rho l^{k-1} / 24\kappa, \quad (50)
$$
$$ (h^2 + l^{-2})\dot{l} - 2bl^{-1} = 0, \qquad (51) $$
$$ 8\kappa l^{-1}(-3h^2 + \Lambda) = \rho l^k, \qquad (52) $$
$$ 8\kappa l^{-1}[-2(ha)^{\cdot} a^{-1} - h^2 + \Lambda] = -pl^k, \qquad (53) $$
which constitute the field equations for the universe.
## 4.2 Solutions for the field equations
Before solving the field equations (50)–(53), let us first derive the continuity equation from the field equations. Rewrite Eq. (52) as
$$ h^2 = l^{-2} - \rho l^{k+1}/24\kappa. \qquad (54) $$
Substituting Eq. (54) into Eq. (53) yields
$$ (ha)^{\cdot} a^{-1} = l^{-2} + (\rho + 3p)l^{k+1}/48\kappa. \qquad (55) $$
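This reduction can be checked symbolically. Comparing Eqs. (52) and (54) fixes $\Lambda = 3l^{-2}$, after which Eq. (53), solved for $(ha)^{\cdot}a^{-1}$ and combined with Eq. (54), indeed gives Eq. (55). A sketch with sympy, assuming only the equations above:

```python
import sympy as sp

l, rho, p, kappa, k = sp.symbols('l rho p kappa k', positive=True)
Lam = 3 / l**2                                   # from comparing Eqs. (52) and (54)
h2 = 1 / l**2 - rho * l**(k + 1) / (24 * kappa)  # h^2, Eq. (54)

# Eq. (53) solved for (ha)^. a^{-1}:
ha_dot = (Lam - h2) / 2 + p * l**(k + 1) / (16 * kappa)

# Compare with the right-hand side of Eq. (55)
rhs55 = 1 / l**2 + (rho + 3 * p) * l**(k + 1) / (48 * kappa)
print(sp.simplify(ha_dot - rhs55))  # 0
```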
Multiplying Eq. (55) by $2h$, and making use of Eq. (54) and $h = \dot{a}/a - b$, one gets
$$ 2h\dot{h} = (\rho + p)l^{k+1}\dot{a}a^{-1}/8\kappa - 2b(ha)^{\cdot}a^{-1}, \qquad (56) $$
in which, according to Eqs. (50), (51) and (54),
$$ 2b(ha)^{\cdot}a^{-1} = \dot{l}[(k+1)\rho l^k/24\kappa + 2l^{-3}]. \qquad (57) $$
Differentiating Eq. (54) with respect to $t$ and comparing the result with Eqs. (56)–(57), one arrives at the continuity equation
$$ \dot{\rho} + 3(\rho + p)\dot{a}a^{-1} = 0, \qquad (58) $$
which is, unexpectedly, the same as the usual one. Suppose that $p = w\rho$, where $w$ is a constant. Then Eq. (58) has the solution
$$ \rho = \rho_0(a/a_0)^{-3(1+w)}, \qquad (59) $$
where $a_0$ and $\rho_0$ are the values of $a$ and $\rho$ at some moment $t_0$.
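One can verify directly that Eq. (59) solves the continuity equation (58) with $p = w\rho$. Since $\dot{\rho} = (d\rho/da)\,\dot{a}$ along any expansion history, Eq. (58) is equivalent to $a\,d\rho/da + 3(1+w)\rho = 0$; a minimal sympy sketch:

```python
import sympy as sp

a, a0, rho0, w = sp.symbols('a a_0 rho_0 w', positive=True)
rho = rho0 * (a / a0) ** (-3 * (1 + w))  # candidate solution, Eq. (59)

# Eq. (58) with p = w*rho, rewritten with a as the evolution variable:
# a * d(rho)/da + 3*(1 + w)*rho = 0
residual = sp.simplify(a * sp.diff(rho, a) + 3 * (1 + w) * rho)
print(residual)  # 0
```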
Now we are ready to solve Eqs. (50)–(52), with Eq. (53) replaced by Eq. (58), whose solution is (59). Firstly, substituting Eqs. (54)–(55) into Eq. (50), one finds
$$ \rho l^{k+3} = 48\kappa(3w - k - 1)/(3w + 1). \qquad (60) $$
Assume that $\kappa < 0$; then according to the above relation, $\rho l^{k+3} > 0$ implies $(3w - k - 1)/(3w + 1) < 0$. We are only concerned with the cases $k=0, 1$, and so we assume that $k+1 > -1$; then $\rho l^{k+3} > 0$ constrains $w$ by
$$ -\frac{1}{3} < w < \frac{k+1}{3}. \qquad (61) $$
For the ordinary fluid ($k=0$), pure radiation ($w=1/3$) cannot exist. In fact, on account of Eq. (60), $\rho l^3 = 0$ in this case, which is unreasonable. This problem is similar to that which appeared in Ref. [13]. On the other hand, for the dS fluid ($k=1$), Eq. (61) becomes $-1/3 < w < 2/3$, which contains both the case of pure matter ($w = 0$) and that of pure radiation ($w = 1/3$). Generally, the combination of Eqs. (59) and (60) yields
$$l = l_0(a/a_0)^{\frac{3(w+1)}{k+3}}, \quad (62)$$
where $l_0$ is the value of $l$ when $t = t_0$, and is related to $\rho_0$ by Eq. (60).

Secondly, substituting Eq. (54) into Eq. (51) and utilizing Eqs. (60) and (62), one gets
$$b = \frac{3(w + 1)(k + 2)}{(3w + 1)(k + 3)} \dot{a} a^{-1}, \qquad (63)$$
and hence
$$h = \frac{3w - 2k - 3}{(3w + 1)(k + 3)} \dot{a} a^{-1}. \qquad (64)$$
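Since $h = \dot{a}/a - b$, the coefficient in Eq. (64) is just $1$ minus the coefficient in Eq. (63); a quick symbolic check:

```python
import sympy as sp

w, k = sp.symbols('w k')
b_coeff = 3 * (w + 1) * (k + 2) / ((3 * w + 1) * (k + 3))  # Eq. (63)
h_coeff = sp.together(1 - b_coeff)                          # h = a'/a - b

expected = (3 * w - 2 * k - 3) / ((3 * w + 1) * (k + 3))    # Eq. (64)
print(sp.simplify(h_coeff - expected))  # 0
```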
Thirdly, substitution of Eqs. (60) and (64) into Eq. (52) leads to
$$\dot{a}a^{-1} = H_0(l_0/l), \qquad (65)$$
where $H_0 \equiv (\dot{a}a^{-1})_{t_0}$ is the Hubble constant, which is related to $l_0$ by
$$H_0 = \sqrt{\frac{3w+1}{-3w+2k+3}} \cdot (k+3)l_0^{-1}. \qquad (66)$$
Here note that Eq. (61) implies $3w + 1 > 0$, $-3w + k + 1 > 0$ and $k + 1 > -1$, and so $-3w + 2k + 3 > 0$. By virtue of Eqs. (63), (65) and (62), one has
$$b = b_0(a_0/a)^{\frac{3(w+1)}{k+3}}, \qquad (67)$$
where $b_0$ is related to $H_0$ by Eq. (63). Moreover, substituting Eq. (62) into Eq. (65) and solving the resulting equation, one attains
$$(a/a_0)^{\frac{3(w+1)}{k+3}} - 1 = \frac{3(w+1)}{k+3} \cdot H_0(t-t_0). \qquad (68)$$
In conclusion, the solutions of the field equations (50)–(53) are given by Eqs. (59), (62), (67) and (68), with the independent constants $a_0$, $H_0$ and $t_0$.
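The solution (68) can be checked against Eqs. (62) and (65): writing $x \equiv (a/a_0)^{3(w+1)/(k+3)} = l/l_0$, Eq. (68) says $x = 1 + \frac{3(w+1)}{k+3}H_0(t-t_0)$, and one verifies $\dot{a}/a = H_0/x$. A sympy sketch:

```python
import sympy as sp

t, t0, H0, a0, w, k = sp.symbols('t t_0 H_0 a_0 w k', positive=True)
n = 3 * (w + 1) / (k + 3)       # exponent appearing in Eq. (62)

x = 1 + n * H0 * (t - t0)       # (a/a0)^n = l/l0, from Eq. (68)
a = a0 * x ** (1 / n)           # scale factor solved from Eq. (68)

# Eq. (65) demands a'/a = H0 * l0/l = H0/x
residual = sp.simplify(sp.diff(a, t) / a - H0 / x)
print(residual)  # 0
```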
## 4.3 Comparison with observations
If $k$ is specified, we can determine the value of the coupling constant $\kappa$ from the observed values of $H_0 = 67.4 \text{ km} \cdot \text{s}^{-1} \cdot \text{Mpc}^{-1}$ and $\Omega_0 \equiv 8\pi\rho_0/3H_0^2 = 0.315$ [24]. For example, put $k=1$; then according to Eq. (66) (with $w=0$), one has
$$l_0 = 4/(\sqrt{5}H_0) = 8.19 \times 10^{17} \text{ s}. \qquad (69)$$
Substitution of Eq. (69) and $\rho_0 = 3H_0^2\Omega_0/8\pi = 1.79 \times 10^{-37} \text{ s}^{-2}$ into Eq. (60) yields
$$\kappa = -\rho_0 l_0^4 / 96 = -8.41 \times 10^{32} \text{ s}^2. \qquad (70)$$
This value provides an important reference for future work exploring the viability of the model on solar-system scales.
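The numbers in Eqs. (69)–(70) can be reproduced with a few lines of arithmetic; the Mpc-to-km conversion factor is the only input beyond the quoted observations:

```python
import math

H0_kms_per_Mpc = 67.4          # Hubble constant [24]
Omega0 = 0.315                 # matter density parameter [24]
Mpc_in_km = 3.0857e19          # one megaparsec in kilometres

H0 = H0_kms_per_Mpc / Mpc_in_km            # Hubble constant in s^-1
l0 = 4 / (math.sqrt(5) * H0)               # Eq. (69): l0 = 4/(sqrt(5) H0)
rho0 = 3 * H0**2 * Omega0 / (8 * math.pi)  # rho0 = 3 H0^2 Omega0 / (8 pi)
kappa = -rho0 * l0**4 / 96                 # Eq. (70), from Eq. (60) with k=1, w=0

print(f"l0    = {l0:.2e} s")       # ~8.19e17 s
print(f"rho0  = {rho0:.2e} s^-2")  # ~1.79e-37 s^-2
print(f"kappa = {kappa:.2e} s^2")  # ~-8.41e32 s^2
```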
Also, we can compare the deceleration parameter $q \equiv -a\ddot{a}/\dot{a}^2$ derived from the above models with the observed one. With the help of Eqs. (65) and (62), one finds $\dot{a} \sim a^{(k-3w)/(k+3)}$, then $\ddot{a} = \frac{k-3w}{k+3} \cdot \dot{a}^2 a^{-1}$, and so
$$q = \frac{3w-k}{k+3}. \quad (71)$$
Putting $w=0$, it is seen that the universe accelerates ($q<0$) if $k>0$, expands linearly ($q=0$) if $k=0$, and decelerates ($q>0$) if $k<0$. In particular, for the model with an ordinary fluid ($k=0$), the universe expands linearly⁵; while for the model with a dS fluid ($k=1$), the universe accelerates with $q=-1/4$, which is consistent with the observational result $-1 \le q_0 < 0$ [25–27], where $q_0$ is the present-day value of $q$. It should be noted that Eq. (71) implies that $q$ is a constant when $w$ is a constant, and so the models cannot describe the transition from deceleration to acceleration when $w$ is a constant.
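The statements about the sign of $q$ follow directly from Eq. (71); a trivial numeric check:

```python
def q(w, k):
    """Deceleration parameter of Eq. (71): q = (3w - k)/(k + 3)."""
    return (3 * w - k) / (k + 3)

# Pure matter (w = 0): ordinary fluid vs polynomial dS fluid
print(q(0, 0))   # 0.0   -> linear expansion
print(q(0, 1))   # -0.25 -> accelerating, q = -1/4
print(q(0, -1))  # 0.5   -> decelerating for k < 0
```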
## 5 Remarks
It has been shown that the requirement of a regular Lagrangian may be crucial for DGT: the SW gravity coupled with an ordinary perfect fluid (whose Lagrangian is not regular with respect to $\xi^A$ when $\xi^A\xi_A = 0$) permits neither a radiation epoch nor the acceleration of the universe, while the SW gravity coupled with a polynomial dS fluid (whose Lagrangian is regular with respect to $\xi^A$) is free of these problems. Yet the latter model is still not realistic, because it cannot describe the transition from deceleration to acceleration in the matter epoch.
There are two possible ways to find a more reasonable model. The first is to modify the gravitational part to the general quadratic model (25), which is a special case of the at most quadratic model proposed in Refs. [10, 22], but whose coupling with the polynomial dS fluid is unexplored. It is unknown whether the effect of the $\kappa_2$ term could solve the problem encountered in the SW gravity.
The second way is to modify the matter part. Although the Lagrangian of the polynomial dS fluid is regular with respect to $\xi^A$, it is not regular with respect to $J^A$ when $\xi^A\xi_A = 0$, in which case there should be $J^AJ_A \ge 0$, and so the number density $n \equiv \sqrt{-J^AJ_A}$ is not regular. Maybe one could find a new fluid model whose Lagrangian is regular with respect to all the variables, based on the polynomial models for fundamental fields proposed in Refs. [14, 15].
## Acknowledgments
I thank Profs. S.-D. Liang and Z.-B. Li for their abiding help. I would also like to thank my parents and my wife. This research is supported by the National Natural Science Foundation for Young Scientists of China under Grant No. 12005307.
⁵This result is different from that in Ref. [13], where the cosmological solution describes a decelerating universe. It shows that the SW model is not equivalent to the model considered in Ref. [13].
# References
[1] T. W. B. Kibble. Lorentz invariance and the gravitational field. J. Math. Phys. 2, 212-221 (1961)
[2] D. W. Sciama. On the analogy between charge and spin in general relativity, in: Recent Developments in General Relativity, Festschrift for Infeld (Pergamon Press, Oxford, 1962) pp. 415–439
[3] M. Blagojević and F. W. Hehl. Gauge Theories of Gravitation. A Reader with Commentaries. Imperial College Press, London, 2013
[4] V. N. Ponomariov, A. O. Barvinsky and Y. N. Obukhov. Gauge Approach and Quantization Methods in Gravity Theory (Nauka, Moscow, 2017)
[5] E. W. Mielke. Geometrodynamics of Gauge Fields, 2nd. ed. (Springer, Switzerland, 2017)
[6] K. S. Stelle and P. C. West. De Sitter gauge invariance and the geometry of the Einstein-Cartan theory. J. Phys., A12, L205-L210 (1979)
[7] K. S. Stelle and P. C. West. Spontaneously broken de Sitter symmetry and the gravitational holonomy group. Phys. Rev. D 21, 1466-1488 (1980)
[8] S. W. MacDowell and F. Mansouri. Unified geometric theory of gravity and supergravity. Phys. Rev. Lett. 38, 739-742 (1977)
[9] P. C. West. A geometric gravity Lagrangian. Phys. Lett. B 76, 569 (1978)
[10] H. Westman and T. Złośnik. Exploring Cartan gravity with dynamical symmetry breaking. Class. Quant. Grav. 31, 095004 (2014)
[11] J. Magueijo, M. Rodríguez-Vázquez, H. Westman and T. Złośnik. Cosmological signature change in Cartan gravity with dynamical symmetry breaking. Phys. Rev. D 89, 063542 (2014)
[12] H. Westman and T. Złośnik. An introduction to the physics of Cartan gravity. Ann. Phys. 361, 330-376 (2015)
[13] S. Alexander, M. Cortês, A. Liddle, J. Magueijo, R. Sims, and L. Smolin. The cosmology of minimal varying Lambda theories. Phys. Rev. D 100, 083507 (2019)
[14] H. R. Pagels. Gravitational gauge fields and the cosmological constant. Phys. Rev. D 29, 1690-1698 (1984)
[15] H. Westman and T. Złośnik. Cartan gravity, matter fields, and the gauge principle. Ann. Phys. 334, 157-197 (2013)
[16] J.-A. Lu. Energy, momentum and angular momentum conservation in de Sitter gravity. Class. Quantum Grav. 33, 155009 (2016)
[17] F. W. Hehl, P. von der Heyde, G. D. Kerlick, and J. M. Nester. General relativity with spin and torsion: Foundations and prospects. Rev. Mod. Phys. 48, 393 (1976)
[18] F. W. Hehl, J. D. McCrea, E. W. Mielke, and Y. Ne'eman. Metric-affine gauge theory of gravity: field equations, Noether identities, world spinors, and breaking of dilation invariance. Phys. Rep. 258, 1-171 (1995)
[19] J.-A. Lu and C.-G. Huang. Kaluza-Klein-type models of de Sitter and Poincaré gauge theories of gravity. Class. Quantum Grav. 30, 145004 (2013)
[20] H.-Y. Guo. The local de Sitter invariance. Kexue Tongbao 21, 31-34 (1976)
[21] Y. N. Obukhov. Poincaré gauge gravity: selected topics. Int. J. Geom. Meth. Mod. Phys. 3, 95-138 (2006)
[22] H. Westman and T. Złośnik. Gravity, Cartan geometry, and idealized waywisers. arXiv:1203.5709 (2012)
[23] J. D. Brown. Action functionals for relativistic perfect fluids. Class. Quant. Grav. 10, 1579 (1993)
[24] Planck Collaboration. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 641, A6 (2020)
[25] A. G. Riess et al. Observational evidence from supernovae for an accelerating universe and a cosmological constant. Astron. J. 116, 1009-1038 (1998)
[26] B. Schmidt et al. The high-Z supernova search: measuring cosmic deceleration and global curvature of the universe using type IA supernovae. Astrophys. J. 507, 46-63 (1998)
[27] S. Perlmutter et al. Measurements of Omega and Lambda from 42 high redshift supernovae. Astrophys. J. 517, 565-586 (1999)
|
samples_new/texts_merged/7618174.md
ADDED
@@ -0,0 +1,712 @@
---PAGE_BREAK---

QUADRATIC BOUNDS
ON THE QUASICONVEXITY OF
NESTED TRAIN TRACK SEQUENCES

by

TARIK AOUGAB

Electronically published on March 4, 2014

Topology Proceedings

Web: http://topology.auburn.edu/tp/

Mail: Topology Proceedings
Department of Mathematics & Statistics
Auburn University, Alabama 36849, USA

E-mail: topolog@auburn.edu

ISSN: 0146-4124

COPYRIGHT © by Topology Proceedings. All rights reserved.

---PAGE_BREAK---

QUADRATIC BOUNDS ON THE QUASICONVEXITY OF
NESTED TRAIN TRACK SEQUENCES

TARIK AOUGAB

**ABSTRACT.** Let $S_{g,p}$ denote the genus $g$ orientable surface with $p$ punctures. We show that nested train track sequences constitute $O((g+p)^2)$-quasiconvex subsets of the curve graph, effectivizing a theorem of Howard A. Masur and Yair N. Minsky. As a consequence, the genus $g$ disk set is $O(g^2)$-quasiconvex. We also show that splitting and sliding sequences of birecurrent train tracks project to $O((g+p)^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface, an effective version of a theorem of Masur, Lee Mosher, and Saul Schleimer.

## 1. INTRODUCTION

Let $S_{g,p}$ denote the orientable surface of genus $g$ with $p \ge 0$ punctures, and let $\mathcal{C}(S_{g,p})$ be the corresponding curve complex. Finally, let $\mathcal{C}_k(S_{g,p})$ denote its $k$-skeleton.

Let $(\tau_i)_i$ be a sequence of train tracks on $S_{g,p}$ such that $\tau_{i+1}$ is carried by $\tau_i$ for each $i$. Such a collection of train tracks defines a subset of $\mathcal{C}_0(S_{g,p})$ called a *nested train track sequence*. A train track splitting sequence is an important special case of such a sequence, in which $\tau_i$ is obtained from $\tau_{i-1}$ via one of two simple combinatorial moves, *splitting* and *sliding*.

A nested train track sequence is said to have *$R$-bounded steps* if the $\mathcal{C}_1$-distance between the vertex cycles of $\tau_i$ and those of $\tau_{i+1}$ is bounded above by $R$. Howard A. Masur and Yair N. Minsky [13] show that any nested train track sequence with $R$-bounded steps is a $K = K(R, g, p)$-quasigeodesic. Our first result provides some effective control on $K$ as a function of $g$ and $p$; in what follows, let $\omega(g, p) = 3g + p - 4$.

2010 Mathematics Subject Classification. 57M07, 20F65.
Key words and phrases. curve complex, disk set, mapping class group.
The author was partially supported by an NSF grant during the completion of this work.
©2014 Topology Proceedings.

---PAGE_BREAK---
**Theorem 1.1.** *There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $\mathcal{C}_1(S_{g,p})$ which is $(K(g,p) + R)$-quasiconvex.*

Masur, Lee Mosher, and Saul Schleimer [14] use Masur and Minsky's result [13] to show that if $Y \subseteq S_{g,p}$ is any essential subsurface, then a sliding and splitting sequence on $S_{g,p}$ maps to a uniform unparameterized quasigeodesic under the subsurface projection map to $\mathcal{C}(Y)$. Using Theorem 1.1, we show the following theorem.

**Theorem 1.2.** *There exists a function $A(g,p) = O(\omega(g,p)^2)$ satisfying the following. Suppose $Y \subseteq S_{g,p}$ is an essential subsurface, and let $(\tau_i)_i$ be a splitting and sliding sequence of birecurrent train tracks on $S_{g,p}$. Then $(\tau_i)_i$ projects to an $A(g,p)$-unparameterized quasigeodesic in $\mathcal{C}_1(Y)$.*

Let $H_g$ denote the genus $g$ handlebody and let $D(g) \subset \mathcal{C}_1(S_g)$ denote the set of *meridians*, curves on $S_g$ that bound disks in $H_g$. Also due to Masur and Minsky [13] is the fact that any two meridians in $D(g)$ can be connected by a 15-bounded nested train track sequence. Therefore, we obtain the following corollary of Theorem 1.1.

**Corollary 1.3.** *There exists a function $f(g) = O(g^2)$ such that $D(g)$ is an $f(g)$-quasiconvex subset of $\mathcal{C}_1(S_g)$.*

The mapping class group, denoted $\text{Mod}(S)$, is the group of isotopy classes of orientation preserving homeomorphisms of a surface $S$ (see [5] for a thorough exposition).

As an application of Corollary 1.3, we obtain a more effective approach for detecting when a pseudo-Anosov mapping class $\phi$ is generic. Here, *generic* means that the stable lamination of $\phi$ is not a limit of meridians; the term "generic" is warranted by a theorem of Steven P. Kerckhoff [10], which states that the set of all projective measured laminations which are limits of meridians constitutes a measure 0 subset of $\mathcal{PML}(S)$, the space of all projective measured laminations on a surface $S$.

In what follows, let $d_{\mathcal{C}(S)}$ denote distance in $\mathcal{C}_1(S)$; when there is no confusion, the reference to $S$ will be omitted. Masur and Minsky [11] showed that $\mathcal{C}_1(S)$ is a $\delta$-hyperbolic metric space.

Using Theorem 1.2, [1], and the fact that the curve graphs are uniformly hyperbolic (as shown by the author in [2], and independently in [3], [4], and [9]), we have the following corollary.

---PAGE_BREAK---

**Corollary 1.4.** *There exists a function $r(g) = O(g^2)$ such that $\phi \in \text{Mod}(S_g)$ is a generic pseudo-Anosov mapping class if and only if there exists some $k \in \mathbb{N}$ such that for all $n > k$,*

$$d_{\mathcal{C}}(D(g), \phi^n(D(g))) > r(g).$$

**Remark 1.5.** By the argument of Aaron Abrams and Saul Schleimer [1], it suffices to take $r(g) = 2\delta + 2f(g)$ for $\delta$ the hyperbolicity constant of $\mathcal{C}_1$ and $f(g)$ as in the statement of Corollary 1.3.

We also note that quasiconvexity of $D(g)$ and the fact that splitting sequences map to quasigeodesics under subsurface projection are main ingredients in the proof due to Masur and Schleimer [15] that the disk complex is $\delta$-hyperbolic. Thus, the effective control discussed above is perhaps a first step toward studying the growth of the hyperbolicity constant of the disk complex.

The proof of the main theorem, Theorem 1.1, relies on the ability to control

(1) the hyperbolicity constant $\delta(g,p)$ of $\mathcal{C}_1$;

(2) $B = B(g,p)$, a bound on the diameter of the set of vertex cycles of a fixed train track $\tau \subset S_{g,p}$; and

(3) the "nesting lemma constant" $k(g,p)$.

As mentioned above, due to work of the author [2] and of the authors of [3], [4], and [9], curve graphs are uniformly hyperbolic. Furthermore, [9] shows that all curve graphs are 17-hyperbolic.

Regarding (2), the author [2] has also shown that for sufficiently large $\omega$, $B(g,p) \le 3$.

Therefore, all that remains is to analyze the growth of $k(g,p)$, which we address in section 5 by following Masur and Minsky's original argument [11] while keeping track of the constants that arise along the way. To do this, however, we need an effective criterion for determining when a train track $\tau$ is non-recurrent, which we address in section 4.

In section 2, we review some preliminaries about curve complexes and subsurface projections. In section 3, we review train tracks on surfaces and bounds on curve graph distance given by intersection number, as obtained in previous work. In section 4, we obtain an effective way of detecting non-recurrence of train tracks by analyzing the linear algebra of the corresponding branch-switch incidence matrix. In section 5, we obtain an effective version of Masur and Minsky's nesting lemma [11], which is the main tool needed to prove Theorem 1.1. In section 6, we complete the proofs of Theorems 1.1 and 1.2 and Corollary 1.3.

---PAGE_BREAK---

## 2. PRELIMINARIES: COARSE GEOMETRY, COMBINATORIAL COMPLEXES, AND SUBSURFACE PROJECTIONS

Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces. For some $k \ge 1$, a relation $f : X \to Y$ is a *$k$-quasi-isometric embedding* of $X$ into $Y$ if, for any $x_1, x_2 \in X$, we have

$$\frac{1}{k} d_X(x_1, x_2) - k \le d_Y(f(x_1), f(x_2)) \le k \cdot d_X(x_1, x_2) + k.$$

Since $f$ is not necessarily a map, $f(x)$ and $f(y)$ need not be singletons, and the distance $d_Y(f(x), f(y))$ is defined to be the diameter in the metric $d_Y$ of the union $f(x) \cup f(y)$. If the $k$-neighborhood of $f(X)$ is all of $Y$, then $f$ is a *k-quasi-isometry* between $X$ and $Y$, and we refer to $X$ and $Y$ as being *quasi-isometric*.
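As a toy numeric illustration of the definition (an assumed example of ours, not from the paper): the map $f(n) = 2n$ from $\mathbb{Z}$ to $\mathbb{Z}$ is a $2$-quasi-isometric embedding, and since every integer lies within distance $1$ of an even integer, it is in fact a $2$-quasi-isometry. A minimal sketch checking the two-sided inequality on a finite sample of pairs:

```python
# Toy illustration (not from the paper): f(n) = 2n as a 2-quasi-isometric
# embedding of the integers into themselves, checked on a finite sample.
k = 2
f = lambda n: 2 * n

pairs = [(x, y) for x in range(-20, 21) for y in range(-20, 21)]
ok = all(
    abs(x - y) / k - k <= abs(f(x) - f(y)) <= k * abs(x - y) + k
    for x, y in pairs
)
print(ok)  # every sampled pair satisfies both quasi-isometry inequalities
```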
Given an interval $[a, b] \subset \mathbb{Z}$, a *$k$-quasigeodesic* in $X$ is a $k$-quasi-isometric embedding $f : [a, b] \to X$. If $f : [a, b] \to X$ is any relation such that there exists an interval $[c, d]$ and a strictly increasing function $g : [c, d] \to [a, b]$ such that $f \circ g$ is a $k$-quasigeodesic, we say that $f$ is a *$k$-unparameterized quasigeodesic*. In this case we also require that, for each $i \in [c, d - 1]$, the diameter of $f([g(i), g(i+1)])$ is at most $k$. We will sometimes refer to a quasigeodesic by its image in the metric space $X$.

A simple closed curve on $S_{g,p}$ is *essential* if it is homotopically non-trivial and not homotopic into a neighborhood of a puncture.

The *curve complex* of $S_{g,p}$, denoted $\mathcal{C}(S_{g,p})$, is the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves on $S_{g,p}$, such that $k+1$ vertices span a $k$-simplex exactly when the corresponding $k+1$ isotopy classes can be realized disjointly on $S_{g,p}$. The curve complex is made into a metric space by identifying each simplex with the standard Euclidean simplex with unit length edges. Let $\mathcal{C}_k(S)$ denote the $k$-skeleton of $\mathcal{C}(S)$.

The curve complex is a locally infinite, infinite diameter metric space. By a theorem of Masur and Minsky [11], $\mathcal{C}(S)$ is $\delta$-hyperbolic for some $\delta = \delta(S) > 0$, meaning that the $\delta$-neighborhood of the union of any two edges of a geodesic triangle contains the third edge.
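The thin-triangle condition can be made concrete on a finite graph. The sketch below (a toy model of ours, not part of the paper's argument; all function names are hypothetical) computes breadth-first distances, extracts one geodesic per side of a triangle, and measures how far each side strays from the union of the other two. In the hexagonal cycle, the sampled geodesic triangle is 1-slim.

```python
from collections import deque

def bfs_dist(adj, src):
    """Breadth-first distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def a_geodesic(adj, s, t):
    """Recover one geodesic from s to t by walking down the distance field."""
    dist = bfs_dist(adj, t)
    path, u = [s], s
    while u != t:
        u = min(adj[u], key=dist.get)  # each step decreases distance to t by 1
        path.append(u)
    return path

def slimness(adj, a, b, c):
    """Smallest d such that each chosen side of triangle abc lies in the
    d-neighborhood of the union of the other two sides."""
    sides = [a_geodesic(adj, a, b), a_geodesic(adj, b, c), a_geodesic(adj, c, a)]
    worst = 0
    for i, side in enumerate(sides):
        others = set(sides[(i + 1) % 3]) | set(sides[(i + 2) % 3])
        for v in side:
            d = bfs_dist(adj, v)
            worst = max(worst, min(d[u] for u in others))
    return worst

# Hexagonal cycle: the geodesic triangle on vertices 0, 2, 4 is 1-slim.
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(slimness(adj, 0, 2, 4))  # prints 1
```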

The curve complex admits an isometric (but not properly discontinuous) action of $\text{Mod}(S)$, and it is a flag complex, so its combinatorics are completely encoded by $\mathcal{C}_1(S)$, the *curve graph*. Note also that $\mathcal{C}(S)$ is quasi-isometric to $\mathcal{C}_1(S)$; therefore, to study the coarse geometry of $\mathcal{C}$, it suffices to consider the curve graph. Let $d_{\mathcal{C}}$ denote distance in the curve graph.

---PAGE_BREAK---

If $p \ne 0$, we can consider more general combinatorial complexes, which also allow vertices to represent essential arcs connecting punctures, up to isotopy. As such, define $\mathcal{AC}(S)$, the *arc and curve complex* of $S$, to be the simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves and arcs on $S$. In case $S$ has boundary, the isotopy classes of arcs which constitute a vertex of $\mathcal{AC}$ are not required to be rel boundary; that is, two arcs represent the same vertex if they are isotopic via an isotopy which need not fix the boundary pointwise.

As with $\mathcal{C}(S)$, two vertices are connected by an edge if and only if the corresponding isotopy classes can be realized disjointly, and the higher dimensional skeleta are defined by requiring $\mathcal{AC}(S)$ to be flag. As with $\mathcal{C}$, denote by $\mathcal{AC}_k(S)$ the $k$-skeleton of $\mathcal{AC}(S)$. It is worth noting that $\mathcal{AC}(S)$ is quasi-isometric to $\mathcal{C}(S)$, with quasiconstants not depending on the topological type of $S$.

A non-annular subsurface $Y$ of $S$ is the closure of a complementary component of an essential multi-curve on $S$; an annular subsurface $Y \subseteq S$ is a closed neighborhood of an essential simple closed curve on $S$, homeomorphic to $[0, 1] \times S^1$. A subsurface is *essential* if its boundary components are all essential curves and it is not homotopy equivalent to a thrice-punctured sphere.

Let $Y \subseteq S$ be an essential, embedded subsurface of $S$. Then there is a covering space $S^Y$ associated to the inclusion $\pi_1(Y) < \pi_1(S)$. While $S^Y$ is not compact, the Gromov compactification of $S^Y$ is homeomorphic to $Y$, and via this homeomorphism, we identify $\mathcal{AC}(Y)$ with $\mathcal{AC}(S^Y)$. Then, given $\alpha \in \mathcal{AC}_0(S)$, the subsurface projection map $\pi_Y : \mathcal{AC}(S) \to \mathcal{AC}(Y)$ is defined by setting $\pi_Y(\alpha)$ equal to the preimage of $\alpha$ under the covering map $S^Y \to S$.

Technically, this defines a map from $\mathcal{AC}_0(S)$ into $2^{\mathcal{AC}_0(Y)}$ since there may be multiple connected components of the preimage of a curve or arc, but the image of any point in the domain is a bounded subset of the range. Thus, to make $\pi_Y$ a map, we can simply choose some component of this preimage for each point in the domain and then extend the map $\pi_Y$ simplicially to the higher dimensional skeleta.

Given an arc $a \in \mathcal{AC}(S)$, there is a closely related collection of simple closed curves $\tau(a)$, obtained from $a$ by surgering along the boundary components that $a$ meets. More concretely, let $\mathcal{N}(a)$ denote a thickening of the union of $a$ together with the (at most two) boundary components of $S$ that $a$ meets, and define $\tau(a) \in 2^{\mathcal{C}_1(S)}$ to be the set of components of $\partial(\mathcal{N}(a))$.

Thus, we obtain a *subsurface projection map*

$$\psi_Y := \tau \circ \pi_Y : \mathcal{C}(S) \to \mathcal{C}(Y)$$

for $Y \subseteq S$ any essential subsurface.

Then, given $\alpha, \beta \in \mathcal{C}(S)$, define $d_Y(\alpha, \beta)$ by

$$d_Y(\alpha, \beta) := \text{diam}_{\mathcal{C}(Y)}(\psi_Y(\alpha) \cup \psi_Y(\beta)).$$

---PAGE_BREAK---

## 3. TRAIN TRACKS AND INTERSECTION NUMBERS

In this section, we recall some basic terminology of train tracks on surfaces; we refer the reader to [18] and [16] for a more in-depth discussion. A *train track* $\tau \subset S$ is an embedded 1-complex whose vertices and edges are called *switches* and *branches*, respectively. Branches are smooth parameterized paths with well-defined tangent vectors at the initial and terminal switches. At each switch $v$ there is a unique line $L \subset T_v S$ such that the tangent vector at $v$ of any branch incident at $v$ lies in $L$.

As part of the data of $\tau$, we choose a preferred direction along this line at each switch $v$; a half branch incident at $v$ is called *incoming* if its tangent vector at $v$ is parallel to this chosen direction and is called *outgoing* if it is anti-parallel. Therefore, at each switch, the incident half branches are partitioned disjointly into two orientation classes, the *incoming germ* and the *outgoing germ*.

The valence of each switch must be at least 3 unless $\tau$ has a connected component consisting of a simple closed curve; in this case, $\tau$ has one bivalent switch for such a component.

Finally, we require that every complementary component of $S \setminus \tau$ have negative generalized Euler characteristic, that is,

$$\chi(Q) - \frac{1}{2}V(Q) < 0$$

for any complementary component $Q$; here, $\chi(Q)$ is the usual Euler characteristic and $V(Q)$ is the number of cusps on $\partial(Q)$.
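This condition rules out low-complexity complementary regions: a disk ($\chi(Q) = 1$) is permitted only when it carries at least three cusps, so smooth disks, monogons, and bigons are all excluded, while an annulus ($\chi(Q) = 0$) needs at least one cusp. A tiny sketch of the arithmetic (the helper name is ours, not the paper's):

```python
def generalized_euler(chi, cusps):
    """chi(Q) - V(Q)/2 for a complementary region Q."""
    return chi - cusps / 2

# A disk (chi = 1) with k cusps is allowed only when k >= 3:
allowed = [k for k in range(6) if generalized_euler(1, k) < 0]
print(allowed)  # [3, 4, 5]

# An annulus (chi = 0) needs at least one cusp:
print(generalized_euler(0, 0) < 0, generalized_euler(0, 1) < 0)  # False True
```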

A *train path* is a path $\gamma : [0, 1] \to \tau$, smooth on $(0, 1)$, which traverses a switch only by entering via one germ and exiting from the other; a *closed train path* is a train path with $\gamma(0) = \gamma(1)$. A *proper closed train path* is a closed train path with $\gamma'(0) = \gamma'(1)$; here, $\gamma'(t)$ is the unit tangent vector to the path $\gamma$ at time $t$.

Let $\mathcal{B}$ denote the set of branches of $\tau$; then a non-negative, real-valued function $\mu : \mathcal{B} \to \mathbb{R}_{\ge 0}$ is called a *transverse measure* on $\tau$ if for each switch $v$ of $\tau$, we have

$$\sum_{b \in i(v)} \mu(b) = \sum_{b' \in o(v)} \mu(b')$$

where $i(v)$ is the set of incoming branches at $v$ and $o(v)$ is the set of outgoing ones. These are called the *switch conditions*. $\tau$ is called *recurrent* if it admits a strictly positive transverse measure, that is, one that assigns a positive weight to every branch. A switch of $\tau$ is called *semi-generic* if exactly one of its two germs of half branches consists of a single half branch. $\tau$ is called *semi-generic* if all of its switches are semi-generic, and $\tau$ is *generic* if $\tau$ is semi-generic and each switch has degree at most 3. $\tau$ is called *large* if each connected component of its complement is simply connected.

---PAGE_BREAK---

Any positive scaling of a transverse measure is also a transverse measure, and therefore the set of all transverse measures, viewed as a subset of $\mathbb{R}^\mathcal{B}$, is a cone over a compact polyhedron in projective space. Let $P(\tau)$ denote the projective polyhedron of transverse measures. A projective measure class $[\mu] \in P(\tau)$ is called a *vertex cycle* if it is an extreme point of $P(\tau)$. It is worth noting that if $\tau$ is any train track on $S$, there exists a generic, recurrent train track $\tau'$ such that $P(\tau) = P(\tau')$.

A lamination $\lambda$ is *carried* by $\tau$ if there is a smooth map $\phi : S \to S$, isotopic to the identity with $\phi(\lambda) \subset \tau$, called the *$\tau$-carrying map* for $\lambda$, such that the restriction of the differential $d\phi$ to any tangent line of $\lambda$ is non-singular. If $c$ is any simple closed curve carried by $\tau$, then $c$ induces an integral transverse measure called the *counting measure*, which assigns to each branch of $\tau$ the natural number equaling the number of times the image of $c$ under its carrying map traverses that branch.

A train track $\tau'$ is *carried* by $\tau$ if there exists a smooth map $\phi : S \to S$ isotopic to the identity, such that for any lamination $\lambda$ carried by $\tau'$, $\phi$ is a $\tau$-carrying map for $\lambda$.

A subset $\tau' \subset \tau$ is called a *subtrack* of $\tau$ if it is also a train track on $S$. In this case, we write $\tau' < \tau$.

Given any train track $\tau$ with branch set $\mathcal{B}$, we can distinguish branches as being one of three types: if $b \in \mathcal{B}$ and each half branch of $b$ is the only half branch in its respective germ, $b$ is called *large*. If both half branches of $b$ are in germs containing more than one half branch, $b$ is *small*; otherwise, $b$ is *mixed* (Figure 1).

FIGURE 1. Branch Classes. Left: $b_1$ is small; Middle: $b_2$ is mixed; Right: $b_3$ is large.

If $[v]$ is a vertex cycle of $\tau$, then there is a unique (up to isotopy) simple closed curve $c(v)$ such that $c(v)$ is carried by $\tau$ and the counting measure on $c(v)$ is an element of $[v]$. Therefore, if $[v_1]$ and $[v_2]$ are two vertex cycles of $\tau$, we can define the distance $d([v_1], [v_2])$ between them to be the curve graph distance between their respective simple closed curve representatives:

$$d([v_1], [v_2]) := d_{\mathcal{C}}(c(v_1), c(v_2)).$$

---PAGE_BREAK---

Using this, we can also define the distance between two train tracks $\tau$ and $\tau'$ to be the distance between their vertex cycle sets:

$$d(\tau, \tau') := \min\{d([v_\tau], [v_{\tau'}]) : [v_\tau] \text{ is a vertex cycle of } \tau \text{ and } [v_{\tau'}] \text{ is a vertex cycle of } \tau'\}.$$

A train track $\tau$ is called *transversely recurrent* if, for each branch $b$ of $\tau$, there exists a simple closed curve $c$ intersecting $b$ such that $S \setminus (\tau \cup c)$ contains no bigon complementary regions. A track $\tau$ which is both recurrent and transversely recurrent is called *birecurrent*.

A *nested train track sequence* is a sequence $(\tau_i)_i$ of birecurrent train tracks on $S_{g,p}$ such that $\tau_{j+1}$ is carried by $\tau_j$ for each $j$. This, in turn, determines a collection of vertices in $\mathcal{C}_1(S_{g,p})$ by associating the track $\tau_j$ with its collection of vertex cycles.

Given $R > 0$, a nested train track sequence $(\tau_i)_i$ is said to have *$R$-bounded steps* if

$$d(\tau_i, \tau_{i+1}) \le R$$

for each $i$. An important special case is the example of a *splitting and sliding sequence*. This is any train track sequence where $\tau_{i+1}$ is obtained from $\tau_i$ via one of two combinatorial moves, *splitting* (Figure 2) or *sliding* (Figure 3).

FIGURE 2. Any large branch admits three possible "splittings."

---PAGE_BREAK---

FIGURE 3. Any mixed branch admits a "sliding."

We will need the following theorem, as seen in [2].

**Theorem 3.1.** There exists a natural number $n \in \mathbb{N}$ such that if $\omega(g,p) > n$, the following holds: Suppose $\tau \subset S_{g,p}$ is any train track and $[v_1]$ and $[v_2]$ are vertex cycles of $\tau$. Then

$$d([v_1], [v_2]) \le 3.$$

Let $\text{int}(P(\tau)) \subset P(\tau)$ denote the set of strictly positive transverse measures on $\tau$. Then $\tau$ is recurrent if and only if $\text{int}(P(\tau)) \neq \emptyset$. For $\tau$ a large track, a *diagonal extension* $\sigma$ of $\tau$ is a track such that $\tau < \sigma$ and each branch of $\sigma \setminus \tau$ has the property that its endpoints are incident at corners of complementary regions of $\tau$.

Following [11], let $E(\tau)$ denote the set of all diagonal extensions of $\tau$, and define

$$PE(\tau) := \bigcup_{\sigma \in E(\tau)} P(\sigma).$$

Let $N(\tau)$ be the union of $E(\kappa)$ over all large, recurrent subtracks $\kappa < \tau$:

$$N(\tau) := \bigcup_{\kappa < \tau, \ \kappa \text{ large, recurrent}} E(\kappa),$$

and define

$$PN(\tau) := \bigcup_{\kappa \in N(\tau)} P(\kappa).$$

Define $\text{int}(PE(\tau))$ to be the set of measures in $PE(\tau)$ whose restrictions to $\tau$ are strictly positive, and define

$$\text{int}(PN(\tau)) := \bigcup_{\kappa} \text{int}(PE(\kappa)).$$

The following theorem will be relied upon heavily in section 5.

**Theorem 3.2 ([2]).** For $\epsilon \in (0,1)$, there is some $\eta = \eta(\epsilon)$ such that whenever $\omega(g,p) > \eta(\epsilon)$ and $\alpha, \beta \in \mathcal{C}_0(S_{g,p})$ satisfy $d_{\mathcal{C}}(\alpha, \beta) \ge k$,

$$i(\alpha, \beta) \ge \left( \frac{\omega(g,p)^{\epsilon}}{q(g,p)} \right)^{k-2}$$

where $q(g,p) = O(\log_2(\omega))$.

---PAGE_BREAK---

**Remark 3.3.** In the above, $i(\alpha, \beta)$ is the geometric intersection number between $\alpha$ and $\beta$, defined by

$$i(\alpha, \beta) := \min |x \cap \beta|$$

where the minimum is taken over all $x$ isotopic to $\alpha$.

We can explicitly write down the function $q(g,p)$ from the statement of Theorem 3.2: $q(g,p)$ is an upper bound on the girth of a finite graph with at most $8(6g+3p-7)$ vertices and average degree larger than 2.02. As seen in [6],

$$
\begin{aligned}
q(g,p) &= \left( \frac{8}{\log_2(1.01)} + 5 \right) \log_2(8(6g + 3p - 7)) \\
&< 1000 \cdot \log_2(100\,\omega).
\end{aligned}
$$

This upper bound will be used in section 5.
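The displayed formula is easy to evaluate numerically; the sketch below computes $q(g,p)$ and spot-checks the stated comparison with $1000 \cdot \log_2(100\,\omega)$ on a few surfaces (this is our own sanity check, not part of the paper):

```python
import math

def omega(g, p):
    # The complexity omega(g, p) = 3g + p - 4 from the introduction.
    return 3 * g + p - 4

def q(g, p):
    # The explicit upper bound from the displayed formula above.
    return (8 / math.log2(1.01) + 5) * math.log2(8 * (6 * g + 3 * p - 7))

# Spot-check q(g, p) < 1000 * log2(100 * omega) on sample surfaces.
for g, p in [(2, 0), (1, 2), (0, 5), (3, 1), (10, 0), (50, 7)]:
    assert q(g, p) < 1000 * math.log2(100 * omega(g, p))
print(int(q(2, 0)))  # roughly 2992 for the closed genus-2 surface
```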

## 4. DETECTING RECURRENCE FROM THE INCIDENCE MATRIX

Let $\tau = (S, \mathcal{B}) \subset S_{g,p}$ be a train track with branch set $\mathcal{B}$ and switch set $S$.

Label the branches $\mathcal{B} = \{b_1, \dots, b_n\}$ and the switches $S = \{s_1, \dots, s_m\}$, and identify $\mathbb{R}^n$ with the space of real-valued functions on $\mathcal{B}$. Then, associated to $\tau$ is a linear map $L_\tau : \mathbb{R}^n \to \mathbb{R}^m$ (with a corresponding matrix in the standard bases), defined as follows: given $u \in \mathbb{R}^n$, the $j^{th}$ coordinate of $L_\tau(u)$ is the sum of the incoming weights minus the sum of the outgoing weights at the $j^{th}$ switch, $1 \le j \le m$. Let $\mathbb{R}_+^n$ denote the strictly positive orthant of $\mathbb{R}^n$, the collection of vectors with all positive coordinates.

We call $L_\tau$ the *incidence matrix* for $\tau$. Note that a non-negative $\mu \in \mathbb{R}^n$ is a transverse measure on $\tau$ if and only if $\mu \in \ker(L_\tau)$; thus, $\tau$ is recurrent if and only if $\ker(L_\tau)$ intersects $\mathbb{R}_+^n$ non-trivially.
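As a purely linear-algebraic toy (an abstract one-switch configuration chosen by us for illustration; it is not claimed to be an actual train track on a surface): a switch $s$ whose incoming germ sees only $b_1$ and whose outgoing germ sees $b_1$ and $b_2$ forces $\mu(b_2) = 0$ for every $\mu \in \ker(L_\tau)$, so no strictly positive measure exists. The kernel computation, and the general shape of the sup-norm bound proved below, can be checked directly:

```python
# Abstract toy example (not an actual surface train track): one switch,
# incoming germ {b1}, outgoing germ {b1, b2}.  The switch condition reads
# mu(b1) - (mu(b1) + mu(b2)) = -mu(b2), so the single row of L is (0, -1).

def apply_L(rows, u):
    """Multiply the incidence matrix (list of rows) by the weight vector u."""
    return [sum(r * w for r, w in zip(row, u)) for row in rows]

L = [[0, -1]]
u = [5.0, 2.0]             # a strictly positive candidate weight vector

image = apply_L(L, u)      # = [-mu(b2)] = [-2.0], so u is not in ker(L)
sup_norm = max(abs(x) for x in image)

# Shape of the sup-norm lower bound, here with (g, p) = (2, 0):
g, p = 2, 0
print(sup_norm, sup_norm >= min(u) / (12 * g + 4 * p - 12))  # 2.0 True
```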

As mentioned in the proof of Lemma 4.1 of [11], if $\ker(L_\tau) \cap \mathbb{R}_+^n = \emptyset$, then there is some $\delta > 0$ such that

$$\|L_{\tau}(u)\| \geq \delta \cdot u_{\min}, \quad \forall u \in \mathbb{R}_{+}^{n}.$$

Here, $u_{\min}$ is the minimum over all coordinates of the vector $u$, and $\|\cdot\|$ is the standard Euclidean norm on $\mathbb{R}^m$. The main goal of this section is to effectivize this statement, that is, to obtain explicit control on the size of $\delta$ as a function of $g$ and $p$.

**Theorem 4.1.** Let $\tau = (S, \mathcal{B})$ be a non-recurrent train track on $S_{g,p}$ with $|\mathcal{B}| = n$ and $|S| = m$, and let $u \in \mathbb{R}_+^n$. Then

$$\|L_{\tau}(u)\|_{\sup} \geq \frac{u_{\min}}{12g + 4p - 12},$$

where $\|\cdot\|_{\sup}$ is the sup norm on $\mathbb{R}^m$.

---PAGE_BREAK---
|
| 272 |
+
|
| 273 |
+
*Proof.* We begin by observing that non-recurrence is equivalent to the existence of “extra” branches, ones that must be assigned 0 by any transverse measure:
|
| 274 |
+
|
| 275 |
+
**Lemma 4.2.** Suppose that for each branch $b \in \mathcal{B}$, there is some corresponding transverse measure $\mu_b$ on $\tau$ such that $\mu(b) > 0$. Then $\tau$ is recurrent.
|
| 276 |
+
|
| 277 |
+
Therefore, the existence of a branch $b$, which is assigned 0 by every
|
| 278 |
+
transverse measure on $\tau$, is equivalent to $\tau$ being non-recurrent. We will
|
| 279 |
+
call such a branch *invisible*.
Given $s \in S$, the switch condition at $s$ represents a row vector of the matrix corresponding to the linear transformation $L_{\tau}$. This is the vector $v_s$ that has 1's in the coordinates corresponding to the incoming half branches incident to $s$ and $-1$'s in the coordinates corresponding to the outgoing half branches incident to $s$. Note that $v_s$ could also have a $\pm 2$ in place of two 1's if both ends of a single branch are incident to $s$. Let $R(L_{\tau})$ denote the row space of $L_{\tau}$, the vector space spanned by the row vectors.
The following is an immediate corollary of Theorem 4.1.
**Lemma 4.3.** Suppose $b \in \mathcal{B}$ is an invisible branch. Then $b$ is not contained in a closed train path.
For $b$, a branch of $\tau$, let $S(b) \subset S$ denote the switches of $\tau$ incident to $b$; thus, $|S(b)| = 1$ or 2. For $x \in S(b)$, consider the pointed universal cover $(\tilde{\tau}, \tilde{x})$ with associated covering projection $\pi : (\tilde{\tau}, \tilde{x}) \to (\tau, x)$. We define $P(\tilde{\tau}, \tilde{x}) \subseteq \tilde{\tau}$ to be the subset of the universal cover consisting of train paths in $\tilde{\tau}$ emanating from $\tilde{x}$ that do not traverse any branch which projects to $b$ under $\pi$.
Any train path emanating from $\tilde{x}$ has a natural choice of orientation obtained by defining its initial point to be $\tilde{x}$. This induces an orientation on any branch $e$ contained in $\tilde{P}$. Note that this is well defined because $\tilde{\tau}$ does not contain closed train paths (proper or otherwise).
We say that $P(\tilde{\tau}, \tilde{x})$ is unidirectional if, whenever $e_i, e_j \subseteq P(\tilde{\tau}, \tilde{x})$ project to the same branch $e$ of $\tau$, the orientations of $e$ induced by $e_i$ and $e_j$ agree.
Given $u \in \mathbb{R}^n$, define the *deviation* of $u$ at $s \in S$, denoted by $d_s(u)$, to be the absolute value of the coordinate of $L_\tau(u)$ corresponding to $s$. It suffices to assume that for $u$, as in the statement of the theorem,
$$
(4.1) \qquad d_s(u) < \frac{u_{\min}}{12g + 4p - 12}, \quad \forall s \in S.
$$
We will use this assumption to obtain a contradiction.
Since $\tau$ is non-recurrent, it must contain an invisible branch $b$.
**Lemma 4.4.** Let $s_1, s_2 \in S(b)$ be the two (possibly non-distinct) switches incident to the invisible branch $b$ and let $\tilde{s}_1$ and $\tilde{s}_2 \in \tilde{\tau}$ be corresponding lifts which together bound a lift of $b$. Then at least one of $\mathcal{P}(\tilde{\tau}, \tilde{s}_i)$, $i = 1, 2$ is unidirectional.
*Proof.* Suppose not. Then for each $j = 1, 2$ there exist branches $e_j^1, e_j^2 \subseteq \mathcal{P}(\tilde{\tau}, \tilde{s}_j)$ such that $e_j^1$ and $e_j^2$ project to a common branch $e_j$ of $\tau$ with opposite orientations. Thus, in $\tau$ there exist two train paths starting from $s_1$ and ending at $e_1$ which traverse $e_1$ in opposite directions. Concatenating these two paths produces a loop in $\tau$, which is a train path away from $s_1$.
By exactly the same argument, there is another loop containing the switch $s_2$ and the branch $e_2$, which is a train path away from $s_2$. We can then concatenate these two loops across the branch $b$ to obtain a "dumbbell"-shaped closed train path which contains $b$ (see Figure 4). This contradicts Lemma 4.3. $\square$
FIGURE 4. If neither train path set emanating from $b$ is unidirectional, then there exist non-closed train paths starting and ending at $s_1$ and $s_2$. Joining these paths across $b$ yields a closed train path containing $b$, pictured above.
Therefore, we assume henceforth that $\mathcal{P}(\tilde{\tau}, \tilde{s}_1)$ is unidirectional; let $\mathcal{Q}(s_1) \subseteq \tau$ be the projection of $\mathcal{P}$ to $\tau$. That $\mathcal{P}$ is unidirectional will allow us to redefine which half branches are incoming and which are outgoing (without changing the linear algebraic structure of $L_\tau$) such that each branch of $\mathcal{Q}$ is mixed.
More concretely, orient each edge $e \subseteq \mathcal{Q}(s_1)$ by projecting the orientation on $\tilde{e}$ down to $e$, where $\tilde{e} \subseteq \tilde{\mathcal{P}}$ is any branch of $\tilde{\tau}$ with $\pi(\tilde{e}) = e$; unidirectionality implies that this construction is well defined. Then we simply define a half-branch $e' \subset e \in \mathcal{Q}$ to be outgoing at a switch $s$ if the orientation of $e'$ coming from $e$ points away from $s$, and similarly for incoming branches. Note that this is well defined in that two half-branches incident to the same switch in distinct germs will be assigned opposing directional classes.
This rule then defines an assignment of direction for all half branches of $\tau$ as follows. The half branches of $\tau$ which are not contained in $Q$ can be partitioned disjointly into two subcollections: the *frontier* half branches (those which are incident to a switch contained in $Q$) and the *interior* half branches (those for which the incident switch is not contained in $Q$). Once directions have been assigned to the half branches of $Q$ as above, directions for frontier half branches are determined by which germ they belong to at the corresponding switch. For interior half branches, simply assign the original directions coming from $\tau$.
Let $S(Q) \subseteq S$ denote the switches of $\tau$ contained in $Q$ and recall that $v_s$ denotes the row vector of $L_{\tau}$ corresponding to the switch $s \in S$.
**Lemma 4.5.** The vector $V = \sum_{s \in S(Q)} v_s \in R(L_{\tau})$ is a non-zero integer vector, all of whose coordinates are non-negative.
*Proof.* Since every branch of $Q$ is mixed, each component of $V$ corresponding to a branch of $Q$ is 0. The same is true for any branch not in $Q$ which does not contain a frontier half-branch.
We claim that each frontier half branch must be incoming at the switch of $S(Q)$ to which it is incident; this will imply that $V$ takes a positive value on each component corresponding to a branch containing a frontier half branch.
Indeed, let $e$ be a branch containing a frontier half branch and let $s \in S(Q)$ be incident to $e$. $s \in S(Q)$ implies that there is another branch $e'$ incident to $s$ such that $e'$ is a branch of $Q$ and $e'$ is incoming at $s$. Thus, if $e$ were outgoing at $s$, there would exist a train path emanating from $s_1$ which traverses $e$, by concatenating the train path starting at $s_1$ and ending at $e'$ with the train path connecting $e'$ to $e$ over $s$. This contradicts the assumption that $e \notin Q$.
Thus, to complete the argument, it suffices to show that the collection of frontier half branches is non-empty. Recall that $b$ is an invisible branch, and is therefore not contained in any closed train path. It then follows that the half branch of $b$ incident to $s_1$ is frontier. $\square$
We now use the following elementary fact regarding train tracks on $S_{g,p}$ (see [18] for proof).
**Lemma 4.6.** Let $\tau = (\mathcal{B}, \mathcal{S}) \subset S_{g,p}$ be a train track. Then
$$|\mathcal{B}| \leq 18g + 6p - 18;$$
$$|\mathcal{S}| \leq 12g + 4p - 12.$$
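For concreteness, the two bounds of Lemma 4.6 evaluate as follows; this is illustrative arithmetic only, simply transcribing the displayed formulas:

```python
def max_branches(g, p):
    # Upper bound on |B| from Lemma 4.6, as stated in the text.
    return 18 * g + 6 * p - 18

def max_switches(g, p):
    # Upper bound on |S| from Lemma 4.6, as stated in the text.
    return 12 * g + 4 * p - 12

assert max_branches(2, 0) == 18   # closed genus-2 surface
assert max_switches(2, 0) == 12
assert max_branches(0, 5) == 12   # five-punctured sphere
assert max_switches(0, 5) == 8
```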
Therefore, there are at most $12g+4p-12$ row vectors of $L_{\tau}$ in the sum $V$. Furthermore, since the components of $V$ are all non-negative integers,

$$|V \cdot u| \geq u_{min},$$
where $\cdot$ denotes the standard Euclidean dot product. On the other hand, assuming the validity of (4.1), one obtains
$$
\begin{align*}
|V \cdot u| &= \left| \sum_{s \in S(Q)} v_s \cdot u \right| \le \sum_{s \in S(Q)} |v_s \cdot u| \\
&= \sum_{s \in S(Q)} d_s(u) < (12g + 4p - 12) \cdot \frac{u_{min}}{12g + 4p - 12} = u_{min},
\end{align*}
$$
a contradiction. $\square$
## 5. AN EFFECTIVE NESTING LEMMA
In this section, we will use Theorem 3.2 and Theorem 4.1 to establish the following effective version of Masur and Minsky's [11] nesting lemma.
**Lemma 5.1.** There exists a function $k(g,p) = O(\omega^2)$ such that if $\sigma$ and $\tau$ are large train tracks and $\sigma$ is carried by $\tau$, and $d(\tau,\sigma) > k(g,p)$, then
$$PN(\sigma) \subset \text{int}(PN(\tau)).$$
**Remark 5.2.** When convenient, we will assume our train tracks to be generic; as mentioned in [13], the proof of the nesting lemma in the generic case is easily extendable to the general setting.
If $\mu \in P(\tau)$, define the *combinatorial length* of $\mu$ with respect to $\tau$, $l_{\tau}(\mu)$, to be the integral of $\mu$ over $\mathcal{B}$; that is,
$$l_{\tau}(\mu) := \sum_{b} \mu(b).$$
We also define
$$l_{N(\tau)}(\mu) := \min_{\sigma} l_{\sigma}(\mu)$$
where the minimum is taken over all tracks $\sigma \in N(\tau)$ carrying $\mu$.
We will need the following lemma, as seen in [8].
**Lemma 5.3.** Let $c$ be a simple closed curve carried by a train track $\tau$. Then the counting measure on $c$ is a vertex cycle of $\tau$ if and only if, for any branch $b$ of $\tau$, the image of $c$ under its corresponding carrying map traverses $b$ at most twice, and never twice in the same direction.
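The traversal criterion of Lemma 5.3 is easy to state algorithmically. A minimal sketch follows; the branch names and the $\pm 1$ direction encoding are invented for illustration:

```python
from collections import Counter

def is_vertex_cycle_candidate(traversals):
    """Check the Lemma 5.3 criterion for a carried curve.

    traversals: list of (branch, direction) pairs recording how the
    image of the curve runs over the track; direction is +1 or -1.
    Returns True iff no branch is traversed more than twice, and no
    branch is traversed twice in the same direction.
    """
    per_branch = Counter(b for b, _ in traversals)
    per_branch_dir = Counter(traversals)
    return (all(c <= 2 for c in per_branch.values())
            and all(c <= 1 for c in per_branch_dir.values()))

# b1 traversed twice, once in each direction: allowed.
assert is_vertex_cycle_candidate([("b1", +1), ("b2", +1), ("b1", -1)])
# b1 traversed twice in the same direction: ruled out.
assert not is_vertex_cycle_candidate([("b1", +1), ("b2", +1), ("b1", +1)])
```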
Since the vertex cycles are the extreme points of $P(\tau)$, by the classical Krein-Milman theorem, any projective transverse measure class can be written as a convex combination of vertex cycles; that is, given $\kappa \in P(\tau)$, there exist coefficients $(a_i)$ such that
$$
(5.1) \qquad \kappa = \sum_i a_i \alpha_i,
$$
where $(\alpha_i)$ are the vertex cycles of $\tau$. Any train track on $S_{g,p}$ has at most $18g + 6p - 18$ branches, and therefore, by Lemma 5.3, if $\tau$ is any train track and $\alpha$ is a vertex cycle,
$$
l_{\tau}(\alpha) \leq 2(18g + 6p - 18).
$$
Lemma 5.3 also implies that any train track $\tau$ has at most $3^{18g+6p-18}$ vertex cycles since any branch is traversed once, twice, or no times. We therefore conclude that, given $\kappa$ as in equation (5.1),
$$
(5.2) \qquad \max_i a_i \le l_\tau(\kappa) < \left[ 2(18g + 6p - 18) \cdot 3^{18g+6p} \right] \max_i a_i
$$

$$
(5.3) \qquad = C \cdot \max_i a_i.
$$
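As a sanity check on the size of the constant $C$ from equation (5.3), one can evaluate it directly; the function below is illustrative only and simply transcribes the bracket above:

```python
def C_const(g, p):
    # C = 2 * (branch-count bound) * (vertex-cycle-count bound),
    # transcribing the bracket in equation (5.3).
    return 2 * (18 * g + 6 * p - 18) * 3 ** (18 * g + 6 * p)

# Closed genus-2 surface: the bracket evaluates to 36 * 3^36.
assert C_const(2, 0) == 36 * 3 ** 36
```

The exponential factor dominates, which is why the distance bounds obtained later are stated in terms of $\log_\omega$.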
**Lemma 5.4.** Given $L > 0$, there exists a function $h_L(g,p) = O(\log_{\omega(g,p)}(L))$ such that if $\alpha \in P(\tau)$ and $l_{\tau}(\alpha) \le L$, then $d_C(\alpha, \tau) < h_L(g,p)$.
*Proof*. Suppose $l_{\tau}(\alpha) \le L$. We will abuse notation and refer to the image of $\alpha$ under its carrying map by $\alpha$. Then every time $\alpha$ traverses a branch of $\tau$, by Lemma 5.3, it can intersect a vertex cycle at most twice. Therefore, if $v$ is any vertex cycle of $\tau$,
$$
i(v, \alpha) \le 2L,
$$
and hence, by Theorem 3.2, for any $\epsilon \in (0,1)$ and $\omega = \omega(\epsilon)$ sufficiently large,
$$
\begin{align}
(5.4) \quad d_C(v, \alpha) &\le \frac{\log_\omega(2L)}{\lambda(\log_\omega(3)+1) - \log_\omega(1000 \cdot \log_2(100\omega))} + 2 \\
&= O(\log_\omega(L)). \tag*{\hspace*{\fill} \square}
\end{align}
$$
**Remark 5.5.** One needs to be cautious in manipulating the inequality in Theorem 3.2 to obtain equation (5.4); if
$$
\rho(\omega, \lambda) := \lambda(\log_{\omega}(3) + 1) - \log_{\omega}(1000 \cdot \log_{2}(100\omega)) < 0,
$$
the direction of the inequality changes and we will not get the desired upper bound on curve graph distance. However,
$$
\lim_{\omega \to \infty} \rho(\omega, \lambda) = \lambda > 0,
$$
and therefore, for sufficiently large $\omega$, this is not an issue.
**Lemma 5.6.** Suppose $\sigma$ is a large recurrent train track carried by $\tau$ on $S_{g,p}$, and let $\sigma' \in E(\sigma)$ and $\tau' \in E(\tau)$ such that $\sigma'$ is carried by $\tau'$. Then the total number of times, counting multiplicity, that branches of $\sigma'$ traverse any branch of $\tau' \setminus \tau$ is bounded above by $m_0 = 36g + 12p$.
*Proof.* The complete argument may be found in Masur and Minsky's original paper [11] on the hyperbolicity of the curve complex. For our purposes and for the sake of brevity, it suffices here to simply remark that they show that any given branch of $\sigma'$ can only traverse branches of $\tau' \setminus \tau$ at most twice. Then, since any track has less than $18g + 6p$ branches, the result follows. $\square$
To prove the following lemma, we use the results from section 4.
**Lemma 5.7.** There exists $R = R(g,p)$ with
$$ \frac{1}{R(g,p)} = O(\omega^2), $$
such that if $\sigma < \tau$, $\sigma$ is large, $\tau$ is generic, $\mu \in P(\tau)$, and every branch $b$ of $\tau \setminus \sigma$ and $b'$ of $\sigma$ satisfies $\mu(b) < R(g,p)\mu(b')$, then $\mu \in \text{int}(PE(\sigma))$ and $\sigma$ is recurrent.
*Proof.* We follow Masur and Minsky's original argument [11]. The main tools are the elementary moves on train tracks called splitting and sliding as introduced in section 3 (see figures 2 and 3), which can be used to take $\tau$ to a diagonal extension of $\sigma$. In order to do this, we need to move any branch of $\tau \setminus \sigma$ into a corner of a complementary region of $\sigma$. A split or a slide applied to any such branch either reduces the number of branches of $\tau \setminus \sigma$ incident to a given branch of $\sigma$ or decreases the distance between a branch of $\tau \setminus \sigma$ and a corner of a complementary region of $\sigma$.
Thus, a bounded number of such moves produces a track carried by a diagonal extension of $\sigma$. If a splitting is performed involving a branch $b$ of $\tau \setminus \sigma$ and a branch $c$ of $\sigma$, the resulting track contains a new branch $c'$ of $\sigma$, and we can extend $\mu$ to $c'$ to be consistent with the switch conditions by assigning $\mu(c') = \mu(c) - \mu(b)$. In particular, a sufficient condition for being able to define $\mu$ on the new track is
$$ (5.5) \qquad \mu(c) > \mu(b). $$
There are at most $18g + 6p$ branches of $\tau \setminus \sigma$ and at most $18g + 6p$ branches of $\sigma$ or $\tau$. As mentioned earlier, a splitting move either reduces the number of branches of $\tau \setminus \sigma$ incident to $\sigma$, or it reduces the number of edges of $\sigma$ between a given branch of $\tau \setminus \sigma$ and a corner that it faces. Once a branch of $\tau \setminus \sigma$ is separated from a corner of a complementary region of $\sigma$ by only edges of $\sigma$ for which no splitting moves can be performed, a slide move takes such an edge to a corner point. Therefore, each edge of $\tau \setminus \sigma$ is taken to a corner of $\sigma$ after no more than $18g+6p+1$ slidings and splittings, and therefore we obtain $\tau'$ after at most $(18g+6p)(18g+6p+1)$ such moves.
Now, let $R(g,p) = \frac{1}{(18g+6p)(18g+6p+1)+1}$, and assume that for this value of $R$, the hypothesis of the statement is satisfied. In light of equation (5.5), $\mu$ is definable on the diagonal extension $\tau'$ that we obtain after splitting and sliding as long as
$$ (5.6) \qquad \min_{\sigma} \mu > \frac{1}{R(g,p)} \max_{\tau \setminus \sigma} \mu, $$
which is precisely what the hypothesis of the lemma guarantees. Therefore, $\mu$ is extendable to a diagonal extension of $\sigma$ such that all branches receive positive weights; hence, $\mu \in \text{int}(PE(\sigma))$.
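To see numerically why this choice of $R(g,p)$ keeps the extended weights positive throughout the process, consider the worst case in which every one of the at most $(18g+6p)(18g+6p+1)$ moves subtracts the maximal $\tau \setminus \sigma$ weight from the same branch of $\sigma$. This is a rough sketch with made-up weights, not part of the proof:

```python
def R(g, p):
    # R(g,p) as chosen in the proof of Lemma 5.7.
    n = 18 * g + 6 * p
    return 1.0 / (n * (n + 1) + 1)

g, p = 2, 0
n = 18 * g + 6 * p
moves = n * (n + 1)                 # max number of splits and slides
mu_min = 1.0                        # smallest weight on a sigma-branch
mu_b = 0.99 * R(g, p) * mu_min      # hypothesis: mu(b) < R(g,p) * mu(b')
# Even if every move subtracts mu_b from the same sigma-branch, the
# weight stays positive, since moves * R(g,p) < 1.
assert mu_min - moves * mu_b > 0
```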
It remains to show that $\sigma$ is recurrent; suppose not. Let $B(\sigma)$ denote the branch set of $\sigma$. Then Theorem 4.1 implies that if $u \in \mathbb{R}^{|B(\sigma)|}$ is a vector with all positive coordinates,

$$ \|L_{\sigma}(u)\|_{sup} \geq \frac{u_{\min}}{12g + 4p - 12}. $$
In light of equation (5.6), since $\mu$ satisfies the switch conditions on $\sigma$, the vector $\mu$ has small deviations, up to the additive error coming from the weight it assigns to any branch of $\tau \setminus \sigma$, which is less than

$$ R(g,p) \cdot \mu_{\min}; $$

since we assumed that $\tau$ is generic, there are at most two branches of $\tau \setminus \sigma$ incident to any branch of $\sigma$, and therefore the deviations of $\mu$ are all less than $\frac{\mu_{\min}}{12g+4p-12}$, contradicting Theorem 4.1. $\square$
**Lemma 5.8.** Let $L > 0$ be given. Then there exist functions $s_L(g,p)$ and $y(g,p) = O(\omega^3 3^{18\omega})$ satisfying the following: If $\sigma$ is large and carried by $\tau$ and $\sigma' \in E(\sigma)$, $\tau' \in E(\tau)$ such that $\tau'$ carries $\sigma'$, and if $d_C(\sigma, \tau) \ge s_L$, then any simple closed curve $\beta$ carried on $\sigma'$ can be written in $P(\tau')$ as $\beta_{\tau} + \beta'_{\tau'}$, such that
$$ l_{\tau'}(\beta'_{\tau'}) \le y(g,p) \cdot l_{\sigma'}(\beta) \quad \text{and} \quad l_{\tau}(\beta_{\tau}) \ge s_L(g,p) \, l_{\sigma'}(\beta). $$
*Proof.* The details of the argument are not entirely relevant for the proof of our main theorem but may be found in [11]; therefore, we omit the particulars of the proof, and remark only that in their argument, Masur and Minsky show that it suffices to take
$$ y(g,p) := C \cdot m_0 W_0 C_0, $$
where $C$ is the constant from equation (5.3), $m_0$ is the constant from the statement of Lemma 5.6, $W_0$ is a bound on the weights that a vertex cycle can place on any one branch of $\sigma'$ (and therefore it suffices to take $W_0 = 3$ by Lemma 5.3), and $C_0$ is a bound on the combinatorial length of any vertex cycle on any train track on $S_{g,p}$. Putting all of this together, we obtain
$$y(g,p) := [(2(18g + 6p - 18)) \cdot 3^{18g+6p}] (3(36g+12p-36)^2) = O(\omega^3 3^{18\omega}),$$
as claimed.
Masur and Minsky [11] also show that it suffices to take
$$s_L(g,p) := h_{C_0 L + y(g,p)}(g,p) + 2B,$$
where $B$ is a bound on the curve graph distance between any two vertex cycles of the same train track.
Therefore, by Theorem 3.1, for sufficiently large $\omega$,
$$ (5.7) \qquad s_L(g,p) \le h_{C_0 L + y(g,p)}(g,p) + 6. \qquad \square $$
*Proof of Lemma 5.1.* Again with concision in mind, we do not include the entirety of Masur and Minsky’s argument [11]; we simply remark here that in our notation, it suffices to choose
$$k(g,p) := s_{C m_0 \left( \frac{m_2}{R(g,p)} \right)^{m_3}}(g,p).$$
Here, $m_0$ is as in Lemma 5.6 and is thus bounded above by $36g + 12p$, $m_2 < (18g + 6p)^{18g+6p}$, and $m_3 < 18g + 6p$. Thus,
$$ C m_0 \cdot \left( \frac{m_2}{R(g,p)} \right)^{m_3} < \left[ 2(18g + 6p - 18) \cdot 3^{18g+6p} \right] \cdot (36g + 12p) \left( (18g + 6p)^{18g+6p+2} \right)^{18g+6p} =: D, $$
and therefore, by Lemma 5.8, for $\omega(g,p)$ sufficiently large,
$$ \begin{align*} k(g,p) &< h_D(C_0 D + y(g,p)) + 6 \\ &= O(\log_\omega(\omega^3 3^{18\omega}(18\omega)^{324\omega^2+36\omega})) \\ &= O(\omega^2). \end{align*} \qquad \square $$
## 6. PROOF OF THE MAIN THEOREM AND COROLLARIES
In this section, we prove the main results.
**Theorem 1.1.** There exists a function $K(g,p) = O(\omega(g,p)^2)$ such that any nested train track sequence with $R$-bounded steps is a $(K(g,p) + R)$-unparameterized quasigeodesic of the curve graph $C_1(S_{g,p})$, which is $(K(g,p) + R)$-quasiconvex.
*Proof.* Where possible, we use the same notation that Masur and Minsky [11] do to avoid confusion. Let $\delta$ be the hyperbolicity constant of $C_1(S)$. By [9], it suffices to take $\delta = 17$. Let $B$ be a bound on the diameter of the set of vertex cycles of a given train track $\tau \subset S_{g,p}$. As mentioned above, for sufficiently large $\omega$ it suffices to take $B=3$ (see [2] for a proof of this).
Given a nested train track sequence $(\tau_i)_i$, consider a subsequence $(\tau_{ij})_j$ such that
$$k(g, p) \le d(\tau_{ij}, \tau_{ij+1}) < k(g, p) + R,$$
and such that if $\tau_n$ is any track not in the subsequence $(\tau_{ij})_j$, then there is some $c$ for which
$$d(\tau_{ic}, \tau_n) < k(g, p).$$
Then, since $d(\tau_{ij}, \tau_{ij+1}) \ge k(g, p)$, the effective nesting lemma implies that
$$PN(\tau_{ij+1}) \subset int(PN(\tau_{ij})).$$
For any train track $\tau$, one always has
$$N_1(int(PN(\tau))) \subset PN(\tau),$$
where $N_m(int(PN(\tau)))$ denotes the set of multi-curves at distance at most $m$ in $C_1$ from some multi-curve representing a measure in $int(PN(\tau))$. Combining these two inclusions and inducting yields
$$N_{m-1}(PN(\tau_{ij+m})) \subset int(PN(\tau_{ij})).$$
Masur and Minsky [11] then make use of a lemma which implies that no vertex cycle of $\tau_{ij}$ is in $int(PN(\tau_{ij}))$, and therefore
$$d(\tau_{ij}, \tau_{ik}) \ge |k-j|.$$
Thus, if $(v_{ij})_j$ is any sequence of the vertices of $(\tau_{ij})_j$, we have
$$|m-n| \le d_C(v_{in}, v_{im}) < (k(g,p) + R + 2B)|m-n|,$$
which implies that $(v_{ij})_j$ is a $(k(g,p)+R+2B)$-quasigeodesic. This proves the first part of Theorem 1.1, with $K(g,p) := 2k(g,p) + 46$. (We have shown the sequence to be a $(k(g,p)+R+6)$-quasigeodesic, but we will need the extra $k(g,p)+40$ for the quasiconvexity statement.)
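The bookkeeping behind the choice $K(g,p) := 2k(g,p) + 46$ is just arithmetic; a quick check with an arbitrary illustrative value of $k$:

```python
def K(k):
    # K(g,p) := 2*k(g,p) + 46, as in the proof of Theorem 1.1.
    return 2 * k + 46

k, R = 100, 5   # illustrative stand-ins for k(g,p) and the step bound R
# (quasigeodesic constant k+R+6) + (extra slack k+40) = K + R
assert (k + R + 6) + (k + 40) == K(k) + R
```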
We now show $(\tau_i)_{i \in I_1}$ is $(K(g,p)+R)$-quasiconvex. In any $\delta$-hyperbolic metric space, a geodesic segment $\gamma$ connecting the endpoints of a $K$-quasigeodesic segment $\gamma'$ is contained in a $W$-neighborhood of $\gamma'$, where $W = W(K, \delta)$. $W$ is sometimes known as the *stability constant*.
Therefore, a geodesic segment connecting any two elements of the vertex cycle sequence $(v_{ij})_j$ is contained in a $W(K, \delta) = W(k(g,p)+R+6, 17)$-neighborhood of the sequence.
**Lemma 6.1.** For sufficiently large $\omega$, $W < K(g,p) + R$.
*Proof.* We only give a sketch here; the main idea of the proof follows an argument of Ken'ichi Ohshika [17, p. 35], and we refer to this for a more complete argument. Hyperbolicity of $C_1$ implies the existence of an exponential divergence function; that is, if $\alpha_1, \alpha_2 : [0, \infty) \to C_1$ are two geodesic rays based at the same point $x_0 \in C_1$, then there is some exponential function $f$ so that for sufficiently large $r$ (depending on the choice of geodesic rays), the length of any arc outside of a ball of radius $r$ centered at $x_0$, connecting $\alpha_1(r)$ and $\alpha_2(r)$, is at least $f(r)$.
Let $x$ and $y$ be two elements of a vertex cycle sequence $(v_{ij})_j$ and let $h$ be a geodesic segment connecting them. Denote by $w$ the $(k(g,p)+R+6)$-quasigeodesic segment obtained by following along the vertex sequence from $x$ to $y$.
Let $D = \sup_{x \in h} d_C(x, w)$ and suppose $s \in h$ with $d_C(s, w) = D$. Let $a$ and $b$ be two points on $w$ whose distance from $s$ is $D$ and such that $a$ and $b$ are on different sides of $s$. Note that we can assume that such points exist because the end points of $w$ are also the endpoints of $h$, and therefore $s$ must be at least $D$ from the end points of $w$.
Let $a'$ (respectively, $b'$) be the point located $2D$ from $s$ on either side of $s$ on $w$; if $s$ is closer than $2D$ to one of the endpoints of $w$, simply define $a'$ (respectively, $b'$) to be this corresponding endpoint of $w$. Let $y, z \in h$ be points whose distances are less than $D$ from $a'$ and $b'$, respectively. Note that there is an arc joining $y$ to $z$: first connect $y$ to $a'$, then $a'$ to $b'$ along $w$, and then connect $b'$ to $z$. Thus,
$$
\begin{align*}
d_C(y, z) &\le d_C(y, a') + d_C(a', b') + d_C(b', z) \\
&\le D + 4D + D = 6D.
\end{align*}
$$
This gives a bound on the length of the segment of $w$ connecting $y$ and $z$, since $w$ is a quasigeodesic:
$$ \text{length}_w(y,z) \leq (k(g,p) + R + 6) \cdot 6D. $$
Let $\beta$ be the arc obtained by concatenating the following 5 arcs: the arc along $h$ from $a$ to $a'$, the arc connecting $a'$ to $y$, the arc along $w$ from $y$ to $z$, the arc connecting $z$ to $b'$, and the arc along $h$ from $b'$ to $b$ (see Figure 5).
It follows that
$$ \mathrm{length}(\beta) \leq 4D + (k(g,p) + R + 6)D. $$
Now we use the divergence function $f$ for $C_1$ to bound the length of $\beta$ from below. Indeed, for sufficiently large $D$, we have
$$ \mathrm{length}(\beta) \geq f(D-c), $$
FIGURE 5. The length of the path $\beta$ (the dotted path) is bounded above by $4D + (k(g,p) + R + 6)D$.
where $c$ is a constant related to $f(0)$, and which does not affect the growth rate of the function $f$. Therefore,
$$f(D-c) \leq 4D + (k(g,p) + R + 6)D.$$
Therefore, if $D > k(g,p) + R + 6$, $\omega$ cannot be arbitrarily large because $f(x)$ eventually dominates $x^2$. This completes the proof of the lemma. $\square$
**Remark 6.2.** We note that the conclusion of Lemma 6.1 is not at all sharp; indeed, the same argument would have shown that $W$ is eventually smaller than $(k(g,p) + R + 6)^\lambda$ for any $\lambda \in (0, 1)$. However, we do not concern ourselves with this because the contribution to the quasiconvexity of nested sequences coming from $W$ will be dominated by a larger term, as will be seen below.
We have now shown that the collection of vertices of the sequence $(\tau_{ij})_j$ is quasiconvex with quasiconvexity constant $k(g,p) + R + 6$. It remains to analyze the vertex cycles of tracks that are not in this subsequence. If $v$ is such a vertex and $\omega$ is sufficiently large, we know that $v$ is within $k(g,p)+6$ from some vertex of one of the $\tau_{ij}$'s. In any $\delta$-hyperbolic space, geodesics with nearby end points fellow travel, in that they remain within a bounded neighborhood of one another, whose diameter depends only on $\delta$ and the distance between endpoints.
Indeed, if $h$ is any geodesic segment connecting arbitrary vertices $v_1$ and $v_2$, $h$ must remain within $2\delta + k(g,p) + 6 \leq 40 + k(g,p)$ of some geodesic connecting vertices of the $\tau_{ij}$.
Therefore, the collection of all vertices of the sequence $(\tau_i)_{i \in I_1}$ is a $(46+R+2k(g,p))$-quasiconvex subset of $C_1$. This completes the proof of Theorem 1.1. $\square$
*Proof of Corollary 1.3.* Masur and Minsky [13] complete their argument showing the quasiconvexity of $D(g) \subset C_1(S_g)$ by noting that any two disks in $D(g)$ can be connected by a path in $D(g)$ representing a *well-nested curve replacement sequence*, a certain kind of nested train track sequence with $R$-bounded steps for which one can take $R$ to be 15.
Thus, we see that $D(g)$ is $(61 + 2k(g, 0))$-quasiconvex, and this completes the proof of Corollary 1.3. $\square$
## 6.1. PROOF OF THEOREM 1.2.
The purpose of this subsection is to prove Theorem 1.2, which states that the splitting and sliding sequences project to $O(\omega^2)$-unparameterized quasigeodesics in the curve graph of any essential subsurface $Y \subseteq S$. To do this, we simply follow the original argument of [14], effectivizing along the way.
We first introduce some terminology. Given a subsurface $Y$, as in section 2, let $S^Y$ denote the (non-compact) covering space of $S$ corresponding to $Y$. Then, if $\tau$ is a train track on $S$, let $\tau^Y$ denote the pre-image under the covering projection of $\tau$ to $S^Y$. Then let $C(\tau^Y)$ and $\mathcal{A}C(\tau^Y)$ denote the collection of essential, non-peripheral, simple closed curves (curves and arcs, respectively) in the Gromov compactification of $S^Y$ whose interiors are train paths on $\tau^Y$. Let $V(\tau)$ denote the collection of vertex cycles of a track $\tau$.
Then, if $Y$ is not an annulus, define the *induced track*, denoted $\tau|_Y$, to be the union of branches of $\tau^Y$ traversed by some element of $C(\tau^Y)$.
*Proof of Theorem 1.2.* We first note that any splitting and sliding sequence $(\tau_i)_i$ is a nested train track sequence with $Z$-bounded steps, for $Z$ some uniform constant. Indeed, if $\tau_i$ is obtained from $\tau_{i-1}$ by either a splitting or a sliding, any vertex cycle of $\tau_i$ may intersect a vertex cycle of $\tau_{i-1}$ at most 6 times over any branch of $\tau_{i-1}$. Thus, there is some linear function $f: \mathbb{N} \to \mathbb{N}$ such that $i(v_i, v_{i-1}) < f(\omega(g,p))$ whenever $(\tau_i)_i$ is a sliding and splitting sequence on $S_{g,p}$ and $v_i$, $v_{i-1}$ are vertex cycles of $\tau_i$, $\tau_{i-1}$, respectively; therefore, as a consequence of Theorem 3.2, for sufficiently large $\omega$,
$$d_C(v_i, v_{i-1}) < 4.$$
To show that $(\psi_Y(\tau_i))_i$ is an $O(\omega^2)$-unparameterized quasigeodesic in $C(Y)$, we will exhibit a splitting and sliding sequence $(\sigma_i)_i$ on $Y$ such that $d_C(\tau_i, \sigma_i) = O(1)$. Then we will be done by applying Theorem 1.1 to the sequence $(\sigma_i)$.
Given a vertex cycle $\alpha$ of $\tau_j|_Y$, define $\sigma_j \subset \tau_j|_Y$ to be the minimal track carrying $\alpha$; thus, $\sigma_j$ is recurrent by construction, and Masur, Mosher, and Schleimer [14] show $\sigma_j$ to be transversely recurrent as well.
Furthermore, they show that $\sigma_{j+1}$ is obtained from $\sigma_j$ by a slide or a split so long as $\sigma_j \neq \sigma_{j+1}$. Therefore, $(\sigma_i)_i$ constitutes a sliding and splitting sequence of birecurrent train tracks and thus is a nested train track sequence on $Y$ with $Z$-bounded steps.
Since $\sigma_j$ is a subtrack of $\tau_j|_Y$, by Lemma 5.3, any vertex cycle of $\sigma_j$ is a vertex cycle of $\tau_j|_Y$, and therefore the diameter of $V(\tau_j|_Y) \cup V(\sigma_j)$ is no more than 6 for sufficiently large $\omega$.
Since $\alpha$ is carried by $\tau_j|_Y$, it is also carried by $\tau_j$. Masur, Mosher, and Schleimer [14] then make use of a lemma which implies the existence of a vertex cycle $\beta_j$ of $\tau_j$ which intersects the subsurface $Y$ essentially. By [14, Lemma 2.8 and Lemma 5.4],
$$i(\pi_Y(\beta_j), v_j) < 8|\mathcal{B}(\tau_j)|,$$
and therefore, by Lemma 4.6 and Theorem 3.2, for $\omega$ sufficiently large,
$$d_C(\pi_Y(\beta_j), v_j) < 4.$$
This same argument applies to any vertex cycle of $\tau_j$ which projects non-trivially to $Y$, and thus we conclude that
$$d_Y(\sigma_j, \tau_j) \le d_Y(\sigma_j, \tau_j|_Y) + d_Y(\tau_j|_Y, \tau_j) < 6 + 4 = 10,$$
for all $\omega$ sufficiently large. $\square$
**Acknowledgments.** The author would primarily like to thank his adviser, Yair Minsky, for his guidance and for many helpful suggestions. He would also like to thank Ian Biringer, Catherine Pfaff, Saul Schleimer, and Harold Sultan for their time and for the many motivating conversations they’ve had with the author regarding this work. Finally, the author thanks the referee for several helpful comments.
REFERENCES
[1] Aaron Abrams and Saul Schleimer, *Distances of Heegaard splittings*, Geom. Topol. **9** (2005), 95–119 (electronic).
[2] Tarik Aougab, *Uniform hyperbolicity of the graphs of curves*, arXiv:1212.3160 [math.GT]. Available at http://arxiv.org/pdf/1212.3160.pdf.
[3] Brian H. Bowditch, *Uniform hyperbolicity of the curve graphs*. Available at http://homepages.warwick.ac.uk/masgak/papers/uniformhyp.pdf.
[4] Matt Clay, Kasra Rafi, and Saul Schleimer, *Uniform hyperbolicity of the curve graph via surgery sequences*, arXiv:1302.5519 [math.GT]. Available at http://arxiv.org/pdf/1302.5519.pdf.
[5] Benson Farb and Dan Margalit, *A Primer on Mapping Class Groups*. Princeton Mathematical Series, 49. Princeton, NJ: Princeton University Press, 2012.
[6] Samuel Fiorini, Gwenaël Joret, Dirk Oliver Theis, and David R. Wood, *Small minors in dense graphs*, European J. Combin. **33** (2012), no. 6, 1226–1245. arXiv:1005.0895 [math.CO]. Available at http://arxiv.org/pdf/1005.0895.pdf.
[7] John Hempel, *3-manifolds as viewed from the curve complex*, Topology **40** (2001), no. 3, 631–657.
[8] Ursula Hamenstädt, *Geometry of the complex of curves and of Teichmüller space* in Handbook of Teichmüller Theory. Vol. I. Ed. Athanase Papadopoulos. IRMA Lectures in Mathematics and Theoretical Physics, 11. Zürich: Eur. Math. Soc., 2007. 447–467.
[9] Sebastian Hensel, Piotr Przytycki, and Richard C. H. Webb, *Slim unicorns and uniform hyperbolicity for arc graphs and curve graphs*, arXiv:1301.5577 [math.GT]. Available at http://arxiv.org/pdf/1301.5577.pdf.
[10] Steven P. Kerckhoff, *The measure of the limit set of the handlebody group*, Topology **29** (1990), no. 1, 27–40.
[11] Howard A. Masur and Yair N. Minsky, *Geometry of the complex of curves. I. Hyperbolicity*, Invent. Math. **138** (1999), no. 1, 103–149.
[12] ————, *Geometry of the complex of curves. II. Hierarchical structure*, Geom. Funct. Anal. **10** (2000), no. 4, 902–974.
[13] ————, *Quasiconvexity in the curve complex* in In the Tradition of Ahlfors and Bers, III. Ed. William Abikoff and Andrew Haas. Contemporary Mathematics, 355. Providence, RI: Amer. Math. Soc., 2004. 309–320.
[14] Howard Masur, Lee Mosher, and Saul Schleimer, *On train-track splitting sequences*, Duke Math. J. **161** (2012), no. 9, 1613–1656.
[15] Howard Masur and Saul Schleimer, *The geometry of the disk complex*, J. Amer. Math. Soc. **26** (2013), no. 1, 1–62.
[16] Lee Mosher, *Train track expansions of measured foliations*, 2003. Available at http://andromeda.rutgers.edu/sinmosher/arationality031228.pdf.
[17] Ken'ichi Ohshika, *Discrete Groups*. Translated from the 1998 Japanese original by the author. Translations of Mathematical Monographs, 207. Iwanami Series in Modern Mathematics. Providence, RI: American Mathematical Society, 2002.
[18] R. C. Penner and J. L. Harer, *Combinatorics of Train Tracks*. Annals of Mathematics Studies, 125. Princeton, NJ: Princeton University Press, 1992.
[19] William P. Thurston, *On the geometry and dynamics of diffeomorphisms of surfaces*, Bull. Amer. Math. Soc. (N.S.) **19** (1988), no. 2, 417–431.
DEPARTMENT OF MATHEMATICS; YALE UNIVERSITY; 10 HILLHOUSE AVENUE; NEW HAVEN, CT 06510 USA
E-mail address: tarik.aougab@yale.edu
# Spectral theory and operator ergodic theory on super-reflexive Banach spaces
by
EARL BERKSON (Urbana, IL)
**Abstract.** On reflexive spaces trigonometrically well-bounded operators have an operator-ergodic-theory characterization as the invertible operators *U* such that
$$ (*) \quad \sup_{n \in \mathbb{N},\, z \in \mathbb{T}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) k^{-1} z^k U^k \right\| < \infty. $$
Trigonometrically well-bounded operators permeate many settings of modern analysis, and this note highlights the advances in both their spectral theory and operator ergodic theory made possible by a recent rekindling of interest in the R. C. James inequalities for super-reflexive spaces. When the James inequalities are combined with Young-Stieltjes integration for the spaces $V_p(\mathbb{T})$ of functions having bounded $p$-variation, it transpires that every trigonometrically well-bounded operator on a super-reflexive space $X$ has a norm-continuous $V_p(\mathbb{T})$-functional calculus for a range of values of $p > 1$, and we investigate the ways this outcome logically simplifies and simultaneously expands the structure theory, Fourier analysis, and operator ergodic theory of trigonometrically well-bounded operators on $X$. In particular, on a super-reflexive space $X$ (but not on a general reflexive space) a theorem of Tauberian type holds: the (C, 1) averages in (*) corresponding to a trigonometrically well-bounded operator $U$ can be replaced by the set of all the rotated ergodic Hilbert averages of $U$, which, in fact, is a precompact set relative to the strong operator topology. This circle of ideas is facilitated by the development of a convergence theorem for nets of spectral integrals of $V_p(\mathbb{T})$-functions. In the Hilbert space setting we apply the foregoing to the operator-weighted shifts which are known to provide a universal model for trigonometrically well-bounded operators on Hilbert space.
## 1. Introduction and notation.
The set of positive integers, the set of all integers, the real line, and the complex plane will be denoted by $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$, respectively. The unit circle $\{z \in \mathbb{C} : |z| = 1\}$ will be designated by $\mathbb{T}$. The symbol “K” with a (possibly empty) set of subscripts will be used to denote a constant which depends only on its subscripts, and which can change in value from one occurrence to another. Except where otherwise indicated, the convergence of a bilateral series $\sum_{k=-\infty}^{\infty} a_k$ will mean the convergence of its sequence of bilateral partial sums $\{\sum_{k=-n}^{n} a_k\}_{n=1}^{\infty}$. Throughout all that follows, $\mathcal{X}$ will be an arbitrary Banach space, and we shall symbolize by $\mathfrak{B}(\mathcal{X})$ the Banach algebra of all continuous linear operators mapping $\mathcal{X}$ into $\mathcal{X}$, the identity operator on $\mathcal{X}$ being denoted by $I$. A trigonometric polynomial will be a linear combination of a finite subset of the functions $\epsilon_n(z) \equiv z^n \ (z \in \mathbb{T}, n \in \mathbb{Z})$. Given a trigonometric polynomial $Q(z) \equiv \sum_n a_n z^n$ and an invertible $T \in \mathfrak{B}(\mathcal{X})$, we shall denote by $Q(T)$ the operator $\sum_n a_n T^n$.

2010 Mathematics Subject Classification: Primary 26A45, 46B20, 47A35, 47B40.

Key words and phrases: ergodic Hilbert transform, super-reflexive Banach space, spectral decomposition, p-variation, trigonometrically well-bounded operator.
Deferring the precise details from spectral theory to §2, we use this introductory section to fix some notation and to outline our considerations, beginning with the abstract notions of spectral decomposability and spectral integration. An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-bounded ([5]) provided that $U$ has a “unitary-like” spectral representation
$$
(1.1) \qquad U = \int_{0-}^{2\pi} e^{it}\, dE(t),
$$
where $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ is a bounded idempotent-valued function possessing certain additional properties reminiscent of, but weaker than, those that would be inherited from a countably additive Borel spectral measure in $\mathbb{R}$, and where the integral in (1.1) is a Riemann–Stieltjes integral existing in the strong operator topology. After suitable normalization, the idempotent-valued function $E(\cdot)$ in (1.1) is uniquely determined, and is called the *spectral decomposition* of $U$. The spectral decomposition $E(\cdot)$ gives rise to a notion of Riemann–Stieltjes *spectral integration* against the integrator $E(\cdot)$. Spectral integration with respect to $E(\cdot)$ provides the trigonometrically well-bounded operator $U$ with a norm-continuous functional calculus implemented by $BV(\mathbb{T})$, the Banach algebra of all complex-valued functions $\psi$ on $\mathbb{T}$ having bounded variation and furnished with the $BV([0, 2\pi])$-norm of the corresponding function $\psi^\dagger(\cdot) \equiv \psi(e^{i(\cdot)})$.
Trigonometrically well-bounded operators abound in the structures of modern analysis that require weakened forms of orthogonality to treat delicate convergence phenomena beyond the reach of the unconditional convergence associated with spectral measures. For a variety of naturally occurring examples of trigonometrically well-bounded operators, see, e.g., [8], §4 of [10], and [20]. In particular, if $\mathcal{X}$ is a UMD space, then any invertible $U \in \mathfrak{B}(\mathcal{X})$ such that $U$ is power-bounded (that is, $\sup_{n \in \mathbb{Z}} \|U^n\| < \infty$) is trigonometrically well-bounded. For some applications of trigonometrically well-bounded operators to operator ergodic theory and transference methods, see [3], [13], [14], [15], [17], and [18].
Our starting point for this article is the following operator-ergodic-theory characterization of trigonometrically well-bounded operators on an arbitrary reflexive Banach space $\mathcal{X}_0$ (see the equivalence of conditions (i) and (ii) of Theorem (2.4) in [6]).
**PROPOSITION 1.1.** Let $\mathcal{X}_0$ be a reflexive Banach space, and let $U \in \mathfrak{B}(\mathcal{X}_0)$ be an invertible operator. Then $U$ is trigonometrically well-bounded if and only if
$$
(1.2) \quad \sup \left\{ \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{z^k}{k} U^k \right\| : n \in \mathbb{N},\ z \in \mathbb{T} \right\} < \infty.
$$
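As a concrete (and purely illustrative) sanity check, condition (1.2) can be evaluated numerically for a unitary matrix on $\mathbb{C}^2$, which is trivially trigonometrically well-bounded; the function name and the sampling grid below are our own choices, not notation from the paper.

```python
import numpy as np

# Illustrative sketch: for a unitary U on C^2 (trigonometrically well-bounded),
# the (C,1) averages appearing in (1.2) stay uniformly bounded in n and z.
U = np.diag(np.exp(1j * np.array([0.7, 2.1])))   # a diagonal unitary
Uinv = np.conj(U.T)                               # U^{-1} = U* for unitary U

def cesaro_average(U, Uinv, z, n):
    """Sum over 0 < |k| <= n of (1 - |k|/(n+1)) * z^k * U^k / k."""
    total = np.zeros_like(U)
    Up = np.eye(2, dtype=complex)
    Un = np.eye(2, dtype=complex)
    for k in range(1, n + 1):
        Up, Un = Up @ U, Un @ Uinv               # U^k and U^{-k}
        w = 1.0 - k / (n + 1)                    # the (C,1) weight
        total += w * (z**k * Up - z**(-k) * Un) / k
    return total

sup_norm = max(
    np.linalg.norm(cesaro_average(U, Uinv, np.exp(1j * t), n), 2)
    for n in (1, 10, 100)
    for t in np.linspace(0.0, 2 * np.pi, 16)
)
print(sup_norm)  # remains bounded (well under 4) instead of growing with n
```

For an invertible operator that fails (1.2), the computed supremum would instead grow without bound as $n$ increases.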
This article features results in both spectral theory and operator ergodic theory made possible by a recent renewal of interest in the consequences of R. C. James' inequalities for super-reflexive Banach spaces. (For these inequalities, see [30]; for the basic notions and fundamental features of super-reflexive spaces, see [31] as well as the celebrated result of P. Enflo in [26], which characterizes super-reflexivity as the property of having an equivalent uniformly convex norm.) When the James inequalities from [30] are combined with Young's inequalities in [40] for the spaces of functions having bounded $p$-variation on the circle (the $V_p(\mathbb{T})$ spaces), $1 < p < \infty$, it transpires that for every trigonometrically well-bounded operator on a super-reflexive Banach space, spectral integration against its spectral decomposition extends its BV$(\mathbb{T})$-functional calculus to a norm-continuous $V_p(\mathbb{T})$-functional calculus, for a suitable range of values of $p > 1$ (Theorem 3.7 below). One indicator of the scope of this extension is that, in contrast to BV$(\mathbb{T})$, every class $V_p(\mathbb{T})$ contains a continuous, nowhere differentiable function of Hardy-Weierstrass type (see Remark 2.8(ii) below).
The spectral integration of function classes of “higher variation” was initiated in [11], but heretofore has been confined to integrating against the spectral decompositions of: invertible power-bounded operators on classical UMD spaces [19], or invertible operators that are separation-preserving and modulus mean-bounded on reflexive Lebesgue spaces of sigma-finite measures [18]. Consequently, the results below ensuring spectral integration of $V_p(\mathbb{T})$ in the wide setting of super-reflexive spaces markedly expand the scope of spectral integration. Since functions of higher variation act as Fourier multipliers in classical unweighted settings as well as in classical weighted settings (see, e.g., Theorem 8 of [18], Théorème 1 and Lemme 3 of [24]), the spectral integration of the spaces $V_p(\mathbb{T})$ provided by Theorem 3.7 below can be viewed as a mechanism for the transference to super-reflexive spaces of a wide family of classical Fourier multipliers, with ramifications for the Fourier analysis of operators. In this regard let us recall that in various contexts where the left bilateral shift is a trigonometrically well-bounded operator (with spectral decomposition $\mathcal{E}(\cdot)$, say) on a sequence space, any bounded complex-valued function $f$ which is continuous a.e. on the circle, and
such that the spectral integral $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ exists, will act as a Fourier multiplier for the given sequence space, with $\int_{[0,2\pi]} f(e^{it}) d\mathcal{E}(t)$ serving as the multiplier transform of $f$ (p. 16 of [9], Scholium (5.13) of [10], Theorem 4.3 of [16]). Theorem 5.5 below illustrates this point with a new application.
By drawing on §3, the treatment in §4 furnishes a number of pleasant consequences for the operator ergodic theory of trigonometrically well-bounded operators that logically simplifies and expands their machinery in the super-reflexive space setting. In particular, if $U$ is a trigonometrically well-bounded operator on a super-reflexive space $X$, then a Tauberian-type theorem holds (Theorem 4.3 below). Specifically, the $(C, 1)$ averages appearing in the uniform boundedness condition (1.2) can be replaced by the rotated ergodic Hilbert averages of $U$:
$$ (1.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\}. $$
In fact, the set $\tilde{\mathcal{W}}$ is precompact relative to $\sigma_X$, the strong operator topology of $\mathfrak{B}(X)$. In the general reflexive space setting, this norm-boundedness of $\tilde{\mathcal{W}}$ need not hold for a trigonometrically well-bounded operator $U$ (see Remark 2.5 below). However, thanks to Hardy's Tauberian Theorem (see, e.g., Theorem II.2.2 in [32]), in the general Banach space setting the set $\tilde{\mathcal{W}}$ corresponding to a power-bounded trigonometrically well-bounded operator is norm-bounded (Theorem (3.21) of [7]). So the streamlining effect of Theorem 4.3 below is that for boundedness of $\tilde{\mathcal{W}}$, the hypothesis of power-boundedness can be dropped provided the underlying Banach space is super-reflexive. In the realm of Fourier analysis of operators on super-reflexive spaces, this streamlining effect is illustrated below by the strong convergence of the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $BV(\mathbb{T})$-functions (Theorem 4.4). (In this setting, it is further shown that the operator-valued "Fourier series" associated with a trigonometrically well-bounded operator $U$ and $V_p(\mathbb{T})$-functions converge $(C, 1)$ in the strong operator topology (Theorem 4.5 below).) The foregoing circle of ideas is facilitated by the development of a suitable convergence theorem for the spectral integrals of $V_p(\mathbb{T})$-functions (Theorem 3.9 below).
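To make the Tauberian-type improvement tangible, the following hypothetical sketch evaluates the unweighted rotated ergodic Hilbert averages of (1.3) for a unitary operator on the super-reflexive (indeed Hilbert) space $\mathbb{C}^2$; the grid of $n$ and $z$ values is an illustrative choice.

```python
import numpy as np

# Illustrative sketch: the unweighted rotated ergodic Hilbert averages in (1.3)
# for a unitary U on C^2 stay norm-bounded uniformly in n and z, with no (C,1)
# smoothing needed, matching the conclusion available on super-reflexive spaces.
U = np.diag(np.exp(1j * np.array([0.3, 1.9])))

def hilbert_average(U, z, n):
    """Sum over 0 < |k| <= n of z^k U^k / k."""
    total = np.zeros_like(U)
    Up = np.eye(2, dtype=complex)
    for k in range(1, n + 1):
        Up = Up @ U
        Un = np.conj(Up.T)            # U is unitary, so U^{-k} = (U^k)^*
        total += (z**k * Up - z**(-k) * Un) / k
    return total

bound = max(
    np.linalg.norm(hilbert_average(U, np.exp(1j * t), n), 2)
    for n in (1, 10, 100, 1000)
    for t in np.linspace(0.0, 2 * np.pi, 16)
)
print(bound)  # stays bounded uniformly in n and z
```

For a non-power-bounded invertible operator on a merely reflexive space, no such uniform bound on $\tilde{\mathcal{W}}$ is guaranteed, which is exactly the gap the super-reflexive hypothesis closes.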
Since, when taken as a whole, the foregoing results can fail to hold in the general reflexive space setting, it is a pleasant surprise to find them valid throughout the broad context furnished by super-reflexive spaces, which include the UMD spaces ([1], [34]) properly ([22], [35]). In §5, we confine attention to the Hilbert space context by taking up some applications of the foregoing to operator-weighted shifts, which have been shown in [16] to furnish a universal model for estimates regarding trigonometrically well-bounded operators on Hilbert space.
In the course of the exchanges during the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis (July 25–31, 2004), Nigel Kalton offered the seminal suggestion that the James inequalities for super-reflexive spaces ([30]) might prove to be a useful tool for advances in spectral integration. The author wishes to thank Nigel Kalton for subsequently informing him of this perceptive viewpoint, which forms the basis for the developments below. On the heels of the Oberwolfach Workshop on Spectral Theory in Banach Spaces and Harmonic Analysis, work aimed in the direction of Kalton’s suggestion was carried out in a doctoral dissertation at the University of Edinburgh [21]. This thesis work and the present article spiritually overlap each other in two places, and this state of affairs will be described below in Remark 3.8, where we discuss the anatomy of the present article’s methods.
## 2. Background items.

In this section, we recall requisite notions, starting with the basic machinery of spectral families and their associated spectral integration.
**DEFINITION 2.1.** A *spectral family* in a Banach space $\mathcal{X}$ is an idempotent-valued function $E(\cdot) : \mathbb{R} \to \mathfrak{B}(\mathcal{X})$ with the following properties:
(i) $E(\lambda)E(\tau) = E(\tau)E(\lambda) = E(\lambda)$ if $\lambda \le \tau$;
(ii) $\|E\|_u = \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\} < \infty$;
(iii) with respect to the strong operator topology, $E(\cdot)$ is right continuous and has a left-hand limit $E(\lambda^{-})$ at each point $\lambda \in \mathbb{R}$;
(iv) $E(\lambda) \to I$ as $\lambda \to \infty$ and $E(\lambda) \to 0$ as $\lambda \to -\infty$, each limit being with respect to the strong operator topology.
If, in addition, there exist $a, b \in \mathbb{R}$ with $a \le b$ such that $E(\lambda) = 0$ for $\lambda < a$ and $E(\lambda) = I$ for $\lambda \ge b$, then $E(\cdot)$ is said to be *concentrated on* $[a, b]$.
Given a spectral family $E(\cdot)$ in the Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$, an associated theory of spectral integration can be developed as follows. For each bounded function $\psi : J \to \mathbb{C}$ and each partition $\mathcal{P} = (\lambda_0, \lambda_1, \dots, \lambda_n)$ of $J$, where we take $\lambda_0 = a$ and $\lambda_n = b$, set
$$ (2.1) \qquad S(\mathcal{P}; \psi, E) = \sum_{k=1}^{n} \psi(\lambda_k) \{E(\lambda_k) - E(\lambda_{k-1})\}. $$
If the net $\{S(\mathcal{P}; \psi, E)\}$ converges in the strong operator topology of $\mathfrak{B}(\mathcal{X})$ as $\mathcal{P}$ runs through the set of partitions of $J$ directed to increase by refinement, then the strong limit is called the *spectral integral* of $\psi$ with respect to $E(\cdot)$, and is denoted by $\int_J \psi(\lambda) dE(\lambda)$ or, more briefly, by $\int_J \psi dE$.
In this case, we define $\int_J^\oplus \psi(\lambda) dE(\lambda)$ by writing
$$\int_J^\oplus \psi(\lambda) dE(\lambda) = \psi(a)E(a) + \int_J \psi(\lambda) dE(\lambda),$$
and so $\int_J^\oplus \psi(\lambda) dE(\lambda)$ is the limit in the strong operator topology of the sums
$$ (2.2) \quad \tilde{S}(\mathcal{P}; \psi, E) = \psi(a)E(a) + \sum_{k=1}^n \psi(\lambda_k)\{E(\lambda_k) - E(\lambda_{k-1})\}. $$
It can be shown that the spectral integral $\int_J \psi(\lambda) dE(\lambda)$ exists for each $\psi \in \text{BV}(J)$, and that the mapping
$$ (2.3) \qquad \psi \mapsto \int_J^\oplus \psi(\lambda) dE(\lambda) $$
is an identity-preserving algebra homomorphism of $BV(J)$ into $\mathfrak{B}(\mathcal{X})$ satisfying
$$ (2.4) \qquad \left\| \int_J^\oplus \psi(t) dE(t) \right\| \le \|\psi\|_{\text{BV}(J)} \sup\{\|E(\lambda)\| : \lambda \in \mathbb{R}\}, $$
where $\|\cdot\|_{\text{BV}(J)}$ denotes the usual Banach algebra norm expressed by
$$ \|\psi\|_{\text{BV}(J)} = \sup_{x \in J} |\psi(x)| + \text{var}(\psi, J). $$
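To see the sums (2.1)-(2.4) in action, here is a toy finite-dimensional sketch (our own illustration, not the paper's general construction): for a diagonal unitary, the family that accumulates eigenprojections is a spectral family concentrated on $[0, 2\pi]$, and refining the partition in (2.1) with $\psi(\lambda) = e^{i\lambda}$ recovers $U$ as in (1.1).

```python
import numpy as np

# Toy illustration: for U = diag(e^{i t_j}), E(lam) = sum of the eigenprojections
# with t_j <= lam is an idempotent-valued, increasing, right-continuous family,
# and the Riemann-Stieltjes sums S(P; psi, E) of (2.1) with psi(lam) = e^{i lam}
# converge to U as the partition P is refined.
angles = np.array([0.9, 2.0, 4.5])               # eigenvalue arguments in (0, 2*pi)
U = np.diag(np.exp(1j * angles))

def E(lam):
    return np.diag((angles <= lam).astype(complex))

def spectral_sum(psi, partition):
    """S(P; psi, E) = sum_k psi(lam_k) (E(lam_k) - E(lam_{k-1}))."""
    S = np.zeros_like(U)
    for a, b in zip(partition[:-1], partition[1:]):
        S += psi(b) * (E(b) - E(a))
    return S

P = np.linspace(0.0, 2 * np.pi, 20001)           # a fine partition of [0, 2*pi]
approx = spectral_sum(lambda lam: np.exp(1j * lam), P)
err = np.linalg.norm(approx - U, 2)
print(err)  # shrinks with the mesh of the partition
```

Since the projections in this model are orthogonal, $\|E\|_u = 1$, so the estimate (2.4) reduces here to $\|\int_J^\oplus \psi\, dE\| \le \|\psi\|_{\text{BV}(J)}$.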
In this connection, we recall here a key oscillation notion for the spectral family $E(\cdot)$ in the arbitrary Banach space $\mathcal{X}$ concentrated on a compact interval $J = [a, b]$. For each $x \in \mathcal{X}$, and each partition of $[a, b]$, $\mathcal{P} = (a = a_0 < a_1 < \dots < a_N = b)$, we put
$$ \omega(\mathcal{P}, E, x) = \max_{1 \le j \le N} \sup \{\|E(t)x - E(a_{j-1})x\| : a_{j-1} \le t < a_j\}. $$
Now, as $\mathcal{P}$ increases through the set of all partitions of $[a, b]$ directed to increase by refinement, we have (see Lemma 4 of [38])
$$ (2.5) \qquad \lim_{\mathcal{P}} \omega(\mathcal{P}, E, x) = 0. $$
In the setting of the arbitrary Banach space $\mathcal{X}$, one can establish with the aid of (2.5) the following “workhorse” convergence theorem for spectral integrals of $BV(J)$-functions taken with respect to $E(\cdot)$. In the setting of super-reflexive spaces, Theorems 3.9 and 3.11 below show that this convergence theorem has counterparts for functions of higher variation.
**THEOREM 2.2.** Let $\{\psi_\alpha\}_{\alpha \in \mathcal{A}}$ be a net in $BV(J)$, and let $\psi$ be a complex-valued function on $J$ such that
(i) $\sup_{\alpha \in \mathcal{A}} \text{var}(\psi_\alpha, J) < \infty$,
(ii) $\psi_\alpha \to \psi$ pointwise on $J$.
Then $\psi \in \text{BV}(J)$, and $\{\int_J^\oplus \psi_\alpha dE\}_{\alpha \in \mathcal{A}}$ converges to $\int_J^\oplus \psi dE$ in the strong operator topology.
The foregoing basic theory of spectral integration was developed in [38]. We refer the reader to §2 of [7] for a simplified account using the above notation. We shall also consider in connection with the above matters the Banach algebra $\text{BV}(\mathbb{T})$, which consists of all $\psi : \mathbb{T} \to \mathbb{C}$ such that the function $\psi^\dagger(t) = \psi(e^{it})$ belongs to $\text{BV}([0, 2\pi])$, furnished with the norm $\|\psi\|_{\text{BV}(\mathbb{T})} = \|\psi^\dagger\|_{\text{BV}([0, 2\pi])}$. The following notation will come in handy—particularly whenever Fejér's Theorem is invoked. Given any function $f : \mathbb{R} \to \mathbb{C}$ which has a right-hand limit and a left-hand limit at each point of $\mathbb{R}$, we shall denote by $f^\# : \mathbb{R} \to \mathbb{C}$ the function defined for every $t \in \mathbb{R}$ by putting
$$f^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} f(s) + \lim_{s \to t^-} f(s) \right\}.$$
In the case of a function $\phi : \mathbb{T} \to \mathbb{C}$ such that $\phi(e^{i\cdot}) : \mathbb{R} \to \mathbb{C}$ has everywhere a right-hand limit and a left-hand limit, we shall, by a slight abuse of notation, write
$$ (2.6) \qquad \phi^{\#}(t) = \frac{1}{2} \left\{ \lim_{s \to t^+} \phi(e^{is}) + \lim_{s \to t^-} \phi(e^{is}) \right\} \quad \text{for all } t \in \mathbb{R}. $$
In particular, for each $\phi \in \text{BV}(\mathbb{T})$, it is clear that we may regard the $(2\pi)$-periodic function $\phi^\#$ as an element of $\text{BV}(\mathbb{T})$. (In general, when there is no danger of confusion, we shall, as convenient, tacitly indulge in the conventional practice of identifying a function $\Psi$ defined on $\mathbb{T}$ with its $(2\pi)$-periodic counterpart $\Psi(e^{i\cdot})$ defined on $\mathbb{R}$.)
**DEFINITION 2.3.** An operator $U \in \mathfrak{B}(\mathcal{X})$ is said to be trigonometrically well-bounded if there is a spectral family $E(\cdot)$ in $\mathcal{X}$ concentrated on $[0, 2\pi]$ such that $U = \int_{[0,2\pi]} e^{i\lambda} dE(\lambda)$. In this case, it is possible to arrange that $E((2\pi)^{-}) = I$, and with this additional property the spectral family $E(\cdot)$ is uniquely determined by $U$, and is called the *spectral decomposition* of $U$.
**REMARK 2.4.** The above discussion regarding (2.3) and (2.4) shows that a trigonometrically well-bounded operator on a Banach space has a norm-continuous $\text{BV}(\mathbb{T})$-functional calculus. In the setting of super-reflexive spaces, Theorem 3.7 below will extend this $\text{BV}(\mathbb{T})$-functional calculus to a norm-continuous functional calculus based on functions of appropriately higher variation.
After the development in [4] of an intimately related precursor class (the “well-bounded operators of type (B)”), the class of trigonometrically well-bounded operators was introduced in [5], and its fundamental structural theory further developed in [6]. In the general Banach space setting (resp., in the reflexive space setting described in Proposition 1.1), trigonometrically well-bounded operators can be characterized by the precompactness in the weak operator topology (resp., the uniform boundedness) of the (C, 1) means of their full set of rotated discrete ergodic Hilbert averages. (For the general Banach space case, see Theorem 5.2 of [14].) In order to discuss this recurring theme, it will be convenient to establish a notation for the sequence of trigonometric polynomials underlying it via spectral integration—specifically, for each $n \in \mathbb{N}$ and each $z \in \mathbb{T}$, we write

$$
(2.7) \qquad \mathfrak{s}_n(z) = \sum_{0 < |k| \le n} \frac{z^k}{k}
$$

(thus, $\{\mathfrak{s}_n\}_{n=1}^{\infty}$ is the sequence of partial sums for the Fourier series of $\phi_0 \in \mathrm{BV}(\mathbb{T})$ defined by $\phi_0(1) = 0$ and $\phi_0(e^{it}) = i(\pi - t)$ for $0 < t < 2\pi$). The fact that $\mathrm{var}(\mathfrak{s}_n, \mathbb{T}) \to \infty$ as $n \to \infty$ is a well-known consequence of the properties of the Lebesgue constants (see, e.g., (3.9) of [14]), and renders (2.4) incapable of bounding the sequence $\{\|\mathfrak{s}_n(T)\|\}_{n=1}^{\infty}$ in the case of an arbitrary trigonometrically well-bounded operator on an arbitrary Banach space $\mathcal{X}$. The following remark shows that there is no way around this, even in the setting of a general reflexive Banach space, and this fact serves to underscore the aforementioned felicitous properties which Theorem 4.3 confers on the set $\tilde{\mathcal{W}}$ in (1.3) when the underlying Banach space is super-reflexive.
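
The unboundedness of $\mathrm{var}(\mathfrak{s}_n, \mathbb{T})$ is easy to observe numerically. The following Python sketch (our illustration, not part of the original argument; the helper names are ours) lower-bounds the variation of $\mathfrak{s}_n$ by sampling on a fine grid of $[0, 2\pi]$; the resulting values increase with $n$:

```python
import math

def s_n(n, t):
    # s_n(e^{it}) = sum over 0 < |k| <= n of e^{ikt}/k = 2i * sum_{k=1}^{n} sin(kt)/k
    return 2j * sum(math.sin(k * t) / k for k in range(1, n + 1))

def grid_variation(f, n_pts=8001):
    # Lower bound for var(f, T): sum of |f(t_{j+1}) - f(t_j)| over a uniform grid.
    ts = [2.0 * math.pi * j / (n_pts - 1) for j in range(n_pts)]
    vals = [f(t) for t in ts]
    return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

growth = [grid_variation(lambda t, n=n: s_n(n, t)) for n in (4, 16, 64)]
# growth increases with n, consistent with var(s_n, T) -> infinity.
```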
**REMARK 2.5.** Example (3.1) in [6] exhibits a reflexive Banach space $\mathcal{X}_0$ and a trigonometrically well-bounded operator $T_0 \in \mathfrak{B}(\mathcal{X}_0)$ such that for each trigonometric polynomial $Q$, we have

$$
\|Q(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} = |Q(1)| + \text{var}(Q, \mathbb{T}).
$$

Hence $\|\mathfrak{s}_n(T_0)\|_{\mathfrak{B}(\mathcal{X}_0)} \to \infty$ as $n \to \infty$. A noteworthy feature of the reflexive Banach space $\mathcal{X}_0$ used in this example is that, by virtue of [25] (note, e.g., Lemma 1.e.4 in [33]), $\mathcal{X}_0$ cannot be made uniformly convex by equivalent renorming (in view of Corollary 3 of [26], this last can be equivalently restated by saying that the reflexive Banach space $\mathcal{X}_0$ is not super-reflexive).

On a more positive note, we mention here that trigonometrically well-bounded operators do enjoy the following operator-valued variant of Fejér’s Theorem (see Theorem (3.10)(i) of [7]). (For a marked improvement on the conclusion of this next theorem in the presence of super-reflexivity, see Theorem 4.4 below.)

**THEOREM 2.6.** Suppose that $U$ is a trigonometrically well-bounded operator on a Banach space $\mathcal{X}$, and $E(\cdot)$ is the spectral decomposition of $U$. Let $f \in \mathrm{BV}(\mathbb{T})$, and let $f^{\#}$ be as in (2.6). Then the series $\sum_{k=-\infty}^{\infty} \hat{f}(k) U^k$ is $(C, 1)$-summable in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f^{\#}(t)\, dE(t)$; that is, the sequence

$$
\left\{ \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) \hat{f}(k) U^k \right\}_{n=1}^{\infty}
$$

converges in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f^{\#}(t)\, dE(t)$.

The centerpiece of our considerations in §3 will be a proof that, in the context of super-reflexivity, spectral integration against $E(\cdot)$ can be extended from BV($\mathbb{T}$) to the broader classes $V_p(\mathbb{T})$ consisting of the functions of bounded $p$-variation, where $p$ ranges over an appropriate subinterval of $(1, \infty)$ (see Theorem 3.7 below). To avoid later digressions, we take up here the definition of the $p$-variation of a function $\psi$.
**DEFINITION 2.7.** Let $J = [a, b]$ be a compact interval of $\mathbb{R}$. For $1 \le p < \infty$, the $p$-variation of a function $\psi: J \to \mathbb{C}$ is specified by writing
$$ \mathrm{var}_p(\psi, [a,b]) = \sup \left\{ \sum_{k=1}^{N} |\psi(x_k) - \psi(x_{k-1})|^p \right\}^{1/p}, $$
where the supremum is extended over all partitions $a = x_0 < x_1 < \dots < x_N = b$ of $[a, b]$.
By definition, the class $V_p(J)$ consists of all functions $\psi: J \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, [a,b]) < \infty$. It is readily verified that $V_p(J)$ becomes a unital Banach algebra under pointwise operations when endowed with the norm $\|\cdot\|_{V_p(J)}$ specified by
$$ \|\psi\|_{V_p(J)} = \sup\{|\psi(x)| : x \in J\} + \mathrm{var}_p(\psi, J). $$
Moreover, if $\psi \in V_p(J)$, then $\lim_{x \to y^+} \psi(x)$ exists for each $y \in [a, b)$ and $\lim_{x \to y^-} \psi(x)$ exists for each $y \in (a, b]$, and the set of discontinuities of $\psi$ in $J$ is countable. It is elementary that $V_1(J)$ and BV$(J)$ consist of the same functions, and also that $V_q(J) \subseteq V_r(J)$ when $1 \le q \le r < \infty$, since $\|\psi\|_{V_p(J)}$ is a decreasing function of $p$. For additional fundamental features of $V_p(J)$, see, e.g., §2 in [11].
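
To make Definition 2.7 concrete, the following Python sketch (our illustration; the helper name is ours) evaluates the partition sum for a function alternating between $0$ and $1$: with $M$ unit jumps the sum is $M^{1/p}$, so it equals $16$ for $p = 1$ but only $2$ for $p = 4$. This rapid drop as $p$ grows is the quantitative content of the inclusions $V_q(J) \subseteq V_r(J)$ for $q \le r$:

```python
def partition_sum_p(values, p):
    # One partition's p-variation sum: (sum_k |psi(x_k) - psi(x_{k-1})|^p)^(1/p),
    # where `values` lists psi(x_0), ..., psi(x_N) along the partition.
    return sum(abs(b - a) ** p for a, b in zip(values, values[1:])) ** (1.0 / p)

# A sample alternating between 0 and 1, with M = 16 unit jumps; for such an
# alternating pattern this partition already attains the supremum defining var_p.
zigzag = [k % 2 for k in range(17)]

v1 = partition_sum_p(zigzag, 1.0)  # 16^(1/1) = 16
v2 = partition_sum_p(zigzag, 2.0)  # 16^(1/2) = 4
v4 = partition_sum_p(zigzag, 4.0)  # 16^(1/4) = 2
```

(Note that for a monotone function the coarsest partition dominates, so its $\mathrm{var}_p$ equals its total jump for every $p$; oscillation is what separates the classes $V_p$.)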
For $\psi: \mathbb{T} \to \mathbb{C}$, we define $\mathrm{var}_p(\psi, \mathbb{T})$ to be $\mathrm{var}_p(\psi(e^{i\cdot}), [0, 2\pi])$, and we designate by $V_p(\mathbb{T})$ the class consisting of all functions $\psi: \mathbb{T} \to \mathbb{C}$ such that $\mathrm{var}_p(\psi, \mathbb{T}) < \infty$. With pointwise operations on $\mathbb{T}$, $V_p(\mathbb{T})$ likewise becomes a unital Banach algebra when furnished with the norm
$$ \|\psi\|_{V_p(\mathbb{T})} = \|\psi(e^{i\cdot})\|_{V_p([0,2\pi])} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \mathrm{var}_p(\psi, \mathbb{T}). $$
**REMARK 2.8.** (i) For $1 \le p < \infty$ and $\psi: \mathbb{T} \to \mathbb{C}$, there is also a rotation-invariant notion for the $p$-variation of $\psi$ on $\mathbb{T}$, which serves as an alternative to $\mathrm{var}_p(\psi, \mathbb{T})$ defined above. Specifically, we can define

$$ \nu_p(\psi, \mathbb{T}) = \sup \left\{ \sum_{k=1}^{N} |\psi(e^{it_k}) - \psi(e^{it_{k-1}})|^p \right\}^{1/p}, $$
where the supremum is taken over all finite sequences $-\infty < t_0 < t_1 < \dots < t_N = t_0 + 2\pi < \infty$. It is evident that
$$ (2.8) \qquad \mathrm{var}_p(\psi, \mathbb{T}) \le \nu_p(\psi, \mathbb{T}) \le 2 \mathrm{var}_p(\psi, \mathbb{T}), $$
and that $\nu_1(\psi, \mathbb{T}) = \mathrm{var}_1(\psi, \mathbb{T})$. Moreover, for $1 \le p < \infty$, $V_p(\mathbb{T})$ is also a unital Banach algebra under the norm $\|\cdot\|_{\nu_p(\mathbb{T})}$ given by
$$ \|\psi\|_{\nu_p(\mathbb{T})} = \sup\{|\psi(z)| : z \in \mathbb{T}\} + \nu_p(\psi, \mathbb{T}), $$
which, by virtue of (2.8), is obviously equivalent to the Banach algebra norm $\|\cdot\|_{V_p(\mathbb{T})}$ defined above. (When convenient, we shall use the equivalence of the norms $\|\cdot\|_{\nu_p(\mathbb{T})}$ and $\|\cdot\|_{V_p(\mathbb{T})}$ without comment.) Straightforward application of the Generalized Minkowski Inequality shows that if $F \in L^1(\mathbb{T})$ and $\psi \in V_p(\mathbb{T})$, then the convolution $F * \psi$ belongs to $V_p(\mathbb{T})$, with
$$ (2.9) \qquad \|F * \psi\|_{V_p(\mathbb{T})} \le \|F\|_{L^1(\mathbb{T})} \|\psi\|_{\nu_p(\mathbb{T})} \le 2 \|F\|_{L^1(\mathbb{T})} \|\psi\|_{V_p(\mathbb{T})}. $$
(ii) It is worth noting here that if $1 < q < \infty$, then $\bigcup_{1 \le p < q} V_p(\mathbb{T})$ is not dense in $V_q(\mathbb{T})$. To see this, first note that if $1 \le p < \infty$ and $f \in V_p(\mathbb{T})$, then, in the notation of [29], we have $f \in \Lambda_p$. This is a standard inclusion, established for $p=1$ in Lemma 9 of [29], and for $1 < p < \infty$ on pages 259–260 of [40] (nowadays this inclusion for $1 < p < \infty$ is also transparent via, e.g., Theorem 3.1 of [23]). Hence Lemma 11 of [29] shows that $\{\hat{f}(k)\}_{k=-\infty}^{\infty}$, the sequence of Fourier coefficients of $f$, satisfies
$$ (2.10) \qquad \sup\{|k|^{1/p}|\hat{f}(k)| : k \in \mathbb{Z}\} < \infty. $$
In view of this, we can define for $1 \le p < \infty$ the linear mapping $\mathfrak{T}_p : V_p(\mathbb{T}) \to \ell^\infty(\mathbb{Z})$ by writing $\mathfrak{T}_p(f) = \{|k|^{1/p} \hat{f}(k)\}_{k=-\infty}^{\infty}$. It follows via the Closed Graph Theorem that $\mathfrak{T}_p$ is continuous, and so the following set $\mathcal{N}_p(\mathbb{T})$, which coincides with $(\mathfrak{T}_p)^{-1}(c_0(\mathbb{Z}))$, is a closed subspace of $V_p(\mathbb{T})$:
$$ \mathcal{N}_p(\mathbb{T}) = \{g \in V_p(\mathbb{T}) : |k|^{1/p} \hat{g}(k) \to 0 \text{ as } |k| \to \infty\}. $$
It is clear from (2.10) that $\bigcup_{1 \le p < q} V_p(\mathbb{T}) \subseteq \mathcal{N}_q(\mathbb{T})$. However, $F_q$, Hardy's $(2\pi)$-periodic, Weierstrass-type, continuous, nowhere differentiable function from [28], which is specified by
$$ F_q(t) = \sum_{n=0}^{\infty} 2^{-n/q} \cos(2^n t) \quad \text{for all } t \in \mathbb{R}, $$
belongs to $\mathrm{Lip}_{1/q}(\mathbb{R})$ by 1.33 of [28], and hence its restriction $F_q|_{[0, 2\pi]}$ can be regarded as belonging to $V_q(\mathbb{T})$. It is clear that for each non-negative integer $n$,
$$ 2^{n/q} \widehat{F}_q(2^n) = \frac{1}{2}, $$
whence $F_q|_{[0, 2\pi]}$ does not belong to $\mathcal{N}_q(\mathbb{T})$. (Compare (9.4) of [40].)
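
The displayed coefficient identity can be checked numerically. The sketch below (ours; the truncation depth and grid size are arbitrary choices) approximates $\widehat{F}_q(2^m)$ for a truncation of Hardy's series by the rectangle rule, which is exact here because every frequency involved is smaller than the number of sample points:

```python
import cmath
import math

def hardy_F(t, q, terms=13):
    # Truncation of Hardy's function: sum_{n=0}^{terms-1} 2^{-n/q} cos(2^n t).
    return sum(2.0 ** (-n / q) * math.cos((2 ** n) * t) for n in range(terms))

def fourier_coeff(f, k, N=16384):
    # (2*pi)^{-1} * integral over [0, 2*pi] of f(t) e^{-ikt} dt, via the
    # N-point rectangle rule (exact for trigonometric polynomials whose
    # frequencies stay below N in absolute value, as is the case here).
    return sum(f(2.0 * math.pi * j / N) * cmath.exp(-2j * math.pi * k * j / N)
               for j in range(N)) / N

q, m = 2.0, 3
c = fourier_coeff(lambda t: hardy_F(t, q), 2 ** m)
ratio = (2 ** m) ** (1.0 / q) * c.real  # 2^{m/q} * hat{F}_q(2^m), expected 1/2
```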
If we replace absolute values by norms in the foregoing definitions of $p$-variation, we arrive at the corresponding definitions for vector-valued functions. Furthermore, for a vector-valued function $f$ defined on $\mathbb{R}$ (including the scalar-valued case), the standard counterpart for $\mathbb{R}$ of $p$-variation is given by
$$ \operatorname{var}_p(f, \mathbb{R}) = \sup_{-\infty < a < b < \infty} \operatorname{var}_p(f, [a, b]). $$
If $E(\cdot)$ is a spectral family of projections in an arbitrary Banach space $\mathcal{X}$, and $1 \le p < \infty$, we shall also use the symbol $\operatorname{var}_p(E)$ to denote
$$ \sup\{\operatorname{var}_p(E(\cdot)x, \mathbb{R}) : \|x\| \le 1\}. $$
**3. Super-reflexivity and spectral integration of $V_p(\mathbb{T})$ with $p > 1$.**
For extensive details and terminology regarding the structure theory of super-reflexive spaces, we refer the interested reader to, e.g., Part 4 of [2]. One of R. C. James' inequalities for super-reflexive spaces (Theorem 3 of [30]) states the following.
**THEOREM 3.1.** Let $X$ be a super-reflexive Banach space. If $\phi$ and $K$ are real numbers such that
$$ 0 < 2\phi < 1/K \le 1, $$
then there is $q = q(X, \phi, K) \in (1, \infty)$ such that for any normalized basic sequence $\{y_j\}$ in $X$ with basis constant not exceeding $K$, we have
$$ (3.1) \qquad \phi\left\{\sum_j |a_j|^q\right\}^{1/q} \le \left\|\sum_j a_j y_j\right\|, $$
for all scalar sequences $\{a_j\}$ such that $\sum_j a_j y_j$ converges.
In the context of a spectral family of projections in a super-reflexive space, James's Theorem 3.1 above readily specializes so as to take the following form.
**PROPOSITION 3.2.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, and $\phi$ is a real number satisfying
$$ (3.2) \qquad 0 < \phi < \frac{1}{4\|E\|_u}, $$
then there is a real number $q = q(X, \phi, \|E\|_u) \in (1, \infty)$ such that
$$ (3.3) \qquad \operatorname{var}_q(E) \le \frac{2\|E\|_u}{\phi}. $$
*Proof.* Let $x \in X \setminus \{0\}$, and suppose that $-\infty < \lambda_0 < \lambda_1 < \dots < \lambda_N < \infty$. Let $\{z_j\}_{j=1}^M$ be the basic sequence consisting of all non-zero terms extracted from $\{\{E(\lambda_k) - E(\lambda_{k-1})\}x\}_{k=1}^N$, let $\{y_j\}_{j=1}^M$ be the normalized basic sequence $\{z_j/\|z_j\|\}_{j=1}^M$ (whose basis constant clearly does not exceed $2\|E\|_u$), and let $\{a_j\}_{j=1}^M$ be the sequence of real numbers $\{\|z_j\|\}_{j=1}^M$. Then, in the present context, (3.1) becomes the desired conclusion (3.3), since the sum in the majorant of (3.1) telescopes here. ■
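
To spell out the telescoping (our gloss on the proof): with the above choices,

```latex
\sum_j a_j y_j
  = \sum_{j=1}^{M} \|z_j\|\,\frac{z_j}{\|z_j\|}
  = \sum_{k=1}^{N} \{E(\lambda_k) - E(\lambda_{k-1})\}x
  = \{E(\lambda_N) - E(\lambda_0)\}x ,
```

so the majorant of (3.1) is at most $2\|E\|_u\|x\|$, and (3.1) reads $\phi\{\sum_{k=1}^{N} \|\{E(\lambda_k) - E(\lambda_{k-1})\}x\|^q\}^{1/q} \le 2\|E\|_u\|x\|$; taking suprema over all such partitions and over $\|x\| \le 1$ gives (3.3).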
Since we shall not require any specificity for the roles played by the constants $\phi$, $\|E\|_u$, and $q = q(X, \phi, \|E\|_u)$ in Proposition 3.2, we include here the following condensed version (which can also be derived directly from Proposition IV.II.3 on pages 249–250 of [2] by similar reasoning to that above, after using the equivalent renorming of $X$ specified by $|||x||| = \sup_{-\infty<a<b<\infty} \|\{E(b) - E(a)\}x\|$ to convert to a monotone basic sequence).
**PROPOSITION 3.3.** If $E(\cdot)$ is a spectral family of projections in a super-reflexive Banach space $X$, then there is a constant $q \in (1, \infty)$ such that $\text{var}_q(E) < \infty$.
The obvious vehicle for using Proposition 3.3 to derive the spectral integration of $V_p(\mathbb{T})$ for appropriate values of $p \in (1, \infty)$ is the following fundamental theorem of Young-Stieltjes integration (see §10 of [40]).
**THEOREM 3.4.** Suppose that $J = [a, b]$ is a compact interval, $1 < p, q < \infty$, $p^{-1} + q^{-1} > 1$, and $f \in V_p(J)$, $g \in V_q(J)$ have no common discontinuities. Then the Riemann-Stieltjes integral $\int_a^b f(t) dg(t)$ exists and obeys the estimate
$$ \left| \int_a^b f(t) dg(t) \right| \le \left\{ 1 + \zeta \left( \frac{1}{p} + \frac{1}{q} \right) \right\} \|f\|_{V_p(J)} \text{var}_q(g, J). $$
*(Here $\zeta$ designates the Riemann zeta function specified by $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for $s > 1$.)*
**THEOREM 3.5.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$. Let $u \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$. Then, in terms of the notation of (2.6), for every $f \in \text{BV}(\mathbb{T})$ we have
$$ (3.4) \quad \left\| \int_{[0,2\pi]}^\oplus f^{\#}(t)\, dE(t) \right\| \le 3 \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \|f\|_{V_u(\mathbb{T})} \text{var}_q(E). $$
*Proof.* Here and henceforth we denote by $\{\kappa_n\}_{n=0}^\infty$ the Fejér kernel for $\mathbb{T}$,
$$ \kappa_n(z) = \sum_{k=-n}^{n} \left(1 - \frac{|k|}{n+1}\right) z^k. $$
Clearly $u^{-1} + q^{-1} > 1$. For $f \in \text{BV}(\mathbb{T})$, each trigonometric polynomial $\kappa_n * f$ is in $BV(\mathbb{T}) \subseteq V_u(\mathbb{T})$, with

$$
\|\kappa_n * f\|_{BV(\mathbb{T})} \leq \|f\|_{BV(\mathbb{T})}.
$$

For the integral
$$
\int_{[0,2\pi]} (\kappa_n * f)(e^{it})\, dx^*(E(t)x)
$$

(which automatically exists for arbitrary $x \in X$ and $x^*$ in the dual space $X^*$), we now apply Theorem 3.4 to the pair of functions $\kappa_n * f \in V_u(\mathbb{T})$ and $x^*(E(\cdot)x) \in V_q([0, 2\pi])$ to obtain the estimate

$$
\left| \int_{[0,2\pi]} (\kappa_n * f)(e^{it})\, dx^*(E(t)x) \right| \le \left\{ 1 + \zeta \left( \frac{1}{u} + \frac{1}{q} \right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \mathrm{var}_q(E) \|x\| \|x^*\|,
$$

and consequently for each $n$, we see with the aid of this last estimate that

$$
(3.5) \quad \left\| \int_{[0,2\pi]} (\kappa_n * f)(e^{it})\, dE(t) \right\| \le \left\{ 1 + \zeta \left(\frac{1}{u} + \frac{1}{q}\right) \right\} \| \kappa_n * f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E) \le 2 \left\{ 1 + \zeta \left(\frac{1}{u} + \frac{1}{q}\right) \right\} \| f \|_{V_u(\mathbb{T})} \operatorname{var}_q(E).
$$

Since $\{\kappa_n * f\}_{n=0}^\infty$ converges pointwise to $f^\#$ on $\mathbb{T}$ while its terms have uniformly bounded $1$-variations, we can infer via Theorem 2.2 above that, in the strong operator topology,

$$
\int_{[0,2\pi]} (\kappa_n * f)(e^{it})\, dE(t) \to \int_{[0,2\pi]}^{\oplus} f^\#(t)\, dE(t).
$$

Hence (3.5) shows that (3.4) holds. ■
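
The proof leans on two standard properties of the Fejér kernel: $\kappa_n \ge 0$ and $\|\kappa_n\|_{L^1(\mathbb{T})} = 1$, which combine with (2.9) to give the factor $2$ in (3.5). A small Python check of both properties on a sample grid (our sketch; grid sizes are arbitrary choices):

```python
import math

def fejer(n, t):
    # kappa_n(e^{it}) = sum_{k=-n}^{n} (1 - |k|/(n+1)) e^{ikt}; real-valued.
    return sum((1.0 - abs(k) / (n + 1)) * complex(math.cos(k * t), math.sin(k * t))
               for k in range(-n, n + 1))

n, N = 10, 4096
samples = [fejer(n, 2.0 * math.pi * j / N) for j in range(N)]
nonnegative = all(z.real > -1e-9 and abs(z.imag) < 1e-9 for z in samples)
# Since kappa_n >= 0, its L^1 norm equals its mean value, i.e. the 0-th
# Fourier coefficient, which is 1; the rectangle rule below is exact here
# because kappa_n has degree n < N.
l1_norm = sum(abs(z) for z in samples) / N
```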
In order to pass from the estimate in (3.4) for the spectral integral of $f^\#$ when $f \in BV(\mathbb{T})$ to the spectral integration of $V_p(\mathbb{T})$-functions, we shall need to rely on the following exemplar of the tools which spectral integration furnishes for such situations.
**THEOREM 3.6.** Suppose that $U$ is a trigonometrically well-bounded operator on an arbitrary Banach space $\mathcal{X}$, $E(\cdot)$ is the spectral decomposition of $U$, and $1 < u < \infty$. Suppose further that there is a constant $\tau$ such that

$$
(3.6) \quad \left\| \int_{[0,2\pi]}^{\oplus} \psi^{\#}(e^{it})\, dE(t) \right\| \leq \tau \|\psi\|_{V_u(\mathbb{T})} \quad \text{for all } \psi \in BV(\mathbb{T}).
$$

Then if $1 \le p < u$, the spectral integral $\int_{[0,2\pi]} \phi(e^{it})\, dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it})\, dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(\mathcal{X})$ such that, for a constant $K_{p,u}$ depending only on $p$ and $u$,

$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t) \right\| \leq \tau K_{p,u} \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}). $$
*Proof*. A demonstration of the current theorem can readily be modeled after the proof of Theorem 2.1 in [11] by replacing the Fourier multiplier norm estimate in Proposition 2.3 et seq. of [11] by the present hypothesis (3.6). Alternatively, one can extract key elements of a proof for the current theorem by making suitable modifications to the reasoning for its Marcinkiewicz power-classes counterpart in Theorem 12 of [18]. ■
By taking $u = 2^{-1}(p+q')$ in Theorem 3.5 while combining Theorems 3.5 and 3.6 we arrive at the following principal result, which guarantees spectral integration of $V_p(\mathbb{T})$ spaces in the presence of super-reflexivity, and thereby extends to each $V_p(\mathbb{T})$ space, throughout an appropriate range of $p > 1$, the BV$(\mathbb{T})$-functional calculus for trigonometrically well-bounded operators.
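
For the record (our verification of the index bookkeeping): if $p \in (1, q')$ and $u = 2^{-1}(p + q')$, then $p < u < q'$, so $u$ is admissible in Theorem 3.5 and $p < u$ as required in Theorem 3.6; moreover, since $u < q'$,

```latex
\frac{1}{u} + \frac{1}{q} \;>\; \frac{1}{q'} + \frac{1}{q}
  \;=\; \Bigl(1 - \frac{1}{q}\Bigr) + \frac{1}{q} \;=\; 1 ,
```

so the factor $\zeta(u^{-1} + q^{-1})$ appearing in (3.4) is finite.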
**THEOREM 3.7.** Let $X$ be a super-reflexive Banach space, and let $E(\cdot)$ be the spectral decomposition of a trigonometrically well-bounded operator $U \in \mathfrak{B}(X)$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$. Let $p \in (1, q')$, where $q' = q(q-1)^{-1}$ is the conjugate index of $q$. Then the spectral integral $\int_{[0,2\pi]} \phi(e^{it}) dE(t)$ exists for each $\phi \in V_p(\mathbb{T})$, and the mapping $\phi \in V_p(\mathbb{T}) \mapsto \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t)$ is an identity-preserving algebra homomorphism of $V_p(\mathbb{T})$ into $\mathfrak{B}(X)$ such that
$$ \left\| \int_{[0,2\pi]}^{\oplus} \phi(e^{it}) dE(t) \right\| \leq K_{p,q} \text{var}_q(E) \| \phi \|_{V_p(\mathbb{T})} \quad \text{for all } \phi \in V_p(\mathbb{T}). $$
**REMARK 3.8.** (i) As already indicated above, from both a conceptual and historical standpoint, Proposition 3.2 (along with its abbreviated version in Proposition 3.3) can best be viewed as the immediate specialization to spectral families of James’ celebrated estimate for super-reflexive spaces here quoted as Theorem 3.1. On the basis of extensive calculations aided by [30], Theorem 2.1 of [21] asserts what amounts to Proposition 3.2 above. The reasoning devoted to Theorem 2.1 in [21] occurs there on pp. 14–28, 31, with the following description on page 23: “The proof of Theorem 2.1 is rather involved, and requires several technical results”.
(ii) Some generic spectral integration tool for the general Banach space setting, such as Theorem 3.6, seems to be required for the transition from Proposition 3.2 and the fundamental theorem of Young–Stieltjes integration reproduced in Theorem 3.4 in order to arrive at Theorem 3.7. The reasoning offered for Theorem 4.1 in [21], which purports to establish the same result as Theorem 3.7 above without such a transitional tool, is flawed, primarily because it rests on the false premise that $V_1(\mathbb{T})$ is norm-dense in $V_p(\mathbb{T})$ for $1 < p < \infty$, in contradiction to the result in Remark 2.8(ii) above.

We now proceed to associate with Theorem 3.7 a useful convergence theorem for appropriate nets of spectral integrals in the context of super-reflexivity. This (as well as Theorem 3.11 below) furnishes the promised extension of Theorem 2.2 to functions of higher variation.
**THEOREM 3.9.** *Assume the hypotheses on $X$, $E(\cdot)$, $U$, and $q$ of Theorem 3.7, and let $p \in (1, q')$. Suppose that $\{g_\beta\}_{\beta \in B}$ is a net of mappings from $\mathbb{T}$ into $\mathbb{C}$ satisfying*
$$
(3.7) \qquad \rho \equiv \sup\{\operatorname{var}_p(g_\beta, \mathbb{T}) : \beta \in B\} < \infty,
$$

*and such that for each $\beta \in B$ and each $t_0 \in \mathbb{R}$,*

$$
(3.8) \qquad \lim_{t \to t_0^-} g_\beta(e^{it}) = g_\beta(e^{it_0}).
$$

*Suppose further that $\{g_\beta\}_{\beta \in B}$ converges pointwise on $\mathbb{T}$ to a complex-valued function $g$. Then $g \in V_p(\mathbb{T})$, and the net*

$$
\left\{ \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it})\, dE(t) \right\}_{\beta \in B}
$$

converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t).$
*Proof.* Clearly, $\operatorname{var}_p(g, \mathbb{T}) \le \rho < \infty$. Choose $q_1$ so that $1 < q < q_1 < \infty$ and $p^{-1} + q_1^{-1} > 1$. Fix $x \in X \setminus \{0\}$, let $\varepsilon > 0$ be given, and use (2.5) to infer that $[0, 2\pi]$ has a partition $\mathcal{P}_\varepsilon = (0 = t_0 < t_1 < \dots < t_J = 2\pi)$ such that

$$
(3.9) \qquad \omega(\mathcal{U}, E, x) < \varepsilon \quad \text{for any refinement } \mathcal{U} \text{ of } \mathcal{P}_{\varepsilon}.
$$

For an arbitrary pair of refinements of $\mathcal{P}_\varepsilon$, say $\mathcal{P} = (0 = a_0 < a_1 < \dots < a_N = 2\pi)$, $\mathcal{Q} = (0 = b_0 < b_1 < \dots < b_M = 2\pi)$, and for any $\beta \in B$, we shall now consider the following two sums:
$$
S_1 \equiv \sum_{j=1}^{N} E(a_{j-1})x\{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\},
$$

$$
S_2 \equiv \sum_{m=1}^{M} E(b_{m-1})x\{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\}.
$$

For $1 \le \nu \le J$, let $I_\nu = [y_\nu, z_\nu]$ be the rightmost subinterval of $\mathcal{P}$ contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$, and let $S'_1$ denote the sum $S_1$ after the replacement of the terms $E(y_\nu)x\{g_\beta(e^{iz_\nu}) - g_\beta(e^{iy_\nu})\}$, $1 \le \nu \le J$, by corresponding terms $E(y_\nu)x\{g_\beta(e^{iz'_\nu}) - g_\beta(e^{iy_\nu})\}$, where $y_\nu < z'_\nu < z_\nu$, $1 \le \nu \le J$. Moreover, we can choose these points $z'_\nu$, $1 \le \nu \le J$, so that we can similarly form $S'_2$ from $S_2$ by truncating to the same right end-point $z'_\nu$ the rightmost interval in the string of subintervals of $\mathcal{Q}$ contained in each $[t_{\nu-1}, t_\nu]$. In terms of this notation, we can write

$$S'_1 - S'_2 = \sum_{\nu=1}^{J} (\Omega_{\nu} - \Lambda_{\nu}),$$
where, for $1 \le \nu \le J$, $\Omega_\nu$ (resp., $\Lambda_\nu$) represents the contribution to $S'_1$ (resp., $S'_2$) of the string of intervals that are contained in the subinterval $[t_{\nu-1}, t_\nu]$ of $\mathcal{P}_\varepsilon$. Provided that the pair of reciprocal indices involved has sum exceeding 1 (as is true here for $q_1^{-1}, p^{-1}$), the reasoning leading up to and including Young's estimate (6.4) in [40] can be applied to any pair of qualifying functions such that one is vector-valued, and the other is scalar-valued (a quick way to see this is to apply temporarily an arbitrary continuous linear functional, then invoke directly the results in [40] for a pair of scalar-valued functions, and then revert to norms in the ultimate vector-valued expressions).
Applying Young's estimate (6.4), and then the technique in (10.8) of [40], together with (3.9) above, we can infer that for $1 \le \nu \le J$ we have, in terms of the Riemann zeta function $\zeta$,

$$
(3.10) \qquad \left\| \Omega_{\nu} - \Lambda_{\nu} \right\| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_{q}^{q/q_1}(E(\cdot)x, [t_{\nu-1}, t_{\nu}]) \operatorname{var}_{p}(g_{\beta}, [t_{\nu-1}, t_{\nu}]).
$$

Summing the estimates in (3.10) from $\nu = 1$ to $J$, and then applying Hölder's inequality (for the pair of indices $q_1, p$) to the resulting majorant, we find that
$$
(3.11) \qquad \| S'_{1} - S'_{2} \| \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1+\zeta(q_1^{-1}+p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi]) \operatorname{var}_p(g_\beta, \mathbb{T}).
$$

If in the sums $S'_1$ and $S'_2$ we now let each $z'_\nu$ approach from the left the corresponding point $t_\nu$, then (3.8) gives

$$
(3.12) \qquad \begin{aligned}
& \left\| \sum_{j=1}^{N} E(a_{j-1})x \{g_{\beta}(e^{ia_j}) - g_{\beta}(e^{ia_{j-1}})\} - \sum_{m=1}^{M} E(b_{m-1})x \{g_{\beta}(e^{ib_m}) - g_{\beta}(e^{ib_{m-1}})\} \right\| \\
& \qquad \le 2(2\varepsilon)^{(q_1-q)/q_1} \{1 + \zeta(q_1^{-1} + p^{-1})\} \operatorname{var}_q^{q/q_1}(E(\cdot)x, [0, 2\pi])\,\rho.
\end{aligned}
$$

For notational convenience, let us denote by $\delta_\varepsilon$ the majorant in (3.12), while keeping in mind that $\delta_\varepsilon \to 0$ as $\varepsilon \to 0^+$. After a summation by parts is performed on each of the vector-valued sums appearing in the minorant of (3.12), we find that, in the notation of (2.2), the estimate (3.12) can be rewritten as follows:

$$
(3.13) \quad \| \tilde{\mathcal{S}}(\mathcal{P}; g_\beta(e^{i\cdot}), E) x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E) x \| \le \delta_\varepsilon.
$$

Upon letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in (3.13), while simultaneously holding fixed both the arbitrary refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$ and the arbitrary $\beta \in B$, we get
$$
(3.14) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_\beta(e^{it})\, dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g_\beta(e^{i\cdot}), E)x \right\| \le \delta_\varepsilon.
$$

Next, while holding $\mathcal{P}, \mathcal{Q}$ fixed in (3.13), we let $\beta$ run through $B$ to obtain, via the pointwise convergence on $\mathbb{T}$,

$$
\|\tilde{\mathcal{S}}(\mathcal{P}; g(e^{i\cdot}), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x\| \le \delta_\varepsilon.
$$

Letting $\mathcal{P}$ run through all refinements of $\mathcal{P}_\varepsilon$ in this estimate yields, for every refinement $\mathcal{Q}$ of $\mathcal{P}_\varepsilon$,

$$
\left\| \int_{[0,2\pi]}^{\oplus} g(e^{it})\, dE(t)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot}), E)x \right\| \leq \delta_{\varepsilon}.
$$

Combining this estimate with (3.14), we find that for all $\beta \in B$, and every
|
| 522 |
+
refinement $\mathcal{Q}$ of $\mathcal{P}_{\varepsilon}$,
|
| 523 |
+
|
| 524 |
+
$$
|
| 525 |
+
(3.15) \quad \left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x \right\| \\
|
| 526 |
+
\le 2\delta_{\epsilon} + \| \tilde{\mathcal{S}}(\mathcal{Q}; g_{\beta}(e^{i\cdot})), E)x - \tilde{\mathcal{S}}(\mathcal{Q}; g(e^{i\cdot})), E)x \| .
|
| 527 |
+
$$
|
| 528 |
+
|
| 529 |
+
In (3.15), we now specialize $\mathcal{Q}$ to be $\mathcal{P}_{\varepsilon}$, and we see from the pointwise
|
| 530 |
+
convergence of $\{g_{\beta}\}_{\beta \in B}$ to $g$ on $\mathbb{T}$ that for all sufficiently large $\beta \in B$,
|
| 531 |
+
|
| 532 |
+
$$
|
| 533 |
+
\left\| \int_{[0,2\pi]}^{\oplus} g_{\beta}(e^{it}) dE(t)x - \int_{[0,2\pi]}^{\oplus} g(e^{it}) dE(t)x \right\| \le 3\delta_{\varepsilon}. \blacksquare
|
| 534 |
+
$$

**REMARK 3.10.** Our treatment of the spectral integration of functions of higher variation emphasizes applications thereof to a unified framework of trigonometrically well-bounded operators and related periodic functions. For this purpose $[0, 2\pi]$ conveniently serves as the fundamental interval. It is worth noting, however, that the above Theorems 3.7 and 3.9 do not need to be tied directly to trigonometrically well-bounded operators, since they readily imply their analogues for spectral families concentrated on arbitrary intervals by using simple affine changes of the real variable (e.g., mapping $[0, 2\pi]$ onto an interval $J = [a, b]$). The outcome, which includes an extension of the BV($J$)-functional calculus induced by spectral families (2.3), can be stated as follows.

**THEOREM 3.11.** Let $E(\cdot)$ be a spectral family of projections in a super-reflexive Banach space $X$. Suppose that $E(\cdot)$ is concentrated on a compact interval $J = [a, b]$, and let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$. Let $p \in (1, q')$. Then the spectral integral $\int_J \Phi dE$ exists for each $\Phi \in V_p(J)$, and the mapping $\Phi \in V_p(J) \to \int_J^\oplus \Phi dE$ is a continuous identity-preserving homomorphism of the Banach algebra $V_p(J)$ into the Banach algebra $\mathfrak{B}(X)$ such that

$$ \left\| \int_J^\oplus \Phi dE \right\| \le K_{p,q} \operatorname{var}_q(E) \| \Phi \|_{V_p(J)} \quad \text{for all } \Phi \in V_p(J). $$

If $\{\Phi_\beta\}_{\beta \in B}$ is a net of mappings from $J$ into $\mathbb{C}$ satisfying

$$ \sup\{\operatorname{var}_p(\Phi_\beta, J) : \beta \in B\} < \infty, $$

and such that for each $\beta \in B$, and each $t_0 \in (a, b]$,

$$ \lim_{t \to t_0^-} \Phi_\beta(t) = \Phi_\beta(t_0), $$

and if $\{\Phi_\beta\}_{\beta \in B}$ converges pointwise on $J$ to a complex-valued function $\Phi$, then $\Phi \in V_p(J)$, and the net

$$ \left\{ \int_J^\oplus \Phi_\beta dE \right\}_{\beta \in B} $$

converges in the strong operator topology of $\mathfrak{B}(X)$ to $\int_J^\oplus \Phi dE$.

**4. Some consequences.** The stage is almost set for the main result of this section (Theorem 4.3), which will establish the precompactness relative to the strong operator topology of the set of rotated Hilbert averages $\tilde{\mathcal{W}}$ corresponding to a trigonometrically well-bounded operator $U$ on a super-reflexive space. In order to obtain this result, we shall also require the following two auxiliary items from the literature.

**PROPOSITION 4.1.** Suppose that $1 \le p < \infty$. Then we have, for the sequence of trigonometric polynomials $\{s_n\}_{n=1}^\infty$ in (2.7),

$$ (4.1) \qquad \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if and only if} \quad p > 1. $$

*Proof*. Since, as was noted in conjunction with (2.7), $\operatorname{var}_1(s_n, \mathbb{T}) \to \infty$ as $n \to \infty$, it suffices to show that

$$ \sup_{n \in \mathbb{N}} \operatorname{var}_p(s_n, \mathbb{T}) < \infty \quad \text{if } p > 1. $$

The derivation of this is included in §12 of the article [40]. ■
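
On a fixed finite grid of sample points, $\operatorname{var}_p$ can be computed exactly by dynamic programming, since a partition of the grid is simply a subsequence of its points. The following sketch (the helper name `p_variation` is ours, purely for illustration) makes concrete how oscillation is penalized less as $p$ grows, the phenomenon behind (4.1):

```python
def p_variation(values, p):
    """Return var_p of a sampled function: the p-th root of the supremum,
    over all subsequences of the sample points, of sum |f(t_{i+1}) - f(t_i)|^p."""
    n = len(values)
    best = [0.0] * n  # best[j]: maximal p-power variation over partitions ending at j
    for j in range(1, n):
        best[j] = max(best[i] + abs(values[j] - values[i]) ** p for i in range(j))
    return max(best) ** (1.0 / p)
```

For instance, the zigzag samples $0, 1, 0, 1$ have $\operatorname{var}_1 = 3$ but $\operatorname{var}_2 = \sqrt{3}$, while a monotone run has the same $\operatorname{var}_p$ (its total rise) for every $p \ge 1$.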

In view of this, the set $\mathfrak{S}$ consisting of all rotates of $\{s_n : n \in \mathbb{N}\}$ must satisfy

$$ (4.2) \qquad \sup_{n \in \mathbb{N}, z \in \mathbb{T}} \|s_n((\cdot)z)\|_{V_p(\mathbb{T})} < \infty \quad \text{if } p > 1, $$

by virtue of (2.8), and because $\{s_n\}_{n=1}^\infty$ is the sequence of partial sums for the Fourier series of a BV($\mathbb{T}$)-function, whence

$$ \sup_{n \in \mathbb{N}} \|s_n\|_{L^\infty(\mathbb{T})} < \infty. $$

The second auxiliary item we shall rely on is the following convenient formulation of the “Helly Selection Theorem for Functions of Bounded p-Variation” (Theorem 2.4 of [36]). (Although it will not be an issue for us, we note that in the parlance of [36], the symbol $\text{var}_p$ denotes what is, in the sense of our notation, $\text{var}_p^p$.)

**THEOREM 4.2.** Let $\mathcal{F}$ be a sequence of functions mapping a subset $\mathcal{M}$ of $\mathbb{R}$ to a metric space $\mathcal{Y}$, and such that, for some $p \in [1, \infty)$, $\mathcal{F}$ has uniformly bounded $p$-variation on $\mathcal{M}$ (in symbols, $\sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty$). Suppose further that for each $t \in \mathcal{M}$, $\{F(t) : F \in \mathcal{F}\}$ has compact closure in $\mathcal{Y}$. Then $\mathcal{F}$ has a subsequence $\{f_n\}_{n=1}^\infty$ pointwise convergent on $\mathcal{M}$ to a function $f : \mathcal{M} \to \mathcal{Y}$ such that

$$ \text{var}_p(f, \mathcal{M}) \leq \sup\{\text{var}_p(F, \mathcal{M}) : F \in \mathcal{F}\} < \infty. $$

**THEOREM 4.3.** If $U$ is a trigonometrically well-bounded operator on a super-reflexive Banach space $X$, then the closure, relative to the strong operator topology, of the class $\tilde{\mathcal{W}}$ specified in (1.3) by

$$ (4.3) \qquad \tilde{\mathcal{W}} = \left\{ \sum_{0 < |k| \le n} \frac{z^k}{k} U^k : n \in \mathbb{N}, z \in \mathbb{T} \right\} $$

is compact in the strong operator topology, and hence, in particular,

$$ (4.4) \qquad \sup\{\|T\| : T \in \tilde{\mathcal{W}}\} < \infty. $$

Conversely, if $\mathcal{X}_0$ is a reflexive Banach space, and $U \in \mathfrak{B}(\mathcal{X}_0)$ is an invertible operator such that (4.4) holds, then $U$ is trigonometrically well-bounded.

*Proof.* Let $E(\cdot)$ be the spectral decomposition of $U$, and choose $q,p$ as in the hypotheses of Theorem 3.7. Let $x \in X \setminus \{0\}$. We are required to show that the set $\tilde{\mathcal{W}}x$ is totally bounded in the metric space defined by the norm of $X$. For this purpose, let $\mathcal{G}$ be a sequence in $\tilde{\mathcal{W}}x$. Hence for some sequence $\mathcal{F}$ taken from the set of trigonometric polynomials $\mathfrak{S}$ appearing in the minorant of (4.2), we can express $\mathcal{G}$ as $\mathcal{F}(U)x$. By virtue of (4.2) and Theorem 4.2, we can extract from the sequence of trigonometric polynomials $\mathcal{F}$ a subsequence $\{f_k\}_{k=1}^\infty$ pointwise convergent on $\mathbb{T}$ to a function $f : \mathbb{T} \to \mathbb{C}$ such that

$$ \text{var}_p(f, \mathbb{T}) \leq \sup\{\text{var}_p(F, \mathbb{T}) : F \in \mathfrak{S}\} < \infty. $$

By Theorem 3.9, applied to $\{f_k\}_{k=1}^\infty$, we see that $\{f_k(U)\}_{k=1}^\infty$ converges in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f(e^{it}) dE(t)$. In particular, the subsequence $\{f_k(U)x\}_{k=1}^\infty$ of $\mathcal{G}$ converges in the norm of $X$, and so $\tilde{\mathcal{W}}x$ is totally bounded.

The converse conclusion follows directly from Proposition 1.1, since for each $z \in \mathbb{T}$, the $(C, 1)$ averages appearing in (1.2) are the means of the corresponding discrete Hilbert averages in (4.3). ■

An application of Theorem 3.7 of [12] to (4.4) yields the following improvement of Theorem 2.6.

**THEOREM 4.4.** Let $X$ be a super-reflexive Banach space, let $U \in \mathfrak{B}(X)$ be trigonometrically well-bounded, and let $E(\cdot)$ be the spectral decomposition of $U$. Then for each $f \in \text{BV}(\mathbb{T})$, the series $\sum_{k=-\infty}^{\infty} \hat{f}(k)U^k$ converges in the strong operator topology to $\int_{[0,2\pi]}^{\oplus} f^{\#}(t) dE(t)$.

In the presence of super-reflexivity, we now also have the following extension of Theorem 2.6 from $\text{BV}(\mathbb{T})$ to spaces $V_p(\mathbb{T})$, for appropriate $p > 1$.

**THEOREM 4.5.** Let $X$ be a super-reflexive Banach space, and let $U \in \mathfrak{B}(X)$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$, let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $x \in X$,

$$ (4.5) \quad \left\| \sum_{\nu=-n}^{n} \left(1 - \frac{|\nu|}{n+1}\right) \hat{\phi}(\nu) U^\nu x - \left\{ \int_{[0,2\pi]}^\oplus \phi^{\#}(t) dE(t) \right\} x \right\| \to 0 \quad \text{as } n \to \infty. $$

*Proof.* Clearly, the sequence of trigonometric polynomials $\{\kappa_n * \phi\}_{n \ge 0}$ has the property that $\sup_{n \ge 0} \| \kappa_n * \phi \|_{V_p(\mathbb{T})} < \infty$, and by Fejér's Theorem, $(\kappa_n * \phi)(e^{it}) \to \phi^{\#}(t)$ for all $t \in \mathbb{R}$. The desired conclusion is now an immediate consequence of Theorem 3.9 applied to the pointwise convergent sequence $\{\kappa_n * \phi\}_{n \ge 0}$. ■
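
The role Fejér's Theorem plays here can be illustrated numerically. Take for $\phi$ the kernel with Fourier coefficients $1/k$ ($k \neq 0$); its closed form $i(\pi - t)$ on $(0, 2\pi)$ is a standard fact assumed here, consistent with the relation $h = \hat{\phi}_0$ recalled in §5 below. The $(C,1)$ means with multipliers $1 - |k|/(n+1)$ then converge pointwise on $(0, 2\pi)$; a minimal sketch:

```python
import cmath


def cesaro_hilbert_mean(t, n):
    """(C,1) mean  sum_{0<|k|<=n} (1 - |k|/(n+1)) e^{ikt}/k  of the series with
    coefficients 1/k (k != 0); for 0 < t < 2*pi it tends to i*(pi - t)."""
    return sum((1 - abs(k) / (n + 1)) * cmath.exp(1j * k * t) / k
               for k in range(-n, n + 1) if k != 0)
```

For $t$ bounded away from $0$ and $2\pi$, moderate $n$ already gives agreement to a few decimal places.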

**REMARK 4.6.** In contrast to the situation for $\text{BV}(\mathbb{T})$-functions in Theorem 4.4, it is an open question whether or not one can, for the general $\phi \in V_p(\mathbb{T})$, improve the strong $(C, 1)$-convergence in (4.5) to strong convergence of the series $\sum_{\nu=-\infty}^{\infty} \hat{\phi}(\nu)U^{\nu}$. In this regard, one can use Theorem 3.1 of [37] in combination with Theorem 4.5 to obtain the following partial result in the positive direction. We omit the details for expository reasons.

**PROPOSITION 4.7.** Suppose that $\mathcal{Y}$ is a UMD space having an unconditional basis, and let $U \in \mathfrak{B}(\mathcal{Y})$ be a trigonometrically well-bounded operator. Denote by $E(\cdot)$ the spectral decomposition of $U$. Let $q \in (1, \infty)$ be the index furnished for $E(\cdot)$ by Proposition 3.3 so that $\text{var}_q(E) < \infty$, and let $p \in (1, q')$. If $\phi \in V_p(\mathbb{T})$, then for each $y \in \mathcal{Y}$ we have, for almost all $z \in \mathbb{T}$,

$$ \left\| \left( \sum_{k=-n}^{n} \hat{\phi}(k) U^k z^k \right) y - \left( \int_{[0,2\pi]}^\oplus (\phi_z)^{\#}(t) dE(t) \right) y \right\|_{\mathcal{Y}} \to 0 \quad \text{as } n \to \infty. $$

**REMARK 4.8.** Since the Haar system is an unconditional basis for $L^r([0, 1])$, $1 < r < \infty$, the space $L^r(\mathbb{T})$ satisfies the hypotheses on $\mathcal{Y}$ of Proposition 4.7. In particular, by specializing to the value $r = 2$, we see that any separable Hilbert space (finite-dimensional or infinite-dimensional) satisfies these hypotheses on $\mathcal{Y}$.

**5. Operator-weighted Hilbert sequence spaces and trigonometrically well-bounded shift operators.** Henceforth, $\mathcal{R}$ will be an arbitrary Hilbert space with inner product $\langle \cdot, \cdot \rangle$. As shown in Theorem 2.3 of [16], shifts on appropriate operator-weighted Hilbert sequence spaces serve as a model for the general behavior of trigonometrically well-bounded operators on arbitrary Hilbert spaces. More specifically, to any invertible operator $V \in \mathfrak{B}(\mathcal{R})$ there correspond a bilateral operator-valued weight sequence $\mathfrak{W}_V \subseteq \mathfrak{B}(\mathcal{R})$ and an affiliated Hilbert sequence space $\ell^2(\mathfrak{W}_V)$ such that $V$ is trigonometrically well-bounded on $\mathcal{R}$ if and only if the right bilateral shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W}_V)$; moreover, if this is the case, then the norm properties of trigonometric polynomials of $\mathcal{R}$ mirror the norm properties of trigonometric polynomials of $V$. (See (5.6) below. For additional background facts regarding these matters, see [12].) In this section, we shall discuss how application of the preceding sections to this circle of ideas in Hilbert space affords some new insights into the role of the Hilbert transform and of multiplier theory in non-commutative analysis.

We begin by describing the relevant class of operator-weighted Hilbert sequence spaces. An *operator-valued weight sequence* on $\mathcal{R}$ will be a bilateral sequence $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty} \subseteq \mathfrak{B}(\mathcal{R})$ such that for each $k \in \mathbb{Z}$, $W_k$ is a positive, invertible, self-adjoint operator. We associate with $\mathfrak{W}$ the weighted Hilbert space $\ell^2(\mathfrak{W})$ consisting of all sequences $x = \{x_k\}_{k=-\infty}^{\infty} \subseteq \mathcal{R}$ such that

$$ \sum_{k=-\infty}^{\infty} \langle W_k x_k, x_k \rangle < \infty, $$

and furnished with the inner product $\langle \langle \cdot, \cdot \rangle \rangle$ specified by

$$ \langle\langle x, y \rangle\rangle = \sum_{k=-\infty}^{\infty} \langle W_k x_k, y_k \rangle. $$

Thus, $\ell^2(\mathfrak{W})$ is a generalization to non-commutative analysis of the $\ell^2$-spaces defined by scalar-valued weight sequences in the special case where $\mathcal{R} = \mathbb{C}$. (For the continuous variable generalization from scalar-valued weights to operator-valued weights, see [39].) Note that for each $z \in \mathbb{T}$, there is a natural unitary operator $\Delta_z$ defined on $\ell^2(\mathfrak{W})$ by writing $\Delta_z(\{x_k\}_{k=-\infty}^{\infty}) = \{z^k x_k\}_{k=-\infty}^{\infty}$.
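
In the scalar case $\mathcal{R} = \mathbb{C}$ (weights $W_k > 0$), the inner product $\langle\langle \cdot, \cdot \rangle\rangle$ and the rotation $\Delta_z$ can be written out directly; that $\Delta_z$ is unitary is just $|z|^{2k} = 1$ termwise. The helper names below (`weighted_inner`, `rotate`) are ours, for illustration only, with finitely supported sequences represented as lists starting at a given index:

```python
import cmath


def weighted_inner(x, y, w):
    """<<x, y>> = sum_k w_k x_k conj(y_k): the ell^2(W) inner product in the
    scalar-weight case (each w_k > 0 plays the role of the operator W_k)."""
    return sum(wk * xk * complex(yk).conjugate() for wk, xk, yk in zip(w, x, y))


def rotate(x, z, start=0):
    """Delta_z: {x_k} -> {z^k x_k}, for a finitely supported sequence whose
    entries x[0], x[1], ... sit at indices start, start + 1, ..."""
    return [z ** (start + i) * xk for i, xk in enumerate(x)]
```

For $|z| = 1$ one checks $\langle\langle \Delta_z x, \Delta_z y \rangle\rangle = \langle\langle x, y \rangle\rangle$ numerically.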

The links between the considerations of the previous sections and $\ell^2(\mathfrak{W})$ stem from the interplay between $\ell^2(\mathfrak{W})$ and the discrete Hilbert kernel $h: \mathbb{Z} \to \mathbb{R}$, which, in terms of the function $\phi_0 \in \text{BV}(\mathbb{T})$ specified in conjunction with (2.7), is expressed by $h = \hat{\phi}_0$. Thus $h(0) = 0$, and $h(k) = k^{-1}$ for $k \in \mathbb{Z} \setminus \{0\}$. The truncates $\{h_N\}_{N=1}^{\infty}$ of the discrete Hilbert kernel $h$ are defined by writing, for each $N \in \mathbb{N}$ and each $k \in \mathbb{Z}$, $h_N(k) = h(k)$ if $|k| \le N$, and $h_N(k) = 0$ if $|k| > N$. The formal operator of convolution by $h$ on $\ell^2(\mathfrak{W})$ will be referred to as the discrete Hilbert transform, and will be symbolized by $D$ (convolution by $h_N$ on $\ell^2(\mathfrak{W})$ will be denoted by $D_N$). If $h$ defines a bounded convolution operator from $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$, we shall say that $\mathfrak{W}$ possesses the *Treil–Volberg property*. It was shown in [12] that in the context of $\ell^2(\mathfrak{W})$, one can define an operator-valued counterpart (the discrete analogue of [39]) for the Muckenhoupt $A_2$-weight condition—if this condition is satisfied by $\mathfrak{W}$, we write $\mathfrak{W} \in A_2(\mathcal{R})$. Since we do not need this $A_2(\mathcal{R})$ weight condition for our present considerations, we shall not pursue it further, except to note that the condition $\mathfrak{W} \in A_2(\mathcal{R})$ is always necessary, but, for the continuous-variable case and infinite-dimensional $\mathcal{R}$, is known not to be sufficient, for $\mathfrak{W}$ to possess the Treil–Volberg property (see, respectively, Proposition 4.4 of [12], and Theorem 1.1 of [27]).
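
On a finitely supported (here scalar-valued) sequence, $D_N$ acts coordinatewise by $(D_N x)(n) = \sum_{0 < |j| \le N} x_{n-j}/j$. The sketch below (dict-based, with a helper name of our own choosing) makes the truncation explicit:

```python
def hilbert_truncate(x, N):
    """Apply D_N, convolution with h_N (h_N(j) = 1/j for 0 < |j| <= N, else 0),
    to a finitely supported sequence given as a dict {index: value}."""
    out = {}
    for k, xk in x.items():
        for j in range(-N, N + 1):
            if j != 0:
                out[k + j] = out.get(k + j, 0.0) + xk / j
    return out
```

Applied to the unit sequence supported at $0$, this reproduces the kernel $h_N$ itself.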

The connection between the Treil–Volberg property and the right (bilateral) shift $\mathcal{R}: \ell^2(\mathfrak{W}) \to \mathcal{R}^\mathbb{Z}$ specified by

$$ \mathcal{R}(\{x_k\}_{k=-\infty}^{\infty}) = \{x_{k-1}\}_{k=-\infty}^{\infty} $$

is expressed as follows (Theorem 4.12 of [12]).

**PROPOSITION 5.1.** Let $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ be an operator-valued weight sequence on the arbitrary Hilbert space $\mathcal{R}$. Then the following assertions are equivalent:

(i) $\mathfrak{W}$ has the Treil–Volberg property.

(ii) The right shift $\mathcal{R}$ is a trigonometrically well-bounded operator on $\ell^2(\mathfrak{W})$.

(iii) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that

$$ (5.1) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \left( 1 - \frac{|k|}{n+1} \right) \frac{\mathcal{R}^k}{k} \right\| < \infty. $$

**REMARK 5.2.** If $\mathcal{R} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$, then for each $z \in \mathbb{T}$, $\Delta_z \mathcal{R} \Delta_{\bar{z}} = z \mathcal{R}$, and hence the condition (1.2) reduces to (5.1) in the context of Proposition 5.1(iii).

By virtue of (4.4), we can add the following two conditions to the list of equivalent conditions in Proposition 5.1.

**PROPOSITION 5.3.** Under the hypotheses of Proposition 5.1, each of the following two conditions is equivalent to the conditions (i)–(iii) listed therein:

(iv) $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$ such that

$$ (5.2) \quad \sup_{n \in \mathbb{N}} \left\| \sum_{0 < |k| \le n} \frac{\mathcal{R}^k}{k} \right\| < \infty. $$

(v) $\{D_N\}_{N=1}^{\infty} \subseteq \mathfrak{B}(\ell^2(\mathfrak{W}))$, with

$$ (5.3) \qquad \sup_{N \in \mathbb{N}} \|D_N\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} < \infty. $$

*Proof*. It is elementary that (iv) ⇒ (iii). The implication (ii) ⇒ (iv) is a consequence of (4.4). If (iv) holds, then for each $N \in \mathbb{N}$,

$$ (5.4) \qquad s_N(\mathcal{R}) = D_N, $$

and hence (v) holds. So the proof of Proposition 5.3 boils down to assuming (v) in order to show any one of the conditions (i) through (iv). Since there is no a priori reason to infer from (v) that $\mathcal{R}$ is a bounded invertible operator on $\ell^2(\mathfrak{W})$, we cannot make immediate use of (5.4), and so we shall sidestep this difficulty by establishing (i) directly. Since the Hilbert space $\ell^2(\mathfrak{W})$ is, in particular, reflexive, it follows from (5.3) that the closure of

$$ \mathcal{D} = \{D_N : N \in \mathbb{N}\} $$

in the weak operator topology of $\mathfrak{B}(\ell^2(\mathfrak{W}))$ is compact in that topology. Consequently, there are a subnet $\{D_{N_\gamma}\}_{\gamma \in \Gamma}$ and an operator $\mathfrak{H} \in \mathfrak{B}(\ell^2(\mathfrak{W}))$ such that

$$ (5.5) \qquad D_{N_\gamma} \to \mathfrak{H} \quad \text{in the weak operator topology of } \mathfrak{B}(\ell^2(\mathfrak{W})). $$

Hence it will suffice to verify that for every vector $y = \{y_k\}_{k=-\infty}^{\infty} \in \ell^2(\mathfrak{W})$ such that the support of $y$ is a singleton, $\mathfrak{H}$ acts on $y$ as convolution by $h$. It is a routine matter to perform this verification by using (5.5) in conjunction with such vectors. ■

**REMARK 5.4.** In classical single-variable Fourier analysis, as well as in its generalizations to norm inequalities involving scalar-valued weights, the boundedness of the relevant Hilbert transform goes hand-in-hand with the boundedness of pillars like the Hardy–Littlewood maximal function and the maximal Hilbert transform, which leave in their wake the uniform boundedness of the Hilbert transform’s truncates. This familiar scenario ultimately entails the validity of the relevant version of the Marcinkiewicz Multiplier Theorem and of the Littlewood–Paley Theorem. However, in the framework of condition (i) of Proposition 5.1 such underpinnings as maximal operators are lacking, and moreover, Theorem 6.1 of [16] shows that there is an operator-valued weight sequence $\mathfrak{W}_0$ on the Hilbert space $\ell^2(\mathbb{N})$ such that $\mathfrak{W}_0$ enjoys the Treil–Volberg property, but the analogues of the classical Marcinkiewicz Multiplier Theorem and the Littlewood–Paley Theorem fail to hold on $\ell^2(\mathfrak{W}_0)$. One motivation for obtaining the above implication (i) ⇒ (v) is that it, nevertheless, confirms the survival of the uniform boundedness for the Hilbert transform’s truncates, in an environment where so many mainstays fail to carry over. The next theorem adds still more to the positive side of the ledger by extending this type of boundedness result to appropriate function classes.

**THEOREM 5.5.** Suppose that $\mathcal{R}$ is an arbitrary Hilbert space, and $\mathfrak{W} = \{W_k\}_{k=-\infty}^{\infty}$ is an operator-valued weight sequence on $\mathcal{R}$ having the Treil–Volberg property. Then there is $\gamma \in (1, \infty)$ such that for each $p$ satisfying $1 \le p < \gamma$, and each function $\phi \in V_p(\mathbb{T})$, convolution by the inverse Fourier transform $\phi^\vee$ on $\ell^2(\mathfrak{W})$ is a bounded linear mapping $\mathfrak{F}_\phi$ of $\ell^2(\mathfrak{W})$ into $\ell^2(\mathfrak{W})$ satisfying

$$
\|\mathfrak{F}_{\phi}\|_{\mathfrak{B}(\ell^2(\mathfrak{W}))} \leq K_{\mathfrak{W},p} \|\phi\|_{V_p(\mathbb{T})}.
$$

*Proof.* Combine Theorems 4.2 and 4.3 of [16] and Corollary 4.4 of [16] with Theorem 3.7 above. ■

We finish this section with a brief sketch of how the above scene furnishes a model for estimates with trigonometrically well-bounded operators on Hilbert spaces. Suppose that $V \in \mathfrak{B}(\mathcal{R})$ is an invertible operator, and let $\mathfrak{W}_V$ be the operator-valued weight sequence on the Hilbert space $\mathcal{R}$ given by $\mathfrak{W}_V = \{(V^k)^*V^k\}_{k=-\infty}^{\infty}$. Lemma 2.2 of [16] and Theorem 2.3 of [16] guarantee that the right shift $\mathcal{R}$ is a bounded invertible linear mapping of $\ell^2(\mathfrak{W}_V)$ onto itself such that for every trigonometric polynomial $Q$,

$$
(5.6) \qquad \|Q(\mathcal{R})\|_{\mathfrak{B}(\ell^2(\mathfrak{W}_V))} = \sup_{z \in \mathbb{T}} \|Q(zV)\|_{\mathfrak{B}(\mathcal{R})}.
$$

In view of Proposition 1.1 and the equivalence of conditions (ii) and (iii) in Proposition 5.1, it follows directly from (5.6) that the right shift $\mathcal{R}$ is trigonometrically well-bounded on $\ell^2(\mathfrak{W}_V)$ if and only if $V$ is trigonometrically well-bounded on $\mathcal{R}$.

References

[1] D. J. Aldous, *Unconditional bases and martingales in $L_p(F)$*, Math. Proc. Cambridge Philos. Soc. 85 (1979), 117-123.

[2] B. Beauzamy, *Introduction to Banach Spaces and Their Geometry*, North-Holland Math. Stud. 68 (Notas de Mat. 86), Elsevier Science, New York, 1982.

[3] E. Berkson, J. Bourgain, and T. A. Gillespie, *On the almost everywhere convergence of ergodic averages for power-bounded operators on $L^p$-subspaces*, Integral Equations Operator Theory 14 (1991), 678-715.

[4] E. Berkson and H. R. Dowson, *On uniquely decomposable well-bounded operators*, Proc. London Math. Soc. (3) 22 (1971), 339-358.

[5] E. Berkson and T. A. Gillespie, *AC functions on the circle and spectral families*, J. Operator Theory 13 (1985), 33-47.

[6] —, —, *Fourier series criteria for operator decomposability*, Integral Equations Operator Theory 9 (1986), 767-789.

[7] —, —, *Stečkin's theorem, transference, and spectral decompositions*, J. Funct. Anal. 70 (1987), 140-170.

[8] E. Berkson and T. A. Gillespie, *The spectral decomposition of weighted shifts and the $A_p$ condition*, Colloq. Math. (special volume dedicated to A. Zygmund) 60-61 (1990), 507-518.

[9] —, —, *Spectral decompositions and harmonic analysis on UMD spaces*, Studia Math. 112 (1994), 13-49.

[10] —, —, *Mean-boundedness and Littlewood-Paley for separation-preserving operators*, Trans. Amer. Math. Soc. 349 (1997), 1169-1189.

[11] —, —, *The q-variation of functions and spectral integration of Fourier multipliers*, Duke Math. J. 88 (1997), 103-132.

[12] —, —, *Mean₂-bounded operators on Hilbert space and weight sequences of positive operators*, Positivity 3 (1999), 101-133.

[13] —, —, *Spectral integration from dominated ergodic estimates*, Illinois J. Math. 43 (1999), 500-519.

[14] —, —, *Spectral decompositions, ergodic averages, and the Hilbert transform*, Studia Math. 144 (2001), 39-61.

[15] —, —, *A Tauberian theorem for ergodic averages, spectral decomposability, and the dominated ergodic estimate for positive invertible operators*, Positivity 7 (2003), 161-175.

[16] —, —, *Shifts as models for spectral decomposability on Hilbert space*, J. Operator Theory 50 (2003), 77-106.

[17] —, —, *Operator means and spectral integration of Fourier multipliers*, Houston J. Math. 30 (2004), 767-814.

[18] —, —, *The q-variation of functions and spectral integration from dominated ergodic estimates*, J. Fourier Anal. Appl. 10 (2004), 149-177.

[19] —, —, *An $M_q(T)$-functional calculus for power-bounded operators on certain UMD spaces*, Studia Math. 167 (2005), 245-257.

[20] E. Berkson, T. A. Gillespie, and P. S. Muhly, *Abstract spectral decompositions guaranteed by the Hilbert transform*, Proc. London Math. Soc. (3) 53 (1986), 489-517.

[21] D. Blagojevic, *Spectral families and geometry of Banach spaces*, PhD thesis, Univ. of Edinburgh, 2007; http://www.era.lib.ed.ac.uk/handle/1842/2389.

[22] J. Bourgain, *Some remarks on Banach spaces in which martingale difference sequences are unconditional*, Ark. Mat. 21 (1983), 163-168.

[23] V. V. Chistyakov and O. E. Galkin, *On maps of bounded p-variation with p > 1*, Positivity 2 (1998), 19-45.

[24] R. Coifman, J. L. Rubio de Francia, and S. Semmes, *Multiplicateurs de Fourier de $L^p(\mathbb{R})$ et estimations quadratiques*, C. R. Acad. Sci. Paris Sér. I Math. 306 (1988), 351-354.

[25] M. M. Day, *Reflexive Banach spaces not isomorphic to uniformly convex spaces*, Bull. Amer. Math. Soc. 47 (1941), 313-317.

[26] P. Enflo, *Banach spaces which can be given an equivalent uniformly convex norm*, Israel J. Math. 13 (1972), 281-288.

[27] T. A. Gillespie, S. Pott, S. Treil, and A. Volberg, *Logarithmic growth for weighted Hilbert transforms and vector Hankel operators*, J. Operator Theory 52 (2004), 103-112.

[28] G. H. Hardy, *Weierstrass's non-differentiable function*, Trans. Amer. Math. Soc. 17 (1916), 301-325.

[29] G. H. Hardy and J. E. Littlewood, *A convergence criterion for Fourier series*, Math. Z. 28 (1928), 612-634.

[30] R. C. James, *Super-reflexive spaces with bases*, Pacific J. Math. 41 (1972), 409-419.

[31] —, *Super-reflexive Banach spaces*, Canad. J. Math. 24 (1972), 896-904.

[32] Y. Katznelson, *An Introduction to Harmonic Analysis*, Dover, New York, 1976.

[33] J. Lindenstrauss and L. Tzafriri, *Classical Banach Spaces II: Function Spaces*, Ergeb. Math. Grenzgeb. 97, Springer, New York, 1979.

[34] B. Maurey, *Système de Haar*, in: Séminaire Maurey-Schwartz 1974-1975, Centre Math. École Polytechnique, Paris, 1975, 26 pp.

[35] G. Pisier, *Un exemple concernant la super-réflexivité*, ibid., 12 pp.

[36] J. E. Porter, *Helly's selection principle for functions of bounded p-variation*, Rocky Mountain J. Math. 35 (2005), 675-679.

[37] J. L. Rubio de Francia, *Fourier series and Hilbert transforms with values in UMD Banach spaces*, Studia Math. 81 (1985), 95-105.

[38] P. G. Spain, *On well-bounded operators of type (B)*, Proc. Edinburgh Math. Soc. (2) 18 (1972), 35-48.

[39] S. Treil and A. Volberg, *Wavelets and the angle between past and future*, J. Funct. Anal. 143 (1997), 269-308.

[40] L. C. Young, *An inequality of the Hölder type, connected with Stieltjes integration*, Acta Math. 67 (1936), 251-282.

Earl Berkson
Department of Mathematics
University of Illinois
1409 W. Green Street
Urbana, IL 61801, U.S.A.
E-mail: berkson@math.uiuc.edu

Received January 30, 2010
Revised version July 7, 2010

(6804)
|
samples_new/texts_merged/822209.md
ADDED
|
@@ -0,0 +1,738 @@
XHX – A Framework for Optimally Secure Tweakable Block Ciphers from Classical Block Ciphers and Universal Hashing

Ashwin Jha¹, Eik List², Kazuhiko Minematsu³, Sweta Mishra⁴, and Mridul Nandi¹

¹ Indian Statistical Institute, Kolkata, India. {ashwin_r, mridul}@isical.ac.in
² Bauhaus-Universität Weimar, Weimar, Germany. eik.list@uni-weimar.de
³ NEC Corporation, Tokyo, Japan. k-minematsu@ah.jp.nec.com
⁴ IIIT, Delhi, India. swetam@iiitd.ac.in

**Abstract.** Tweakable block ciphers are important primitives for designing cryptographic schemes with high security. In the absence of a standardized tweakable block cipher, constructions built from classical block ciphers remain an interesting research topic in both theory and practice. Motivated by Mennink's $\tilde{F}[2]$ publication from 2015, Wang et al. proposed 32 optimally secure constructions at ASIACRYPT'16, all of which employ two calls to a classical block cipher each. Yet, those constructions were still limited to *n*-bit keys and *n*-bit tweaks. Thus, applications with more general key or tweak lengths still lack support. This work proposes the XHX family of tweakable block ciphers built from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys in an efficient manner from the block cipher, where we generalize the constructions by Wang et al.; finally, we propose efficient instantiations for the used hash functions.

**Keywords:** Provable security · ideal-cipher model · tweakable block cipher

# 1 Introduction

*Tweakable Block Ciphers.* In addition to the usual key and plaintext inputs of classical block ciphers, a tweakable block cipher (TBC, for short) is a cryptographic transform that takes an additional public parameter called a *tweak*. So, a tweakable block cipher $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ is a permutation on the plaintext/ciphertext space $\mathcal{M}$ for every combination of key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, where $\mathcal{K}$, $\mathcal{T}$, and $\mathcal{M}$ are assumed to be non-empty sets. Their first use in the literature was due to Schroeppel and Orman in the Hasty Pudding Cipher, where the tweak was still called *spice* [18]. Liskov, Rivest, and Wagner [11] then formalized the concept in 2002.

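A per-(key, tweak) family of independent permutations is exactly what this definition asks for. The following toy sketch (our illustration of the interface, not a real cipher) derives one permutation of {0, …, 255} per (key, tweak) pair from a seeded shuffle and checks the defining property:

```python
import random

def toy_tbc(key: int, tweak: int, n: int = 8):
    """Toy tweakable permutation on {0,...,2^n - 1}: every (key, tweak)
    pair seeds an independent shuffle. Illustrates the TBC interface only."""
    rng = random.Random(f"{key}|{tweak}")  # string seed keeps it deterministic
    perm = list(range(2 ** n))
    rng.shuffle(perm)
    inv = [0] * len(perm)
    for x, y in enumerate(perm):
        inv[y] = x                         # tabulate the inverse permutation
    return perm, inv

perm, inv = toy_tbc(key=42, tweak=7)
assert sorted(perm) == list(range(256))            # E(K, T, .) is a permutation
assert all(inv[perm[m]] == m for m in range(256))  # decryption inverts encryption
```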
In the recent past, the status of tweakable block ciphers has become more prominent, not least due to the advent of efficient dedicated constructions, such as Deoxys-BC or Joltik-BC, which were proposed alongside the TWEAKEY framework [6], or, e.g., SKINNY [1]. However, in the absence of a standard, tweakable block ciphers based on classical ones remain a highly interesting topic.

**Blockcipher-based Constructions.** Liskov et al. [11] described two constructions, known as LRW1 and LRW2. Rogaway [17] proposed XE and XEX as refinements of LRW2 for updating tweaks efficiently and reducing the number of keys. These schemes are efficient in the sense that they need one call to the block cipher plus one call to a universal hash function. Both XE and XEX are provably secure in the standard model, i.e., assuming the block cipher is a (strong) pseudorandom permutation, they are secure up to $O(2^{n/2})$ queries when using an $n$-bit block cipher. Since this bound results from the birthday paradox on input collisions, the security of those constructions is inherently limited by the birthday bound (BB-secure).

**Constructions with Stronger Security.** Constructions with beyond-birthday-bound (BBB) security have been an interesting research topic. In [13], Minematsu proposed a rekeying-based construction. Landecker, Shrimpton, and Terashima [9] analyzed the cascade of two independent LRW2 instances, called CLRW2. Both constructions are secure up to $O(2^{2n/3})$ queries, however, at the price of requiring two block-cipher calls per block plus per-tweak rekeying or plus two calls to a universal hash function, respectively.

For settings that demand stronger security, Lampe and Seurin [8] proved that the chained cascade of more instances of LRW2 could asymptotically approach a security of up to $O(2^n)$ queries, i.e., full $n$-bit security. However, the disadvantage is drastically decreased performance. An alternative direction has been initiated by Mennink [12], who also proposed TBC constructions from classical block ciphers, but proved the security in the ideal-cipher model. Mennink's constructions could achieve full $n$-bit security quite efficiently when both input and key are $n$ bits. In particular, his $\tilde{F}$[2] construction required only two block-cipher calls.

Following Mennink's work, Wang et al. [20] proposed 32 constructions of optimally secure tweakable block ciphers from classical block ciphers. Their designs share an $n$-bit key, $n$-bit tweak and $n$-bit plaintext, and linearly mix tweak, key, and the result of a second offline call to the block cipher. Their constructions have the desirable property of allowing to cache the result of the first block-cipher call; moreover, given a-priori known tweaks, some of their constructions further allow precomputing the result of the key schedule.

All constructions by Wang et al. were restricted to $n$-bit keys and tweaks. While this limit was reasonable, it did not address tweakable block ciphers with tweaks longer than $n$ bits. Such constructions, however, are useful in applications with increased security needs, such as authenticated encryption or variable-input-length ciphers (e.g., [19]). Moreover, disk-encryption schemes are typically based on wide-block tweakable ciphers, where the physical location on disk (e.g., the sector ID) is used as tweak, which can be arbitrarily long.

In general, extending the key length in the ideal-cipher model is far from trivial (see, e.g., [2,5,10]), and the key size in this model does not necessarily match the required tweak length. Moreover, many ciphers, like AES-192 or AES-256, possess key and block lengths for which the constructions in [12,20] are inapplicable. In general, the tweak represents additional data accompanying the plaintext/ciphertext block, and no general reason exists why tweaks must be limited to the block length.

Before proving the security of a construction, we have to specify the employed model. The standard model is well-established in the cryptographic community, despite the fact that proofs are based on a few unproven assumptions, such as that a block cipher is a PRP, or ignore practical side-channel attacks. In the standard model, the adversary is given access only to either the *real construction* $\tilde{E}$ or an *ideal construction* $\tilde{\pi}$. In contrast, the ideal-cipher model assumes an ideal primitive (in our case the classical ideal block cipher $E$ which is used in $\tilde{E}$) to which the adversary also has access in both worlds. Although a proof in the ideal-cipher model is no unconditional guarantee that no attacks exist when instantiated in practice [3], it allows us to abstract away the details of the primitive for the sake of focusing on the security of the construction.

A good example for TBCs proven in the standard model is XTX [14] by Minematsu and Iwata. XTX extended the tweak domain of a given tweakable block cipher $\tilde{E}: \{0,1\}^k \times \{0,1\}^t \times \{0,1\}^n \rightarrow \{0,1\}^n$ by hashing the arbitrary-length tweak to an $(n+t)$-bit value. The first $t$ bits serve as tweak and the latter $n$ bits are XORed to both input and output of $\tilde{E}$. Given an $\epsilon$-AXU family of hash functions and an ideal tweakable cipher, XTX is secure for up to $O(2^{(n+t)/2})$ queries in the standard model. However, no alternative to XTX exists in the ideal-cipher model yet.

**Contribution.** This work proposes the XHX family of tweakable block ciphers from a classical block cipher and a family of universal hash functions, which generalizes the constructions by Wang et al. [20]. Like them, the present work also uses the ideal-cipher model for its security analysis. As the major difference to their work, our proposal allows arbitrary tweak lengths and works for any block cipher of $n$-bit block and $k$-bit key. The security is guaranteed for up to $O(2^{(n+k)/2})$ queries, which yields $n$-bit security when $k \ge n$.

Our contributions in the remainder of this work are threefold: First, we detail the generic XHX construction with three independently keyed calls to the hash function. Second, we show that we can derive the hash keys in an efficient manner from the block cipher, generalizing the constructions by Wang et al.; finally, we propose efficient instantiations for the employed hash functions for concreteness.

*Remark 1.* Recently, Naito [15] proposed the XKX framework of beyond-birthday-secure tweakable block ciphers, which shares similarities with the proposal in the present work. He proposed two instances, the birthday-secure XKX<sup>(1)</sup> and the beyond-birthday-secure XKX<sup>(2)</sup>. In more detail, the nonce is processed by a block-cipher-based PRF which yields the block-cipher key for the current message; the counter is hashed with a universal hash function under a second, independent key to mask the input. In contrast to other proposals including ours, Naito's construction demands both a counter and a nonce as parameters to overcome the birthday bound; as a standalone construction, its security reduces to n/2 bits if an adversary could use the same "nonce" value for all queries. Hence, XKX<sup>(2)</sup> is tailored only to certain domains, e.g., modes of operation in nonce-based authenticated encryption schemes. Our proposal differs from XKX in four aspects: (1) we do not pose limitations on the reuse of input parameters; moreover, (2) we do not require a minimum key length of n + k bits; (3) we do not use several independent keys, but employ the block cipher to derive hashing keys; (4) finally, Naito's construction is proved in the standard model, whereas we consider the ideal-cipher model.

**Table 1:** Comparison of XHX to earlier highly secure TBCs built upon classical block ciphers. ICM(n, k) denotes the ideal-cipher model for a block cipher with n-bit block and k-bit key; BC(n, k) and TBC(n, t, k) denote the standard-model (tweakable) block cipher of n-bit block, t-bit tweak, and k-bit key. \#Enc. = \#calls to the (tweakable) block cipher, and \#Mult. = \#multiplications over GF(2^n). a (b) = b out of a calls can be precomputed with the secret key; we define s = ⌈k/n⌉.

<table><thead><tr><th>Scheme</th><th>Model</th><th>Tweak length in bit</th><th>Key length in bit</th><th>Security in bit</th><th>#Enc.</th><th>#Mult.</th><th>Reference</th></tr></thead><tbody><tr><td>F̃[2]</td><td>ICM(n,n)</td><td>n</td><td>n</td><td>n</td><td>2</td><td>–</td><td>[12]</td></tr><tr><td>Ẽ1, …, Ẽ32</td><td>ICM(n,n)</td><td>n</td><td>n</td><td>n</td><td>2 (1)</td><td>–</td><td>[20]</td></tr><tr><td>XTX</td><td>TBC(n,t,k)</td><td>any l</td><td>k + 2n</td><td>(n + t)/2</td><td>1</td><td>2⌈l/n⌉</td><td>[14]</td></tr><tr><td>XKX<sup>(2)</sup></td><td>BC(n,k)</td><td>–*</td><td>k + n</td><td>min{n, k/2}</td><td>1</td><td>1</td><td>[15]</td></tr><tr><td>XHX</td><td>ICM(n,k)</td><td>any l</td><td>k</td><td>(n + k)/2</td><td>s + 1 (s)</td><td>s⌈l/n⌉</td><td>This work</td></tr><tr><td>XHX</td><td>ICM(n,k)</td><td>2n</td><td>k</td><td>n</td><td>s + 1 (s)</td><td>s</td><td>This work</td></tr></tbody></table>

* XKX<sup>(2)</sup> employs a counter as tweak.

The remainder is structured as follows: Section 2 briefly gives the preliminaries necessary for the rest of this work. Section 3 then defines the general construction, called GXHX for simplicity, which hashes the tweak to three outputs. Section 4 continues with the definition and analysis of XHX, which derives the hashing keys from the block cipher. Section 5 describes and analyzes efficient instantiations for our hash functions depending on the tweak length. In particular, we propose instantiations for 2n-bit and arbitrary-length tweaks.

## 2 Preliminaries

**General Notation.** We use lowercase letters $x$ for indices and integers, uppercase letters $X, Y$ for binary strings and functions, and calligraphic uppercase letters $\mathcal{X}, \mathcal{Y}$ for sets. We denote the concatenation of binary strings $X$ and $Y$ by $X \parallel Y$ and the result of their bitwise XOR by $X \oplus Y$. For tuples of bit strings $(X_1, \dots, X_n)$, $(Y_1, \dots, Y_n)$ of equal domain, we denote by $(X_1, \dots, X_n) \oplus (Y_1, \dots, Y_n)$ the element-wise XOR, i.e., $(X_1 \oplus Y_1, \dots, X_n \oplus Y_n)$. We indicate the length of $X$ in bits by $|X|$ and write $X_i$ for the $i$-th block. Furthermore, we denote by $X \leftarrow \mathcal{X}$ that $X$ is chosen uniformly at random from the set $\mathcal{X}$. We define three sets of particular interest: let $\text{Func}(\mathcal{X}, \mathcal{Y})$ be the set of all functions $F : \mathcal{X} \to \mathcal{Y}$, $\text{Perm}(\mathcal{X})$ the set of all permutations $\pi : \mathcal{X} \to \mathcal{X}$, and $\text{TPerm}(\mathcal{T}, \mathcal{X})$ the set of tweaked permutations over $\mathcal{X}$ with associated tweak space $\mathcal{T}$. $(X_1, \dots, X_x) \xleftarrow{n} X$ denotes that $X$ is split into $n$-bit blocks, i.e., $X_1 \parallel \dots \parallel X_x = X$, $|X_i| = n$ for $1 \le i \le x-1$, and $|X_x| \le n$. Moreover, we define $\langle X \rangle_n$ to denote the encoding of a non-negative integer $X$ into its $n$-bit representation. Given an integer $x \in \mathbb{N}$, we define the function $\text{TRUNC}_x : \{0,1\}^* \to \{0,1\}^x$ to return the leftmost $x$ bits of the input if its length is $\ge x$, and the input itself otherwise. For two sets $\mathcal{X}$ and $\mathcal{Y}$, a uniform random function $\rho : \mathcal{X} \to \mathcal{Y}$ maps inputs $X \in \mathcal{X}$ independently from other inputs and uniformly at random to outputs $Y \in \mathcal{Y}$. For an event $E$, we denote by $\Pr[E]$ the probability of $E$. For positive integers $n$ and $k$, we denote the falling factorial as $(n)_k := \frac{n!}{(n-k)!}$.

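Read operationally, the splitting and truncation notation amounts to the following minimal sketch over bit strings represented as Python `str` (the helper names `split_blocks` and `trunc` are ours, chosen for illustration):

```python
def split_blocks(X: str, n: int):
    """(X_1, ..., X_x) <- X split into n-bit blocks: all blocks are full,
    except possibly the last one."""
    return [X[i:i + n] for i in range(0, len(X), n)]

def trunc(X: str, x: int) -> str:
    """TRUNC_x: the leftmost x bits of X if |X| >= x, otherwise X itself."""
    return X[:x] if len(X) >= x else X

assert split_blocks("1011001", 4) == ["1011", "001"]  # last block may be short
assert trunc("110101", 4) == "1101"
assert trunc("110", 4) == "110"                       # shorter inputs pass through
```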
**Adversaries.** An adversary **A** is an efficient Turing machine that interacts with a given set of oracles that appear as black boxes to **A**. We denote by $\mathbf{A}^{\mathcal{O}}$ the output of **A** after interacting with some oracle $\mathcal{O}$. We write $\Delta_{\mathbf{A}}(\mathcal{O}^1; \mathcal{O}^2) := |\Pr[\mathbf{A}^{\mathcal{O}^1} \Rightarrow 1] - \Pr[\mathbf{A}^{\mathcal{O}^2} \Rightarrow 1]|$ for the advantage of **A** to distinguish between oracles $\mathcal{O}^1$ and $\mathcal{O}^2$. All probabilities are defined over the random coins of the oracles and those of the adversary, if any. W.l.o.g., we assume that **A** never asks queries to which it already knows the answer.

A block cipher $E$ with associated key space $\mathcal{K}$ and message space $\mathcal{M}$ is a mapping $E: \mathcal{K} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$, it holds that $E(K, \cdot)$ is a permutation over $\mathcal{M}$. We define Block($\mathcal{K}, \mathcal{M}$) as the set of all block ciphers with key space $\mathcal{K}$ and message space $\mathcal{M}$. A tweakable block cipher $\tilde{E}$ with associated key space $\mathcal{K}$, tweak space $\mathcal{T}$, and message space $\mathcal{M}$ is a mapping $\tilde{E}: \mathcal{K} \times \mathcal{T} \times \mathcal{M} \rightarrow \mathcal{M}$ such that for every key $K \in \mathcal{K}$ and tweak $T \in \mathcal{T}$, it holds that $\tilde{E}(K, T, \cdot)$ is a permutation over $\mathcal{M}$. We also write $\tilde{E}_K^\mathrm{T}(\cdot)$ as short form in the remainder.

The STPRP security of $\tilde{E}$ is defined via upper bounding the advantage of a distinguishing adversary **A** in a game, where we consider the ideal-cipher model throughout this work. There, **A** has access to oracles ($\mathcal{O}, E^\pm$), where $E^\pm$ is the usual notation for access to the encryption oracle $E$ and to the decryption oracle $E^{-1}$. $\mathcal{O}$ is called construction oracle, and is either the real construction $\tilde{E}_K^\pm(\cdot, \cdot)$, or $\tilde{\pi}^\pm(\cdot, \cdot)$ for $\tilde{\pi} \leftarrow \text{TPerm}(\mathcal{T}, \mathcal{M})$. $E \leftarrow \text{Block}(\mathcal{K}, \mathcal{M})$ is an ideal block cipher underneath $\tilde{E}$. The STPRP advantage of **A** is defined as $\Delta_{\mathbf{A}}(\tilde{E}_K^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot); \tilde{\pi}^\pm(\cdot, \cdot), E^\pm(\cdot, \cdot))$, where the probabilities are taken over the random and independent choice of $K, E, \tilde{\pi}$, and the coins of **A** if any. For the remainder, we say that **A** is a ($q_C, q_P$)-distinguisher if it asks at most $q_C$ queries to its construction oracle and at most $q_P$ queries to its primitive oracle.

**Definition 1 (Almost-Uniform Hash Function).** Let $\mathcal{H}: \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions. We call $\mathcal{H}$ $\epsilon$-almost-uniform ($\epsilon$-AUniform) if, for all $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}}[\mathcal{H}(K, X) = Y] \le \epsilon$.

**Definition 2 (Almost-XOR-Universal Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ be a family of keyed hash functions with $\mathcal{Y} \subseteq \{0,1\}^*$. We say that $\mathcal{H}$ is $\epsilon$-almost-XOR-universal ($\epsilon$-AXU) if, for $K \leftarrow \mathcal{K}$, and for all distinct $X, X' \in \mathcal{X}$ and any $\Delta \in \mathcal{Y}$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = \Delta] \le \epsilon$.
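A standard instance of an $\epsilon$-AXU family is field multiplication by the key, $\mathcal{H}(K, X) := K \cdot X$ over GF($2^n$): for distinct $X, X'$ one has $\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = K \cdot (X \oplus X')$, and since $X \oplus X' \neq 0$, every $\Delta$ is hit by exactly one key, giving $\epsilon = 2^{-n}$. The sketch below (our illustration) checks this exhaustively in the toy field GF($2^4$) with reduction polynomial $x^4 + x + 1$:

```python
def gf16_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^4) with reduction polynomial x^4 + x + 1 (0b10011)."""
    r = 0
    for i in range(4):                 # carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in range(6, 3, -1):          # reduce bits 6..4 modulo the polynomial
        if (r >> i) & 1:
            r ^= 0b10011 << (i - 4)
    return r

def axu_bound() -> float:
    """Max over distinct X, X' and Delta of Pr_K[K*X ^ K*X' = Delta]."""
    worst = 0.0
    for X in range(16):
        for Xp in range(16):
            if X == Xp:
                continue
            for delta in range(16):
                hits = sum(1 for K in range(16)
                           if gf16_mul(K, X) ^ gf16_mul(K, Xp) == delta)
                worst = max(worst, hits / 16)
    return worst

assert axu_bound() == 1 / 16           # exactly 2^{-n}-AXU for n = 4
```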

Minematsu and Iwata [14] defined partial-almost-XOR-universality to capture the probability of partial output collisions.

**Definition 3 (Partial-AXU Hash Function).** Let $\mathcal{H} : \mathcal{K} \times \mathcal{X} \to \{0,1\}^n \times \{0,1\}^k$ be a family of hash functions. We say that $\mathcal{H}$ is $(n, k, \epsilon)$-partial-AXU $((n, k, \epsilon)$-pAXU) if, for $K \leftarrow \mathcal{K}$, and for all distinct $X, X' \in \mathcal{X}$ and all $\Delta \in \{0,1\}^n$, it holds that $\Pr_{K \leftarrow \mathcal{K}} [\mathcal{H}(K,X) \oplus \mathcal{H}(K,X') = (\Delta, 0^k)] \le \epsilon$.

**The H-Coefficient Technique.** The H-coefficients technique is a method due to Patarin [4,16]. It assumes that the results of the interaction of an adversary **A** with its oracles are collected in a transcript $\tau$. The task of **A** is to distinguish the real world $\mathcal{O}_{\text{real}}$ from the ideal world $\mathcal{O}_{\text{ideal}}$. A transcript $\tau$ is called *attainable* if the probability to obtain $\tau$ in the ideal world is non-zero. One assumes that **A** does not ask duplicate queries, queries prohibited by the game, or queries to which it already knows the answer. Denote by $\Theta_{\text{real}}$ and $\Theta_{\text{ideal}}$ the distributions of transcripts in the real and the ideal world, respectively. Then, the fundamental lemma of the H-coefficients technique states:

**Lemma 1 (Fundamental Lemma of the H-coefficient Technique [16]).** Assume that the set of attainable transcripts is partitioned into two disjoint sets GOODT and BADT. Further assume that there exist $\epsilon_1, \epsilon_2 \ge 0$ such that for any transcript $\tau \in$ GOODT, it holds that

$$
\frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \geq 1 - \epsilon_1, \quad \text{and} \quad \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \leq \epsilon_2.
$$

Then, for all adversaries **A**, it holds that $\Delta_{\mathbf{A}}(\mathcal{O}_{\text{real}}; \mathcal{O}_{\text{ideal}}) \le \epsilon_1 + \epsilon_2$.

The proof is given in [4,16].

## 3 The Generic GXHX Construction

Let $n, k, l \ge 1$ be integers and $\mathcal{K} = \{0,1\}^k$, $\mathcal{L} = \{0,1\}^l$, and $\mathcal{T} \subseteq \{0,1\}^*$. Let $E: \mathcal{K} \times \{0,1\}^n \rightarrow \{0,1\}^n$ be a block cipher and $\mathcal{H}: \mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^n \times \mathcal{K} \times \{0,1\}^n$ be a family of hash functions. Then, we define by GXHX[$E$, $\mathcal{H}$] : $\mathcal{L} \times \mathcal{T} \times \{0,1\}^n \rightarrow \{0,1\}^n$ the tweakable block cipher instantiated with $E$ and $\mathcal{H}$ that, for given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and message $M \in \{0,1\}^n$, computes the ciphertext $C$, as shown on the left side of Algorithm 1. Likewise, given key $L \in \mathcal{L}$, tweak $T \in \mathcal{T}$, and ciphertext $C \in \{0,1\}^n$, the plaintext $M$ is computed by $M \leftarrow$ GXHX[$E$, $\mathcal{H}]_L^{-1}(T, C)$, as shown on the right side of Algorithm 1. Clearly, GXHX[$E$, $\mathcal{H}$] is a correct and tidy tweakable permutation, i.e., for all keys $L \in \mathcal{L}$, all tweak-plaintext inputs $(T, M) \in \mathcal{T} \times \{0, 1\}^n$, and all tweak-ciphertext inputs $(T, C) \in \mathcal{T} \times \{0, 1\}^n$, it holds that

$$ \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, \text{GXHX}[E, \mathcal{H}]_L(T, M)) = M \text{ and} \\ \text{GXHX}[E, \mathcal{H}]_L(T, \text{GXHX}[E, \mathcal{H}]_L^{-1}(T, C)) = C. $$

**Fig. 1:** Schematic illustration of the encryption process of a message *M* and a tweak *T* with the general GXHX[*E*, *H*] tweakable block cipher. *E*: *K* × {0, 1}ⁿ → {0, 1}ⁿ is a keyed permutation and *H*: *L* × *T* → {0, 1}ⁿ × *K* × {0, 1}ⁿ is a keyed universal hash function.

**Algorithm 1** Encryption and decryption algorithms of the general GXHX[*E*, *H*] construction.

<table><tr><td>11: <strong>function</strong> GXHX[<i>E</i>, <i>H</i>]<sub>L</sub>(<i>T</i>, <i>M</i>)</td><td>21: <strong>function</strong> GXHX[<i>E</i>, <i>H</i>]<sub>L</sub><sup>-1</sup>(<i>T</i>, <i>C</i>)</td></tr><tr><td>12: (H<sub>1</sub>, H<sub>2</sub>, H<sub>3</sub>) ← <i>H</i>(L, T)</td><td>22: (H<sub>1</sub>, H<sub>2</sub>, H<sub>3</sub>) ← <i>H</i>(L, T)</td></tr><tr><td>13: C ← E<sub>H<sub>2</sub></sub>(M ⊕ H<sub>1</sub>) ⊕ H<sub>3</sub></td><td>23: M ← E<sub>H<sub>2</sub></sub><sup>-1</sup>(C ⊕ H<sub>3</sub>) ⊕ H<sub>1</sub></td></tr><tr><td>14: <strong>return</strong> C</td><td>24: <strong>return</strong> <i>M</i></td></tr></table>

Figure 1 illustrates the encryption process schematically.
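The correctness (tidiness) equations are easy to sanity-check in code. The sketch below is a toy model only: `ideal_cipher` plays the role of $E$ as a per-key random permutation, and `hash3` is an SHA-256-based stand-in for $\mathcal{H}$ (it is not an AXU family; all helper names are ours):

```python
import hashlib
import random

N = 8  # toy block size in bits

def ideal_cipher(key: int):
    """Per-key independent random permutation of {0,...,2^N - 1} (forward, inverse)."""
    rng = random.Random(f"E|{key}")
    perm = list(range(2 ** N))
    rng.shuffle(perm)
    inv = [0] * len(perm)
    for x, y in enumerate(perm):
        inv[y] = x
    return perm, inv

def hash3(L: int, T: int):
    """Stand-in for H(L, T) -> (H1, H2, H3)."""
    d = hashlib.sha256(f"{L}|{T}".encode()).digest()
    return d[0], int.from_bytes(d[1:3], "big"), d[3]

def gxhx_enc(L: int, T: int, M: int) -> int:
    H1, H2, H3 = hash3(L, T)
    E, _ = ideal_cipher(H2)
    return E[M ^ H1] ^ H3              # C = E_{H2}(M xor H1) xor H3

def gxhx_dec(L: int, T: int, C: int) -> int:
    H1, H2, H3 = hash3(L, T)
    _, Einv = ideal_cipher(H2)
    return Einv[C ^ H3] ^ H1           # M = E^{-1}_{H2}(C xor H3) xor H1

# decryption inverts encryption for every tweak and message
assert all(gxhx_dec(5, T, gxhx_enc(5, T, M)) == M
           for T in range(4) for M in range(256))
```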

## 4 XHX: Deriving the Hash Keys from the Block Cipher

In the following, we adapt the general GXHX construction to XHX, which differs from the former in two aspects: first, XHX splits the hash function into three functions $\mathcal{H}_1$, $\mathcal{H}_2$, and $\mathcal{H}_3$; second, since we need at least $n + k$ bits of key material for the hash functions, it derives the hash-function key from a key $K$ using the block cipher $E$. We denote by $s \ge 0$ the number of derived hash-function keys $L_i$ and collect them together with the user-given key $K \in \{0, 1\}^k$ into a vector $L := (K, L_1, \dots, L_s)$. Moreover, we define a set of variables $I_i$ and $K_i$, for $1 \le i \le s$, which denote the input and key to the block cipher $E$ for computing $L_i := E_{K_i}(I_i)$. We allow flexible, use-case-specific definitions for the values $I_i$ and $K_i$ as long as they fulfill certain properties that will be listed in Section 4.1. We redefine the key space of the hash functions to $\mathcal{L} \subseteq \{0, 1\}^k \times (\{0, 1\}^n)^s$. Note that the values $L_i$ are equal for all encryptions and decryptions and hence can be precomputed and stored for all encryptions under the same key.

**Fig. 2:** Schematic illustration of the XHX[E, $\mathcal{H}$] construction where we derive the hash-function keys $L_i$ from the block cipher E.

**Algorithm 2** Encryption and decryption algorithms of XHX where the keys are derived from the block cipher. We define $\mathcal{H} := (\mathcal{H}_1, \mathcal{H}_2, \mathcal{H}_3)$. Note that the exact definitions of $I_i$ and $K_i$ are use-case-specific.

11: **function** XHX[E, $\mathcal{H}$].KEYSETUP(K)
12: **for** i ← 1 to s **do**
13: $L_i \leftarrow E_{K_i}(I_i)$
14: $L \leftarrow (K, L_1, \dots, L_s)$
15: **return** $L$

31: **function** XHX[E, $\mathcal{H}$]$_K(T, M)$
32: $L \leftarrow$ XHX[E, $\mathcal{H}$].KEYSETUP(K)
33: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
34: $C \leftarrow E_{H_2}(M \oplus H_1) \oplus H_3$
35: **return** $C$

41: **function** XHX[E, $\mathcal{H}$]$_K^{-1}(T, C)$
42: $L \leftarrow$ XHX[E, $\mathcal{H}$].KEYSETUP(K)
43: $(H_1, H_2, H_3) \leftarrow \mathcal{H}(L, T)$
44: $M \leftarrow E_{H_2}^{-1}(C \oplus H_3) \oplus H_1$
45: **return** $M$

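The key-setup step of Algorithm 2 can be sketched as follows; since the exact choices of $I_i$ and $K_i$ are use-case-specific, the placeholders $K_i = K$ and $I_i = \langle i \rangle_n$ below are illustrative assumptions of ours:

```python
import random

def E(key: int, x: int, n: int = 8) -> int:
    """Toy ideal cipher: per-key random permutation via a seeded shuffle."""
    rng = random.Random(f"E|{key}")
    perm = list(range(2 ** n))
    rng.shuffle(perm)
    return perm[x]

def key_setup(K: int, s: int):
    """L = (K, L_1, ..., L_s) with L_i = E_{K_i}(I_i); here K_i = K and I_i = i."""
    return (K,) + tuple(E(K, i) for i in range(1, s + 1))

L = key_setup(K=99, s=2)
assert len(L) == 3 and L[0] == 99        # user key plus s derived hash keys
assert all(0 <= Li < 256 for Li in L[1:])
```

Because the $L_i$ depend only on $K$, this setup runs once per key, matching the precomputation remark above.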
*The Constructions by Wang et al.* The 32 constructions $\tilde{\mathbb{E}}[2]$ by Wang et al. are a special case of our construction with the parameters $s=1$, key length $k=n$, the inputs $I_1, K_1 \in \{0^n, K\}$, and the option $(I_1, K_1) = (0^n, 0^n)$ excluded. Their constructions compute exactly one value $L_1$ by $L_1 := E_{K_1}(I_1)$. One can easily describe their constructions in terms of the XHX framework, with three variables $X_1, X_2, X_3 \in \{K, L_1, K \oplus L_1\}$ for which it holds that $X_1 \neq X_2$ and $X_3 \neq X_2$, and which are used in XHX as follows:

$$
\begin{align*}
\mathcal{H}_1(L,T) &:= X_1, \\
\mathcal{H}_2(L,T) &:= X_2 \oplus T, \\
\mathcal{H}_3(L,T) &:= X_3.
\end{align*}
$$

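Written as code, this hash family is trivial to evaluate. The concrete assignment $X_1 = K$, $X_2 = L_1$, $X_3 = K \oplus L_1$ below is one admissible choice (it satisfies $X_1 \neq X_2$ and $X_3 \neq X_2$), picked by us for illustration:

```python
def wang_hash(K: int, L1: int, T: int):
    """H_1 = X_1, H_2 = X_2 xor T, H_3 = X_3 for one admissible (X_1, X_2, X_3):
    the illustrative choice X_1 = K, X_2 = L_1, X_3 = K xor L_1."""
    X1, X2, X3 = K, L1, K ^ L1
    return X1, X2 ^ T, X3

H1, H2, H3 = wang_hash(K=0b1010, L1=0b0110, T=0b0011)
assert (H1, H2, H3) == (0b1010, 0b0101, 0b1100)
```

Only $\mathcal{H}_2$ depends on the tweak here, which reflects the $n$-bit tweak limit of these constructions; the generic construction instead hashes the tweak into all three outputs.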
## 4.1 Security Proof of XHX

This section concerns the security of the XHX construction in the ideal-cipher model where the hash-function keys are derived by the (ideal) block cipher E.

**Properties of $\mathcal{H}$.** For our security analysis, we list a set of properties that we require for $\mathcal{H}$. We assume that $L$ is sampled uniformly at random from $\mathcal{L}$. To address parts of the output of $\mathcal{H}$, we also use the notation $\mathcal{H}_i : \mathcal{L} \times \mathcal{T} \to \{0,1\}^{o_i}$ to refer to the function that computes the $i$-th output of $\mathcal{H}(L,T)$, for $1 \le i \le 3$, with $o_1 := n$, $o_2 := k$, and $o_3 := n$. Moreover, we define $\mathcal{H}_{1,2}(T) := (\mathcal{H}_1(L,T), \mathcal{H}_2(L,T))$ and $\mathcal{H}_{3,2}(T) := (\mathcal{H}_3(L,T), \mathcal{H}_2(L,T))$.

**Property P1.** For all distinct $T, T' \in \mathcal{T}$ and all $\Delta \in \{0,1\}^n$, it holds that
|
| 164 |
+
|
| 165 |
+
$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) \oplus \mathcal{H}_{i,2}(T') = (\Delta, 0^k)] \le \epsilon_1. $$

**Property P2.** For all $T \in \mathcal{T}$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that

$$ \max_{i \in \{1,3\}} \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{i,2}(T) = (c_1, c_2)] \le \epsilon_2. $$

Note that Property P1 is equivalent to saying $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $(n, k, \epsilon_1)$-pAXU; Property P2 is equivalent to the statement that $\mathcal{H}_{1,2}$ and $\mathcal{H}_{3,2}$ are $\epsilon_2$-AUniform. Clearly, it must hold that $\epsilon_1, \epsilon_2 \ge 2^{-(n+k)}$.
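To make such universality notions concrete, the following brute-force check over the toy field GF(2^4) (an assumption of this sketch; the paper works over GF(2^n) for cipher-sized n) verifies that the keyed hash $H_K(T) := K \cdot T$ is exactly $2^{-4}$-AXU: every output difference $\Delta$ is reached by exactly one key.

```python
# Carry-less multiplication in GF(2^4) with reduction polynomial x^4 + x + 1.
def gf16_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:          # reduce modulo x^4 + x + 1 (0b10011)
            a ^= 0x13
        b >>= 1
    return r

# For H_K(T) := K * T and any fixed T != T' and Delta, count the keys K
# with H_K(T) xor H_K(T') == Delta; the maximum count over all choices is 1,
# i.e., the collision probability is 1/16.
worst = 0
for t in range(16):
    for t2 in range(16):
        if t == t2:
            continue
        for delta in range(16):
            hits = sum(1 for k in range(16)
                       if gf16_mul(k, t) ^ gf16_mul(k, t2) == delta)
            worst = max(worst, hits)
print(worst)  # -> 1
```

The reason is that $K \cdot T \oplus K \cdot T' = K \cdot (T \oplus T')$, and multiplication by a fixed nonzero element is a bijection on the field.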

**Property P3.** For all $T \in \mathcal{T}$, all chosen $I_i, K_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that

$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{1,2}(T) \oplus (I_i, K_i) = (\Delta, 0^k)] \le \epsilon_3. $$

**Property P4.** For all $T \in \mathcal{T}$, all chosen $K_i, L_i$, for $1 \le i \le s$, and all $\Delta \in \{0,1\}^n$, it holds that

$$ \Pr_{L \leftarrow \mathcal{L}} [\mathcal{H}_{3,2}(T) \oplus (L_i, K_i) = (\Delta, 0^k)] \le \epsilon_4. $$

Properties P3 and P4 represent the probabilities that an adversary's query hits the inputs that have been chosen for computing a hash-function key. We list a further property which gives the probability that a set of constants chosen by the adversary hits the values $I_i$ and $K_i$ used for generating the keys $L_i$:

**Property P5.** For $1 \le i \le s$, and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$, it holds that

$$ \Pr_{K \leftarrow \mathcal{K}} [(I_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$

In other words, the tuples $(I_i, K_i)$ contain close to $n$ bits of entropy and cannot be predicted by an adversary with significantly higher probability; i.e., $\epsilon_5$ should not be larger than a small multiple of $1/2^n$. From Property P5 and the fact that the values $L_i$ are computed as $E_{K_i}(I_i)$ with an ideal cipher $E$, it follows for $1 \le i \le s$ and all $(c_1, c_2) \in \{0,1\}^n \times \{0,1\}^k$ that

$$ \Pr_{K \leftarrow \mathcal{K}} [(L_i, K_i) = (c_1, c_2)] \le \epsilon_5. $$
**Fig. 3:** Schematic illustration of the oracles available to **A**.

**Theorem 1.** Let $E \leftarrow$ Block($\mathcal{K}$, $\{0,1\}^n$) be an ideal cipher. Further, let $\mathcal{H}_i : \mathcal{L} \times \mathcal{T} \rightarrow \{0,1\}^{o_i}$, for $1 \le i \le 3$, be families of hash functions for which Properties P1 through P4 hold, and let $K \leftarrow \mathcal{K}$. Moreover, let Property P5 hold for the choice of all $I_i$ and $K_i$. Let $s$ denote the number of keys $L_i$, $1 \le i \le s$. Let **A** be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}]_K$. Then

$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$
*Proof Idea.* The proof of Theorem 1 follows from Lemmas 1, 2, and 3. Those can be found in Appendix A. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. Figure 3 illustrates the oracles available to **A**. The queries by **A** are collected in a transcript $\tau$. We will define a series of bad events that can happen during the interaction of **A** with its oracles:

- Collisions between two construction queries,
- Collisions between a construction and a primitive query,
- Collisions between two primitive queries,
- The case that the adversary finds an input-key tuple in either a primitive or construction query that was used to derive a key $L_i$.

The proof will bound the probability that these events occur in the transcript in Lemma 2. We define a transcript as **bad** if it satisfies at least one such **bad** event, and define BADT as the set of all attainable **bad** transcripts.

**Lemma 2.** It holds that

$$ \Pr[\Theta_{\text{ideal}} \in \text{BADT}] \le q_C^2\epsilon_1 + 2q_P q_C \epsilon_2 + q_C s(\epsilon_3 + \epsilon_4) + 2q_P s \epsilon_5 + \frac{s^2}{2^{n+1}}. $$

The proof is given in Appendix A.1.

**Good Transcripts.** Above, we have considered **bad** events. In contrast, we define GOODT as the set of all good transcripts, i.e., all attainable transcripts that are *not* bad.

**Lemma 3.** Let $\tau \in \text{GOODT}$ be a good transcript. Then

$$ \frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \ge 1. $$

The full proof can be found in Appendix A.2.
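The two lemmas are combined via the H-coefficients technique of Patarin [16] (see also [4]), which in its common form bounds the distinguishing advantage as

$$ \Delta_{\mathbf{A}} \le \Pr[\Theta_{\text{ideal}} \in \text{BADT}] + \max_{\tau \in \text{GOODT}} \left( 1 - \frac{\Pr[\Theta_{\text{real}} = \tau]}{\Pr[\Theta_{\text{ideal}} = \tau]} \right). $$

Since Lemma 3 shows that the ratio is at least $1$ for every good transcript, the second term vanishes, and the bound of Theorem 1 is exactly the bound of Lemma 2.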

**Algorithm 3** The universal hash function $\mathcal{H}^*$.

- **Case k < n.** In this case, we could simply truncate $H_2$ from $n$ to $k$ bits. Theoretically, we could derive a longer key from $K$ for the computation of $H_1$ and $H_3$; however, we disregard this case since ciphers with smaller key size than state length are very uncommon.
- **Case k > n.** In the third case, we truncate the hash key $K$ for the computation of $H_1$ and $H_3$ to $n$ bits. Moreover, we derive $s$ hashing keys $L_1, \dots, L_s$ from the block cipher $E$. For $H_2$, we concatenate the outputs of $s$ instances of $\mathcal{F}$. This construction is well-known to be $\epsilon^s(m)$-pAXU if $\mathcal{F}$ is $\epsilon(m)$-pAXU. Finally, we truncate the result to $k$ bits if necessary.

**Lemma 4.** $\mathcal{H}^*$ is $2^{sn-k}\epsilon^{s+1}(m)$-pAXU and $2^{sn-k}\rho^{s+1}(m)$-Uniform. Moreover, it satisfies Properties P3 and P4 with probability $2^{sn-k}\rho^{s+1}(m)$ each, and Property P5 with $\epsilon_5 \le 2/2^k$ for our choice of the values $I_i$ and $K_i$.

*Remark 2.* The term $2^{sn-k}$ results from the potential truncation of $H_2$ if the key length $k$ of the block cipher is not a multiple of the state size $n$. $H_2$ is computed by concatenating the results of multiple independent invocations of a polynomial hash function $\mathcal{F}$ in $\text{GF}(2^n)$ under assumed independent keys. Clearly, if $\mathcal{F}$ is $\epsilon$-AXU, then their $sn$-bit concatenation is $\epsilon^s$-AXU. However, after truncating $sn$ to $k$ bits, we may lose information, which results in the factor of $2^{sn-k}$. For the case $k=n$, it follows that $s=1$, and the terms $2^{sn-k}\epsilon^{s+1}(m)$ and $2^{sn-k}\rho^{s+1}(m)$ simplify to $\epsilon^2(m)$ and $\rho^2(m)$, respectively.
Our instantiation of $\mathcal{F}$ has $\epsilon(m) = \rho(m) = (m+2)/2^n$. Before we prove Lemma 4, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^*$.

**Corollary 1.** Let $E$ and XHX[$E, \mathcal{H}^*$] be defined as in Theorem 1, where the length of any tweak is limited to at most $m$ $n$-bit blocks. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E, \mathcal{H}^*$]. Then

$$ \Delta_{\mathbf{A}}(\text{XHX}[E, \mathcal{H}^*], E^\pm; \tilde{\pi}^\pm, E^\pm) \le \frac{(q_C^2+2q_Cq_P+2q_Cs)(m+2)^{s+1}}{2^{n+k}} + \frac{4q_P s}{2^k} + \frac{s^2}{2^{n+1}}. $$
The proof of the corollary stems from the combination of Lemma 4 with Theorem 1 and can be omitted.
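For intuition about the magnitude of this bound, it can be evaluated numerically; the parameter choices below are illustrative assumptions, not taken from the paper.

```python
from fractions import Fraction

def corollary1_bound(n, k, s, m, q_C, q_P):
    """Exact evaluation of the right-hand side of Corollary 1."""
    term1 = Fraction((q_C ** 2 + 2 * q_C * q_P + 2 * q_C * s)
                     * (m + 2) ** (s + 1), 2 ** (n + k))
    term2 = Fraction(4 * q_P * s, 2 ** k)
    term3 = Fraction(s ** 2, 2 ** (n + 1))
    return term1 + term2 + term3

# Illustrative parameters: n = k = 128 (hence s = 1), 2-block tweaks,
# and 2^64 construction and primitive queries each.
adv = corollary1_bound(n=128, k=128, s=1, m=2, q_C=2 ** 64, q_P=2 ** 64)
# The result is about 2^{-62}, dominated by the 4*q_P*s/2^k term -- far
# below the advantage ~1 that a birthday-bound construction admits at
# 2^64 queries for n = 128.
```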

*Proof of Lemma 4.* In the following, we assume that $T, T' \in \{0, 1\}^*$ are distinct tweaks of at most $m$ blocks each. Again, we consider the pAXU property first.

**Partial Almost-XOR-Universality.** This is the probability that for any $\Delta \in \{0, 1\}^n$:

$$
\begin{align*}
& \Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) \oplus (\mathcal{F}_{K'}(T'), \mathcal{F}_{L_1, \dots, L_s}(T')) = (\Delta, 0^n)] \\
&= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) \oplus \mathcal{F}_{K'}(T') = \Delta, \mathcal{F}_{L_1, \dots, L_s}(T) \oplus \mathcal{F}_{L_1, \dots, L_s}(T') = 0^n] \\
&\le 2^{sn-k} \cdot \epsilon^{s+1}(m).
\end{align*}
$$
We assume independent hashing keys $K', L_1, \dots, L_s$ here. When $k=n$, it holds that $s=1$, and this probability is upper bounded by $\epsilon^2(m)$ since $\mathcal{F}$ is $\epsilon(m)$-AXU. In the case $k>n$, we compute $s$ words of $H_2$ that are concatenated and truncated to $k$ bits. Hence, $\mathcal{F}_{L_1, \dots, L_s}$ is $2^{sn-k} \cdot \epsilon^s(m)$-AXU. In combination with the AXU bound for $\mathcal{F}_{K'}$, we obtain the pAXU bound for $\mathcal{H}^*$ above.

**Almost-Uniformity.** Here, for any $(\Delta_1, \Delta_2) \in \{0,1\}^n \times \{0,1\}^k$, it shall hold

$$
\begin{align*}
\Pr_{L \leftarrow \mathcal{L}} [(\mathcal{F}_{K'}(T), \mathcal{F}_{L_1, \dots, L_s}(T)) = (\Delta_1, \Delta_2)] &= \Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta_1, \mathcal{F}_{L_1, \dots, L_s}(T) = \Delta_2] \\
&\le 2^{sn-k} \cdot \rho^{s+1}(m)
\end{align*}
$$

since $\mathcal{F}$ is $\rho(m)$-AUniform, and using a similar argumentation for the cases $k=n$ and $k>n$ as for partial almost-XOR-universality.

**Property P3.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P3 is equivalent to

$$
\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = \Delta \oplus I_i, \mathcal{F}_{L_1, \dots, L_s}(T) = K]
$$

for a fixed $1 \le i \le s$. Here, the latter equality corresponds to almost uniformity; hence, its probability is at most $2^{sn-k} \cdot \rho^s(m)$. The probability of the former equality is at most $\rho(m)$ since the property considers a fixed $i$. Since we assume independence of $K$ and $L_1, \dots, L_s$, it holds that $\epsilon_3 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.

**Property P4.** For all $T \in \mathcal{T}$ and $\Delta \in \{0,1\}^n$, Property P4 is equivalent to

$$
|
| 284 |
+
\Pr_{L \leftarrow \mathcal{L}} [\mathcal{F}_{K'}(T) = (\Delta \oplus L_i), \mathcal{F}_{L_1, \dots, L_s}(T) = K]
|
| 285 |
+
$$

for a fixed $1 \le i \le s$. Using a similar argumentation as for Property P3, the probability is upper bounded by $\epsilon_4 \le 2^{sn-k} \cdot \rho^{s+1}(m)$.

**Property P5.** We derive the hashing keys $L_i$ with the help of $E$ and the secret key $K$. In the simple case $s=1$, the probability that the adversary can guess any tuple $(I_i, K_i)$ that is used to derive the hashing keys $L_i$, or any tuple $(L_i, K_i)$, is at most $1/2^k$. Under the reasonable assumption $s < 2^{k-1}$, the probability for fixed $i$ becomes in the general case:

$$
\Pr_{K \leftarrow \mathcal{K}} [ (I_i, K_i) = (c_1, c_2) ] \leq \frac{1}{2^k - s} \leq \frac{2}{2^k}.
$$

A similar argument holds for the probability that the adversary can guess any tuple $(L_i, K_i)$, for $1 \le i \le s$. Hence, it holds for $\mathcal{H}^*$ that $\epsilon_5 \le 2/2^k$.

$\epsilon(m)$ **and** $\rho(m)$. It remains to determine $\epsilon(m)$ and $\rho(m)$ for our instantiation of $\mathcal{F}_K(\cdot)$. It maps tweaks $T = T_1, \dots, T_m$ to the result of

$$
\left( \bigoplus_{i=1}^{m} T_i \cdot K^{m+3-i} \right) \oplus (\|T\|_n \cdot K) \oplus K.
$$
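Assuming field arithmetic is available, $\mathcal{F}_K$ can be evaluated with Horner's rule; the sketch below works over the toy field GF(2^8) rather than the GF(2^n) of an actual instantiation, and the helper names are assumptions of this sketch.

```python
# Carry-less multiplication in the toy field GF(2^8),
# reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def F(key: int, blocks: list, bitlen: int) -> int:
    """F_K(T) = (XOR_{i=1}^m T_i * K^{m+3-i}) xor (|T|_n * K) xor K,
    with the polynomial part evaluated by Horner's rule."""
    acc = 0
    for t in blocks:             # after the loop: acc = XOR_i T_i * K^{m-i}
        acc = gf_mul(acc, key) ^ t
    for _ in range(3):           # multiply by K^3, giving exponents m+3-i
        acc = gf_mul(acc, key)
    return acc ^ gf_mul(bitlen % 256, key) ^ key
```

Distinct equal-length tweaks differ in at least one polynomial coefficient, so for a nonzero key their digests differ unless the key is a root of the difference polynomial, which has degree at most $m+2$.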
**Algorithm 4** The universal hash function $\mathcal{H}^2$.

11: **function** $\mathcal{H}_L^2(T)$
12: $(K, L_1, \dots, L_s) \leftarrow L$
13: $(T_1, T_2) \stackrel{n}{\leftarrow} T$
14: $K' \leftarrow \text{TRUNC}_n(K)$
15: $H_1 \leftarrow T_1 \boxdot K'$
16: $H_2 \leftarrow \text{TRUNC}_k (\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$
17: $H_3 \leftarrow T_1 \boxdot K'$
18: **return** $(H_1, H_2, H_3)$

21: **function** $\mathcal{F}_{L_i}(T_1 \parallel T_2)$
22: **return** $(T_1 \boxdot L_i) \oplus T_2$

This is a polynomial of degree at most $m+2$, which is $(m+2)/2^n$-AXU. Moreover, it has no fixed points; for every $\Delta \in \{0, 1\}^n$ and any fixed choice of the $m$ blocks $T_1, \dots, T_m$, there are at most $m+2$ out of $2^n$ values for the block $T_{m+1}$ that fulfill $\mathcal{F}_K(T) = \Delta$. Hence, $\mathcal{F}$ is also $(m+2)/2^n$-AUniform. $\square$

$\mathcal{H}^*$ is a general construction which supports arbitrary tweak lengths. However, if we used $\mathcal{H}^*$ for $2n$-bit tweaks, we would need four Galois-field multiplications. We can hash more efficiently in this case, in fact optimally in terms of the number of multiplications. For this purpose, we define $\mathcal{H}^2$.

**$\mathcal{H}^2$ - A Hash Function for 2n-bit Tweaks.** Naively, for two-block tweaks $|T| = 2n$, an $\epsilon$-pAXU construction with $\epsilon \approx 1/2^{2n}$ could be achieved by simply multiplying the tweak with a key $L$ sampled uniformly from $\mathrm{GF}(2^{2n})$. However, we can realize a similarly secure construction more efficiently with two multiplications over the smaller field $\mathrm{GF}(2^n)$. Additional conditions, such as uniformity, are satisfied by introducing squaring in the field to avoid the fixed points of multiplication-based universal hash functions. Following the notation from the previous sections, let $L = (K, L_1)$ be the $2n$-bit key of our hash function. For $X, Y \in \mathrm{GF}(2^n)$, we define the operation $\boxdot : \mathrm{GF}(2^n) \times \mathrm{GF}(2^n) \to \mathrm{GF}(2^n)$ as
$$ X \boxdot Y := \begin{cases} X \cdot Y & \text{if } X \neq 0 \\ Y^2 & \text{otherwise.} \end{cases} $$
We assume a common encoding between the bit space and $\mathrm{GF}(2^n)$, i.e., a polynomial in the field is represented as its coefficient vector, e.g., the all-zero vector denotes the zero element $0$, and the bit string $(0 \dots 01)$ denotes the identity element. Hereafter, we write $X$ interchangeably as an element of $\mathrm{GF}(2^n)$ or of $\{0, 1\}^n$. For $\mathcal{L} = (\{0,1\}^n)^2$, $\mathcal{X} = (\{0,1\}^n)^2$, and $\mathcal{Y} = \{0,1\}^n \times \{0,1\}^k \times \{0,1\}^n$, the construction $\mathcal{H}^2 : \mathcal{L} \times \mathcal{X} \to \mathcal{Y}$ is defined in Algorithm 4. We note that the usage of keys has been chosen carefully; e.g., a swap of $K$ and $L_1$ in $\mathcal{H}^2$ would invalidate Property P4.
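As a complement to Algorithm 4, here is a minimal runnable sketch of $\boxdot$ and $\mathcal{H}^2$; the toy field GF(2^8) and the restriction to $s = 1$, $k = n$ are assumptions of this sketch, not choices made in the paper.

```python
# Carry-less multiplication in the toy field GF(2^8) (the paper uses GF(2^n)).
def gf_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B   # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return r

def boxdot(x: int, y: int) -> int:
    """x [boxdot] y := x*y if x != 0, else y^2 (removes the fixed point at 0)."""
    return gf_mul(x, y) if x != 0 else gf_mul(y, y)

def hash_h2(K: int, L1: int, T1: int, T2: int):
    """H^2 from Algorithm 4 for the special case s = 1 and k = n = 8,
    so that TRUNC_n and TRUNC_k are the identity."""
    K_prime = K
    H1 = boxdot(T1, K_prime)
    H2 = boxdot(T1, L1) ^ T2   # F_{L1}(T1 || T2) = (T1 [boxdot] L1) xor T2
    H3 = boxdot(T1, K_prime)
    return H1, H2, H3
```

Note that $T_1 = 0^n$ and $T_1 = K'$ both map $H_1$ to $K'^2$, which is exactly the two-preimage case distinguished in the proof of Lemma 5 below.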
**Lemma 5.** $\mathcal{H}^2$ is $2^{s+1}/2^{n+k}$-pAXU, $2^s/2^{n+k}$-AUniform, satisfies Properties P3 and P4 with probability $2/2^{n+k}$ each, and Property P5 with $\epsilon_5 = s/2^n$ for our choices of $I_i$ and $K_i$, for $1 \le i \le s$.

Before proving Lemma 5, we derive from it the following corollary for XHX when instantiated with $\mathcal{H}^2$.

**Corollary 2.** Let $E$ and XHX[$E$, $\mathcal{H}^2$] be defined as in Theorem 1. Moreover, let $K \leftarrow \mathcal{K}$. Let $\mathbf{A}$ be a $(q_C, q_P)$-distinguisher on XHX[$E$, $\mathcal{H}^2$]$_K$. Then

$$ \Delta_{\mathbf{A}}(\mathrm{XHX}[E, \mathcal{H}^2], E^{\pm}; \tilde{\pi}^{\pm}, E^{\pm}) \le \frac{2^{s+2}q_C^2 + 2^{s+1}q_Cq_P + 4q_Cs}{2^{n+k}} + \frac{2q_Ps^2}{2^n} + \frac{s^2}{2^{n+1}} $$
Again, the proof of the corollary stems from the combination of Lemma 5 with Theorem 1 and can be omitted.
*Proof of Lemma 5.* Since $H_1$ and $H_3$ are computed identically, we can restrict the analysis of the properties of $\mathcal{H}^2$ to only the outputs $(H_1, H_2)$. Note that $K$ and $L_1$ are independent. In the following, we denote the hash-function results for some tweak $T$ as $H_1, H_2, H_3$, and those for some tweak $T' \ne T$ as $H'_1, H'_2, H'_3$. Moreover, we denote the $n$-bit words of $H_2$ as $(H_2^1, \dots, H_2^s)$, and those of $H'_2$ as $(H_2'^1, \dots, H_2'^s)$.
**Partial Almost-XOR-Universality.** First, let us consider the pAXU property. It holds that $H_1 := T_1 \boxdot K'$ and $H_2 := \text{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T))$. Considering $H_1$, it must hold that $H'_1 = H_1 \oplus \Delta$, with
$$ \Delta = (T'_1 \boxdot K') \oplus (T_1 \boxdot K'). $$
For any $X \ne 0^n$, it is well-known that $X \boxdot Y$ is $1/2^n$-AXU. So, for any fixed $T_1$ and fixed $\Delta \in \{0, 1\}^n$, there is exactly one value $T'_1$ that fulfills the equation if $H'_1 \ne K' \boxdot K'$, and exactly two values if $H'_1 = K' \boxdot K'$, namely $T'_1 \in \{0^n, K'\}$. So
$$ \Pr_{K \leftarrow \{0,1\}^k} [ (T_1 \boxdot K') \oplus (T'_1 \boxdot K') = \Delta ] \le 2/2^n. $$
The argumentation for $H_2$ is similar. The probability that any $L_i = 0^n$, for fixed $1 \le i \le s$, is at most $1/(2^n - s + 1)$, which is smaller than the probability of $H_2^i = H_2'^i$. So, in the remainder, we can concentrate on the case that all $L_i \ne 0^n$. W.l.o.g., we focus on the first word of $H_2$, i.e., $H_2^1$, in the following. For fixed $(T_1, T_2)$, $H_2'^1$, and $T'_2$, there is exactly one value $T'_1$ s.t. $H_2^1 = H_2'^1$ if $H_2'^1 \ne (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 := T_1 \oplus (T_2 \oplus T'_2) \cdot L^{-1}_1$. There exist exactly two values $T'_1$ if $H_2'^1 = (L_1 \boxdot L_1) \oplus T'_2$, namely $T'_1 \in \{0^n, L_1\}$. Hence, it holds that
$$ \Pr_{L_1 \leftarrow \{0,1\}^n} [H_2^1 = H_2'^1] \le 2/2^n. $$
The same argumentation follows for $H_2^i = H_2'^i$, for $2 \le i \le s$, since the keys $L_i$ are pairwise independent. Since the $sn$ bits of $H_2$ and $H'_2$ are truncated if $k$ is not a multiple of $n$, the bound has to be multiplied by $2^{sn-k}$. With the factor of $2/2^n$ for $H_1$, it follows for fixed $\Delta \in \{0, 1\}^n$ that $\mathcal{H}^2$ is $\epsilon$-pAXU for $\epsilon$ upper bounded by
$$ \frac{2}{2^n} \cdot 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^{s+1}}{2^{n+k}}. $$
**Almost-Uniformity.** Here, we consider the probability for any $H_1$ and $H_2$:
$$ \mathrm{Pr}_{L \leftarrow \mathcal{L}} [T_1 \boxdot K' = H_1, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = H_2]. $$
If $K' = 0^n$ and $H_1 = 0^n$, then the first equation may be fulfilled for any $T_1$. However, the probability of $K' = 0^n$ is $1/2^n$. So, we can assume $K' \neq 0^n$ in the remainder. Next, we focus again on the first word of $H_2$, i.e., $H_2^1$. For fixed $L_1$ and $H_2^1$, there exist at most two values $(T_1, T_2)$ that fulfill $(T_1 \boxdot L_1) \oplus T_2 = H_2^1$. In the case $H_1 \neq K' \boxdot K'$, there is exactly one value $T_1 := H_1 \cdot K'^{-1}$ that yields $H_1$. Then, $T_1$, $L_1$, and $H_2^1$ determine $T_2 := H_2^1 \oplus (T_1 \boxdot L_1)$ uniquely. In the opposite case $H_1 = K' \boxdot K'$, there exist exactly two values $(T_1, T'_1)$ that yield $H_1$, namely $0^n$ and $K'$. Each of those determines $T_2$ uniquely. The probability that the so-fixed values $T_1, T_2$ also yield $H_2^2, \dots, H_2^s$ is at most $(2/2^n)^{s-1}$ if $k$ is a multiple of $n$, since the keys $L_i$ are pairwise independent; if $k$ is not a multiple of $n$, we have again an additional factor of $2^{sn-k}$ from the truncation. So, $\mathcal{H}^2$ is $\epsilon$-AUniform for $\epsilon$ at most
$$ 2^{sn-k} \cdot \left( \frac{2}{2^n} \right)^s = \frac{2^s}{2^{n+k}}. $$
**Property P3.** Given $I_i = \langle i - 1 \rangle$ and $K_i = K$, for $1 \le i \le s$, $\epsilon_3$ is the probability that a chosen $(T_1, T_2)$ yields $\mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i - 1 \rangle, \mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = K]$, for some $i$. This can be rewritten as

$$
\begin{aligned}
& \mathrm{Pr}[T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle] \\
& \quad \cdot \mathrm{Pr}[\mathrm{TRUNC}_k(\mathcal{F}_{L_1}(T) \parallel \dots \parallel \mathcal{F}_{L_s}(T)) = K \mid T_1 \boxdot K' = \Delta \oplus \langle i-1 \rangle].
\end{aligned}
$$
For fixed $\Delta \neq K' \boxdot K'$, there is exactly one value $T_1$ that satisfies the first part of the equation; there are exactly two values $T_1$ if $\Delta = K' \boxdot K'$. Moreover, $K'$ is secret; so, these values $T_1$ require that the adversary guesses $K'$ correctly. Given fixed $T_1$, $\Delta$, and $K'$, there is exactly one value $T_2$ that matches the first $n$ bits of $K$: $T_2 := (T_1 \boxdot L_1) \oplus K[k-1..k-n]$. The remaining bits of $K$ are matched with probability $2^{sn-k}/2^{(s-1)n}$, assuming that the keys $L_i$ are independent. Hence, it holds that $\epsilon_3$ is at most
$$ \frac{2}{2^n} \cdot \frac{2^{sn-k}}{2^{sn}} = \frac{2}{2^{n+k}}. $$
**Property P4.** This follows from a similar argumentation as for Property P3. Hence, it holds that $\epsilon_4 \le 2/2^{n+k}$. $\square$
**Acknowledgments.** This work was initiated during the group sessions of the 6th Asian Workshop on Symmetric Cryptography (ASK 2016) held in Nagoya. We thank the anonymous reviewers of ToSC 2017 and Latincrypt 2017 for their fruitful comments. We thank Ashwin Jha and Mridul Nandi for their remark in [7], wherein they pointed us to a subtle error in our formulation of Fact 1 that has been corrected in this version of 08 March 2021. As they noted, our proof of Lemma 3 implicitly used a special case of compressing sequences, where the fact already held. Therefore, our proof was only slightly augmented to point this out, but does not change otherwise.
## References
1. Christof Beierle, Jérémy Jean, Stefan Kölbl, Gregor Leander, Amir Moradi, Thomas Peyrin, Yu Sasaki, Pascal Sasdrich, and Siang Meng Sim. The SKINNY Family of Block Ciphers and Its Low-Latency Variant MANTIS. In Matthew Robshaw and Jonathan Katz, editors, *CRYPTO II*, volume 9815 of *Lecture Notes in Computer Science*, pages 123–153. Springer, 2016.
2. Mihir Bellare and Phillip Rogaway. The Security of Triple Encryption and a Framework for Code-Based Game-Playing Proofs. In Serge Vaudenay, editor, *EUROCRYPT*, volume 4004 of *Lecture Notes in Computer Science*, pages 409–426. Springer, 2006.
3. John Black. The Ideal-Cipher Model, Revisited: An Uninstantiable Blockcipher-Based Hash Function. In Matthew J. B. Robshaw, editor, *FSE*, volume 4047 of *Lecture Notes in Computer Science*, pages 328–340. Springer, 2006.
4. Shan Chen and John P. Steinberger. Tight Security Bounds for Key-Alternating Ciphers. In Phong Q. Nguyen and Elisabeth Oswald, editors, *EUROCRYPT*, volume 8441 of *Lecture Notes in Computer Science*, pages 327–350. Springer, 2014.
5. Peter Gazi and Ueli M. Maurer. Cascade Encryption Revisited. In Mitsuru Matsui, editor, *ASIACRYPT*, volume 5912 of *Lecture Notes in Computer Science*, pages 37–51. Springer, 2009.
6. Jérémy Jean, Ivica Nikolic, and Thomas Peyrin. Tweaks and Keys for Block Ciphers: The TWEAKEY Framework. In Palash Sarkar and Tetsu Iwata, editors, *ASIACRYPT (2)*, volume 8874 of *Lecture Notes in Computer Science*, pages 274–288, 2014.
7. Ashwin Jha and Mridul Nandi. Tight security of cascaded LRW2. *J. Cryptol.*, 33(3):1272–1317, 2020.
8. Rodolphe Lampe and Yannick Seurin. Tweakable Blockciphers with Asymptotically Optimal Security. In Shiho Moriai, editor, *FSE*, volume 8424 of *Lecture Notes in Computer Science*, pages 133–151. Springer, 2013.
9. Will Landecker, Thomas Shrimpton, and R. Seth Terashima. Tweakable blockciphers with beyond birthday-bound security. In Reihaneh Safavi-Naini and Ran Canetti, editors, *CRYPTO*, volume 7417 of *Lecture Notes in Computer Science*, pages 14–30. Springer, 2012.
10. Jooyoung Lee. Towards Key-Length Extension with Optimal Security: Cascade Encryption and XOR-cascade Encryption. In Thomas Johansson and Phong Q. Nguyen, editors, *EUROCRYPT*, volume 7881 of *Lecture Notes in Computer Science*, pages 405–425. Springer, 2013.
11. Moses Liskov, Ronald L. Rivest, and David Wagner. Tweakable Block Ciphers. In Moti Yung, editor, *CRYPTO*, volume 2442 of *Lecture Notes in Computer Science*, pages 31–46. Springer, 2002.
12. Bart Mennink. Optimally Secure Tweakable Blockciphers. In Gregor Leander, editor, *FSE*, volume 9054 of *Lecture Notes in Computer Science*, pages 428–448. Springer, 2015.
13. Kazuhiko Minematsu. Beyond-Birthday-Bound Security Based on Tweakable Block Cipher. In Orr Dunkelman, editor, *FSE*, volume 5665 of *Lecture Notes in Computer Science*, pages 308–326. Springer, 2009.
14. Kazuhiko Minematsu and Tetsu Iwata. Tweak-Length Extension for Tweakable Blockciphers. In Jens Groth, editor, *IMA Int. Conf.*, volume 9496 of *Lecture Notes in Computer Science*, pages 77–93. Springer, 2015.
15. Yusuke Naito. Tweakable Blockciphers for Efficient Authenticated Encryptions with Beyond the Birthday-Bound Security. *IACR Transactions on Symmetric Cryptology*, 2017(2):1–26, 2017.
16. Jacques Patarin. The "Coefficients H" Technique. In Roberto Maria Avanzi, Liam Keliher, and Francesco Sica, editors, *SAC*, volume 5381 of *Lecture Notes in Computer Science*, pages 328–345. Springer, 2008.
17. Phillip Rogaway. Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes OCB and PMAC. In *ASIACRYPT*, volume 3329 of *Lecture Notes in Computer Science*, pages 16–31. Springer, 2004.
18. Richard Schroeppel and Hilarie Orman. The Hasty Pudding Cipher. *AES candidate submitted to NIST*, 1998.
19. Thomas Shrimpton and R. Seth Terashima. A Modular Framework for Building Variable-Input-Length Tweakable Ciphers. In Kazue Sako and Palash Sarkar, editors, *ASIACRYPT (1)*, volume 8269 of *Lecture Notes in Computer Science*, pages 405–423. Springer, 2013.
20. Lei Wang, Jian Guo, Guoyan Zhang, Jingyuan Zhao, and Dawu Gu. How to Build Fully Secure Tweakable Blockciphers from Classical Blockciphers. In Jung Hee Cheon and Tsuyoshi Takagi, editors, *ASIACRYPT (1)*, volume 10031 of *Lecture Notes in Computer Science*, pages 455–483, 2016.
## A Proof Details
The proof of Theorem 1 follows from Lemmas 1, 2, and 3. Let $\tilde{E}$ denote the XHX[$E, \mathcal{H}$] construction in the remainder. W.l.o.g., we assume that **A** asks neither duplicate queries nor trivial queries to which it already knows the answer, e.g., feeding the result of an encryption query to the corresponding decryption oracle or vice versa. The queries by **A** are collected in a transcript $\tau$. We define that $\tau$ is composed of two disjoint sets of queries $\tau_C$ and $\tau_P$ plus the key $L$, i.e., $\tau = \tau_C \cup \tau_P \cup \{L\}$, where $\tau_C := \{(M^i, C^i, T^i, H_1^i, H_2^i, H_3^i, X^i, Y^i, d^i)\}_{1\le i\le q_C}$ denotes the queries by **A** to the construction oracle plus internal variables $H_1^i, H_2^i, H_3^i$ (i.e., the outputs of $\mathcal{H}_1, \mathcal{H}_2$, and $\mathcal{H}_3$, respectively) as well as $X^i$ and $Y^i$ (where $X^i \leftarrow H_1^i \oplus M^i$ and $Y^i \leftarrow H_3^i \oplus C^i$, respectively); and $\tau_P := \{(\hat{K}^i, \hat{X}^i, \hat{Y}^i, d^i)\}_{1\le i\le q_P}$ denotes the queries to the primitive oracle. Both sets also store binary variables $d^i$ that indicate the direction of the $i$-th query, where $d^i = 1$ represents the fact that the $i$-th query is an encryption query, and $d^i = 0$ that it is a decryption query. The internal variables for one call to XHX are as given in Algorithm 2 and Figure 2.
We apply a common strategy for handling bad events from both worlds: in the real world, all secrets (i.e., the hash-function key $L$) are revealed to **A** after it has finished its interaction with the available oracles, but before it outputs its decision bit regarding which world it interacted with. Similarly, in the ideal world, the oracle samples the hash-function key uniformly at random, $L \leftarrow \mathcal{L}$, independently from the choice of $E$ and $\tilde{\pi}$, and also reveals $L$ to **A** after the adversary has finished its interaction and before it outputs its decision bit. The internal variables in construction queries – $H_1^i, H_2^i, H_3^i, X^i, Y^i$ – can then be computed and added to the transcript also in the ideal world using the oracle inputs and outputs $T^i$, $M^i$, and $C^i$ together with the revealed key $L$.
Let $1 \le i \ne j \le q$. We define that an attainable transcript $\tau$ is **bad**, i.e., $\tau \in \text{BADT}$, if one of the following conditions is met:

- bad$_1$: There exist $i \neq j$ s.t. $(H_2^i, X^i) = (H_2^j, X^j)$.
- bad$_2$: There exist $i \neq j$ s.t. $(H_2^i, Y^i) = (H_2^j, Y^j)$.
- bad$_3$: There exist $i, j$ s.t. $(H_2^i, X^i) = (\hat{K}^j, \hat{X}^j)$.
- bad$_4$: There exist $i, j$ s.t. $(H_2^i, Y^i) = (\hat{K}^j, \hat{Y}^j)$.
- bad$_5$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{X}^i) = (\hat{K}^j, \hat{X}^j)$.
- bad$_6$: There exist $i \neq j$ s.t. $(\hat{K}^i, \hat{Y}^i) = (\hat{K}^j, \hat{Y}^j)$.
- bad$_7$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(X^j, H_2^j) = (I_i, K_i)$ and $d^j = 1$.
- bad$_8$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_C\}$ s.t. $(Y^j, H_2^j) = (L_i, K_i)$ and $d^j = 0$.
- bad$_9$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\hat{X}^j, \hat{K}^j) = (I_i, K_i)$.
- bad$_{10}$: There exist $i \in \{1, \dots, s\}$ and $j \in \{1, \dots, q_P\}$ s.t. $(\hat{Y}^j, \hat{K}^j) = (L_i, K_i)$.
- bad$_{11}$: There exist $i, j \in \{1, \dots, s\}$ with $i \neq j$ s.t. $(K_i, L_i) = (K_j, L_j)$ but $I_i \neq I_j$.
The events

- bad$_1$ and bad$_2$ consider collisions between two construction queries,
- bad$_3$ and bad$_4$ consider collisions between primitive and construction queries,
- bad$_5$ and bad$_6$ consider collisions between two primitive queries,
- bad$_7$ through bad$_{10}$ address the case that the adversary may find an input-key tuple in either a primitive or construction query that has been used to derive some of the subkeys $L_i$, and
- bad$_{11}$ addresses the event that the ideal oracle produces a collision while sampling the hash-function keys independently and uniformly at random.
|
| 471 |
+
|
| 472 |
+
Note that the events bad$_5$ and bad$_6$ are listed here only for the sake of completeness. We will show briefly that these events can never occur.
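As an illustration (not part of the proof), the collision conditions bad$_1$ through bad$_6$ are plain set-membership checks over the transcript. The sketch below uses hypothetical tuples $(H_2, X, Y)$ for construction queries and $(K, X, Y)$ for primitive queries; the names are ours, not the paper's.

```python
# Illustrative sketch: detecting the collision events bad_1 .. bad_6 on a
# transcript. Construction queries are hypothetical tuples (H2, X, Y);
# primitive queries are tuples (K, X, Y).

def bad_events(construction, primitive):
    flags = set()
    # bad_1 / bad_2: two construction queries collide on (H2, X) or (H2, Y)
    seen_x, seen_y = set(), set()
    for h2, x, y in construction:
        if (h2, x) in seen_x: flags.add("bad_1")
        if (h2, y) in seen_y: flags.add("bad_2")
        seen_x.add((h2, x)); seen_y.add((h2, y))
    # bad_3 / bad_4: a construction query collides with a primitive query
    prim_x = {(k, x) for k, x, _ in primitive}
    prim_y = {(k, y) for k, _, y in primitive}
    for h2, x, y in construction:
        if (h2, x) in prim_x: flags.add("bad_3")
        if (h2, y) in prim_y: flags.add("bad_4")
    # bad_5 / bad_6: two primitive queries collide on (K, X) or (K, Y)
    px, py = set(), set()
    for k, x, y in primitive:
        if (k, x) in px: flags.add("bad_5")
        if (k, y) in py: flags.add("bad_6")
        px.add((k, x)); py.add((k, y))
    return flags
```

For an adversary that asks no duplicate or inverse-redundant queries, `bad_5`/`bad_6` can never be flagged, matching the remark above.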
## A.1 Proof of Lemma 2
*Proof.* In the following, we upper bound the probabilities of each bad event.
**bad$_1$ and bad$_2$.** Events bad$_1$ and bad$_2$ represent the cases that two distinct construction queries would feed the same tuple of key and input to the underlying primitive $E$ if the construction were the real $\tilde{E}$; bad$_1$ considers the case when the values $H_2^i = H_2^j$ and $X^i = X^j$ collide. In the real world, it follows that $Y^i = Y^j$, while this holds only with small probability in the ideal world. The event bad$_2$ concerns the case when the values $H_2^i = H_2^j$ and $Y^i = Y^j$ collide. Again, in the real world, it then follows that $X^i = X^j$, whereas this holds only with small probability in the ideal world. So, both events would allow **A** to distinguish both worlds. Let us consider bad$_1$ first, and let us start in the real world. Since **A** asks no duplicate queries, it must hold that two distinct queries $(M^i, T^i)$ and $(M^j, T^j)$ yielded

$$X^i = (M^i \oplus H_1^i) = (M^j \oplus H_1^j) = X^j \quad \text{and} \quad H_2^i = H_2^j.$$

We define $\Delta := M^i \oplus M^j$ and consider two subcases: in the subcase that $T^i = T^j$, it automatically holds that $H_2^i = H_2^j$ and $H_1^i = H_1^j$. However, together with $X^i = X^j$, this also implies that $M^i = M^j$, i.e., **A** would have asked a duplicate query, which is prohibited. So, it must hold that $T^i \neq T^j$ in the real world.

If $T^i = T^j$ in the ideal world, the two queries must nevertheless be distinct since we assumed that **A** does not make duplicate queries. Since $\tilde{\pi}(T^i, \cdot)$ is a permutation, the plaintexts are then also distinct: $M^i \neq M^j$. From $T^i = T^j$ follows that $H_1^i = H_1^j$ and thus, $X^i$ and $X^j$ cannot be equal:

$$X^i = M^i \oplus H_1^i \neq M^j \oplus H_1^j = X^j,$$

which contradicts our definition of bad$_1$. So, it must hold that $T^i \neq T^j$ also in the ideal world. From Property P1 and over $L \leftarrow \mathcal{L}$, it then holds that

$$
\begin{align*}
\Pr[\text{bad}_1] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (X^i, H_2^i) = (X^j, H_2^j)] \\
&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{1,2}(T^i) \oplus \mathcal{H}_{1,2}(T^j) = (\Delta, 0^k)] \le \binom{q_C}{2} \epsilon_1.
\end{align*}
$$

Using a similar argument, it also follows from Property P1 that, for $T^i \neq T^j$ and $\Delta' := C^i \oplus C^j$,

$$
\begin{align*}
\Pr[\text{bad}_2] &= \Pr[\exists i \neq j; 1 \le i, j \le q_C : (Y^i, H_2^i) = (Y^j, H_2^j)] \\
&= \Pr[\exists i \neq j; 1 \le i, j \le q_C : \mathcal{H}_{3,2}(T^i) \oplus \mathcal{H}_{3,2}(T^j) = (\Delta', 0^k)] \le \binom{q_C}{2} \epsilon_1.
\end{align*}
$$
**bad<sub>3</sub> and bad<sub>4</sub>.** Events bad<sub>3</sub> and bad<sub>4</sub> represent the cases that a construction query to the real construction $\tilde{E}$ would feed the same key and input $(H_2^i, X^i)$ to the underlying primitive $E$ as a primitive query $(\hat{K}^j, \hat{X}^j)$. This is equivalent to guessing the hash-function output for the $i$-th query. Let us consider bad<sub>3</sub> first. Over $L \leftarrow \mathcal{L}$ and for all $(\hat{K}^j, \hat{X}^j)$, the probability of bad<sub>3</sub> is upper bounded by

$$
\begin{align*}
\Pr[\text{bad}_3] &= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (X^i, H_2^i) = (\hat{X}^j, \hat{K}^j)] \\
&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : (H_1^i = M^i \oplus \hat{X}^j) \land (H_2^i = \hat{K}^j)] \\
&= \Pr[\exists i,j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{1,2}(T^i) = (M^i \oplus \hat{X}^j, \hat{K}^j)] \\
&\le q_C \cdot q_P \cdot \epsilon_2
\end{align*}
$$

due to Property P2. Using a similar argumentation, it holds that

$$
\begin{align*}
\Pr[\text{bad}_4] &= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (Y^i, H_2^i) = (\hat{Y}^j, \hat{K}^j)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : (H_3^i = C^i \oplus \hat{Y}^j) \land (H_2^i = \hat{K}^j)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le q_C, 1 \le j \le q_P : \mathcal{H}_{3,2}(T^i) = (C^i \oplus \hat{Y}^j, \hat{K}^j)\right] \\
&\le q_C \cdot q_P \cdot \epsilon_2.
\end{align*}
$$
**bad<sub>5</sub> and bad<sub>6</sub>.** Events **bad<sub>5</sub>** and **bad<sub>6</sub>** represent the cases that two distinct primitive queries feed the same key and the same input to the primitive **E**. Clearly, in both worlds, this implies that **A** either has asked a duplicate primitive query or has fed the result of an earlier primitive query to the primitive's inverse oracle. Both types of queries are forbidden; so, they will not occur.
**bad<sub>7</sub> and bad<sub>8</sub>.** Let us consider bad<sub>7</sub> first, which considers the case that the $j$-th construction query in encryption direction matches the inputs to $E$ used for generating a hash-function subkey $L_i$, for some $j \in [1..q_C]$ and $i \in [1..s]$; bad<sub>8</sub> considers the equivalent case in decryption direction. For this bad event, it must hold that $M^j \oplus \mathcal{H}_1(L, T^j) = I_i$ and $\mathcal{H}_2(L, T^j) = K_i$; we define $\Delta := M^j$. Concerning the tuples $(I_i, K_i)$, we cannot exclude in general that all values $K_1 = \dots = K_s$ are equal and that therefore all $L_i$ are outputs of the same permutation. From Property P3 and the fact that there have been $j$ queries and the adversary can hit one out of $s$ values, and over $L \leftarrow \mathcal{L}$, it follows that the probability for this event can be upper bounded by

$$
\begin{align*}
\Pr[\text{bad}_7] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (X^j, H_2^j) = (I_i, K_i)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{1,2}(T^j) \oplus (I_i, K_i) = (\Delta, 0^k)\right] \\
&\le q_C \cdot s \cdot \epsilon_3.
\end{align*}
$$

Using a similar argument with $\Delta' := C^j$, it follows from Property P4 that

$$
\begin{align*}
\Pr[\text{bad}_8] &= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : (Y^j, H_2^j) = (L_i, K_i)\right] \\
&= \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_C : \mathcal{H}_{3,2}(T^j) \oplus (L_i, K_i) = (\Delta', 0^k)\right] \\
&\le q_C \cdot s \cdot \epsilon_4.
\end{align*}
$$
**bad<sub>9</sub> and bad<sub>10</sub>.** The event bad<sub>9</sub> models the case that a primitive query in encryption direction matches key and input used for generating $L_i$, for some $i \in [1..s]$: $(\hat{X}^j, \hat{K}^j) = (I_i, K_i)$. The event bad<sub>10</sub> considers the equivalent case in decryption direction. From our assumption that Property P5 holds and the fact that the adversary can hit one out of $s$ values, and over $K \leftarrow \mathcal{K}$, the probability for this event can be upper bounded by

$$
\Pr[\text{bad}_9] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{X}^j, \hat{K}^j) = (I_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
$$

We can use a similar argument and Property P5 to upper bound the probability that the $j$-th query of **A** hits $(L_i, K_i)$ by

$$
\Pr[\text{bad}_{10}] = \Pr\left[\exists i, j; 1 \le i \le s, 1 \le j \le q_P : (\hat{Y}^j, \hat{K}^j) = (L_i, K_i)\right] \le q_P \cdot s \cdot \epsilon_5.
$$
**bad$_{11}$.** It is possible that a number of key inputs $K_i = K_j$, for some $i, j \in \{1, \dots, s\}$, $i \neq j$, are equal. The event bad$_{11}$ models the case that the ideal oracle produces a collision $(K_i, L_i) = (K_j, L_j)$ although it holds that $I_i \neq I_j$, which indicates that the hash-function keys cannot be the result of computing them from the block cipher $E$. In the worst case, all keys $K_i$, for $1 \le i \le s$, are equal. So, the probability for this event can be upper bounded by

$$
\Pr[\text{bad}_{11}] = \Pr[\exists i, j \in \{1, \dots, s\}, i \neq j : (K_i, L_i) = (K_j, L_j), I_i \neq I_j] \leq \frac{s^2}{2^{n+1}}.
$$
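As a small sanity check of this birthday bound (ours, not the paper's), one can compare the exact collision probability of $s$ independent uniform $n$-bit values against $s^2/2^{n+1}$ for small parameters:

```python
# Sanity check of the birthday bound: the exact probability that s values
# drawn independently and uniformly from {0,1}^n are NOT all distinct is
#   1 - prod_{i=0}^{s-1} (1 - i/2^n),
# which is upper bounded by s^2 / 2^(n+1).

from fractions import Fraction

def collision_prob(s, n):
    N = 2 ** n
    no_collision = Fraction(1)
    for i in range(s):
        no_collision *= Fraction(N - i, N)
    return 1 - no_collision

# Exhaustive check over a few small parameter choices.
for n in (4, 8):
    for s in (2, 3, 5, 8):
        assert collision_prob(s, n) <= Fraction(s * s, 2 ** (n + 1))
```

The step from the exact product to the bound is the usual union bound: $1 - \prod_i (1 - i/2^n) \le \sum_i i/2^n = s(s-1)/2^{n+1} \le s^2/2^{n+1}$.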
Our claim in Lemma 2 follows from summing up the probabilities of all bad events.
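As a bookkeeping sketch (not the paper's statement of Lemma 2), the union bound over the individual events derived above can be collected in a small helper. The $\epsilon_i$ are the placeholder bounds from Properties P1–P5; bad$_5$ and bad$_6$ contribute nothing, and bad$_9$ and bad$_{10}$ are both counted with $\epsilon_5$ here:

```python
# Union bound over the bad events, as derived above. The epsilon values are
# placeholders for the bounds from Properties P1-P5; bad_5 and bad_6 cannot
# occur and contribute nothing.

from math import comb

def lemma2_bound(q_C, q_P, s, n, eps1, eps2, eps3, eps4, eps5):
    return (2 * comb(q_C, 2) * eps1        # bad_1, bad_2
            + 2 * q_C * q_P * eps2         # bad_3, bad_4
            + q_C * s * (eps3 + eps4)      # bad_7, bad_8
            + 2 * q_P * s * eps5           # bad_9, bad_10
            + s * s / 2 ** (n + 1))        # bad_11
```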
Before proceeding with the proof of good transcripts, we formulate a short fact that will prove useful later on. In the remainder, we denote the falling factorial as $(n)_k := \frac{n!}{(n-k)!}$. First, we recall a definition from [7].

**Definition 4 (Compressing Sequences [7]).** For integers $r \le s$, let $U = (u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences over $\mathbb{N}$. We say that $V$ compresses to $U$ if there exists a partition $\mathcal{P}$ of $\{1, \dots, s\}$ that contains exactly $r$ entries, say $\mathcal{P}_1, \dots, \mathcal{P}_r$, s.t. $\forall i \in \{1, \dots, r\}$, it holds that $u_i = \sum_{j \in \mathcal{P}_i} v_j$.

The following fact has been updated to match Proposition 1 of [7], where we changed the condition $r \ge s$ to $r \le s$. The proof is given there.
**Fact 1 (A Variant of Proposition 1 in [7]).** For integers $r \le s$, let $U=(u_1, \dots, u_r)$ and $V = (v_1, \dots, v_s)$ be two sequences of positive integers such that $V$ compresses to $U$. Then, it holds for any positive integer $n$ with $N = 2^n \ge \sum_{i=1}^r u_i$ that

$$
\prod_{i=1}^{r} (N)_{u_i} \leq \prod_{i=1}^{s} (N)_{v_i} \quad \text{and thus} \quad \prod_{i=1}^{r} \frac{1}{(N)_{u_i}} \geq \prod_{i=1}^{s} \frac{1}{(N)_{v_i}}.
$$
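Fact 1 is easy to check numerically on a small example of our own choosing (the sequences below are illustrative, not from [7]):

```python
# Numeric check of Fact 1: if V compresses to U (each u_i is the sum of a
# group of v_j's), then prod (N)_{u_i} <= prod (N)_{v_j}, where (N)_m is
# the falling factorial N * (N-1) * ... * (N-m+1).

from math import prod

def falling(N, m):
    return prod(N - i for i in range(m))

# Example: V = (2, 3, 1, 2) compresses to U = (5, 3) via the partition
# {1, 2} and {3, 4}: 5 = 2 + 3 and 3 = 1 + 2.
N = 2 ** 8
U, V = [5, 3], [2, 3, 1, 2]
assert sum(U) == sum(V) <= N
lhs = falling(N, 5) * falling(N, 3)
rhs = falling(N, 2) * falling(N, 3) * falling(N, 1) * falling(N, 2)
assert lhs <= rhs
```

Intuitively, merging two groups replaces large factors close to $N$ by smaller ones further down the falling factorial, which can only decrease the product.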
## A.2 Proof of Lemma 3
*Proof.* Fix a good transcript $\tau$. In the ideal world, the probability to obtain $\tau$ is
$$
\begin{align*}
\Pr[\Theta_{\text{ideal}} = \tau] &= \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \Pr_{\forall g} [L_g] \\
&\qquad \cdot \Pr[K \leftarrow \mathcal{K} : K].
\end{align*}
$$
In the real world, the probability to obtain a transcript $\tau$ is given by
$$
\begin{align*}
\Pr[\Theta_{\text{real}} = \tau] &= \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \\
&\qquad \cdot \Pr[K \leftarrow \mathcal{K} : K].
\end{align*}
$$
First, we consider the distribution of keys. In the ideal world, all components of $L = (K, L_1, \dots, L_s)$ are sampled uniformly and independently at random; the real world employs the block cipher $E$ for generating $L_1, \dots, L_s$. Let us focus on $K$, which is sampled uniformly in both worlds:
$$ \Pr[K \leftarrow \mathcal{K} : K] = \frac{1}{|\mathcal{K}|}. $$
The remaining hash-function keys $L_1, \dots, L_s$ will be considered in turn. To prove the remainder of our claim in Lemma 3, we have to show that
$$ \begin{align} & \Pr_{\forall i, \forall j, \forall g} \left[ \tilde{E}_L(T^i, M^i) = C^i, E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j, E(K_g, I_g) = L_g \right] \tag{1} \\ & \ge \Pr_{\forall i} [\tilde{\pi}(T^i, M^i) = C^i] \cdot \Pr_{\forall j} [E(\hat{K}^j, \hat{X}^j) = \hat{Y}^j] \cdot \prod_{g=1}^s \Pr[L_g \leftarrow \{0, 1\}^n : L_g]. \nonumber \end{align} $$
We reindex the keys used in primitive queries to $\hat{K}^1, \dots, \hat{K}^\ell$ to eliminate duplicates. Given those indices, we group all primitive queries into sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, s.t. all sets are distinct and each set $\hat{\mathcal{K}}^j$ contains exactly the primitive queries with key $\hat{K}^j$:
$$ \hat{\mathcal{K}}^j := \left\{ (\hat{K}^i, \hat{X}^i, \hat{Y}^i) : \hat{K}^i = \hat{K}^j \right\}. $$
We denote by $\hat{k}^j = |\hat{\mathcal{K}}^j|$ the number of queries with key $\hat{K}^j$. Clearly, it holds that $\ell \le q_P$ and $\sum_{j=1}^\ell \hat{k}^j = q_P$.
Moreover, we also reindex the tweaks of the construction queries to $T^1, \dots, T^r$ for the purpose of eliminating duplicates. Given these new indices, we group all construction queries into sets $\mathcal{T}^j$, for $1 \le j \le r$, s.t. all sets are distinct and each set $\mathcal{T}^j$ contains exactly the construction queries with the tweak $T^j$:
$$ \mathcal{T}^j := \left\{ (T^i, M^i, C^i) : T^i = T^j \right\}. $$
We denote by $t^j = |\mathcal{T}^j|$ the number of queries with tweak $T^j$. It holds that $r \le q_C$ and $\sum_{j=1}^r t^j = q_C$.
First, we consider the probability of an obtained good transcript in the ideal world. Therein, all components $L_1, \dots, L_s$ are sampled independently uniformly at random from $\{0, 1\}^n$. So, in the ideal world, it holds that
$$ \prod_{g=1}^{s} \Pr[L_g \leftarrow \{0,1\}^n : L_g] = \frac{1}{(2^n)^s}. $$
Recall that every $\tilde{\pi}(T^j, \cdot)$ and $\tilde{\pi}^{-1}(T^j, \cdot)$ is a permutation, and that we assume **A** does not ask duplicate queries or queries to which it already knows the answer. So, all queries are pairwise distinct. The probability to obtain the outputs of our transcript for some fixed tweak $T^j$ is given by
$$ \frac{1}{2^n \cdot (2^n - 1) \cdots (2^n - t^j + 1)} = \frac{1}{(2^n)_{t^j}}. $$
The same applies for the outputs of the primitive queries in our transcript for some fixed key $\hat{K}^j$:
$$ \frac{1}{(2^n)_{\hat{k}^j}}. $$
The outputs of construction and primitive queries are independent from each other in the ideal world. Over all disjoint key and tweak sets, the probability for obtaining $\tau$ in the ideal world is given by
$$ \Pr[\Theta_{\mathrm{ideal}} = \tau] = \left(\prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}\right) \cdot \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \frac{1}{(2^n)^s} \cdot \frac{1}{|\mathcal{K}|}. \quad (2) $$
It remains to upper bound the probability of $\tau$ in the real world. We observe that for every pair of queries $i$ and $j$ with $T^i = T^j$, it holds that $H_2^i = H_2^j$, i.e., both queries always target the same underlying permutation. Moreover, in the real world, two distinct tweaks $T^i \neq T^j$ can still collide in their hash-function outputs $H_2^i = H_2^j$. In this case, the queries with tweaks $T^i$ and $T^j$ also use the same permutation. Furthermore, there may be hash-function outputs $H_2^i$ from construction queries that are identical to keys $\hat{K}^j$ that were used in primitive queries. In this case, both queries also employ the same permutation, and so the outputs from primitive and from construction queries are not independent as in the ideal world. Moreover, the derived keys $L_i$ are also constructed from the same block cipher $E$; hence, the inputs $K_i$ may also use the same permutation as primitive and construction queries.
For our purpose, we also reindex the keys of all primitive queries to $\hat{K}^1, \dots, \hat{K}^\ell$, and the tweaks of the construction queries to $T^1, \dots, T^r$, to eliminate duplicates. We define key sets $\hat{\mathcal{K}}^j$, for $1 \le j \le \ell$, and tweak sets $\mathcal{T}^j$, for $1 \le j \le r$, analogously as we did for the ideal world. Moreover, for every so-indexed tweak $T^i$, we compute its corresponding value $H_2^i$. We also reindex the hash values to $H_2^1, \dots, H_2^u$ for duplicate elimination, and group the construction queries into sets
$$ \mathcal{H}_2^j := \left\{ (T^i, M^i, C^i) : \mathcal{H}_2(L, T^i) = H_2^j \right\}. $$
We denote by $h_2^j = |\mathcal{H}_2^j|$ the number of queries whose tweak maps to $H_2^j$. Clearly, it still holds that $\sum_{j=1}^u h_2^j = q_C$. We can define an ordering s.t. for all $1 \le i \le u$, $T^i$ is mapped to $H_2^i$. Since, for all $1 \le i \le r$, all queries with tweak $T^i$ are contained in exactly one set $\mathcal{H}_2^j$, for some $j \in \{1, \dots, u\}$, it holds that
$$ \sum_{j=1}^{u} h_2^{j} = \sum_{i=1}^{r} t^{i} = q_{C}, \quad u \le r, \quad \text{and} \quad h_{2}^{i} \ge t^{i} \text{ for all } 1 \le i \le u. $$
Note that the sequence that contains the numbers of occurrences of the tweak values compresses to the sequence that contains the numbers of occurrences of the hash values $H_2$. Equal tweaks $T^i$ and $T^j$ map to the same hash value $H_2$. If the hashes of $T^i$ and $T^j$ are identical, then $h_2$ will be the sum of (at least) their numbers of occurrences. Thus, the sequences are compressing, and it follows from Fact 1 that
$$
\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}} \geq \prod_{i=1}^{r} \frac{1}{(2^n)_{t^i}}.
$$
In addition, we reindex the key inputs $K_i$ that are used for generating the keys $L_1, \dots, L_s$ to $K^1, \dots, K^w$ to eliminate duplicates, and group all tuples $(I_i, K_i)$ into sets $\mathcal{K}^j$, for $1 \le j \le w$, s.t. all sets are distinct and each set contains exactly those key-generating tuples with the key $K^j$:
$$
\mathcal{K}^j := \{(I_i, K_i) : K_i = K^j\}.
$$
On this basis, we unify and reindex the values $H_2^j$, $\hat{K}^j$, and $K^j$ to values $\mathbb{P}^1, \dots, \mathbb{P}^v$ (using $\mathbb{P}$ for permutation). We group all queries into sets $\mathcal{P}^j$, for $1 \le j \le v$, s.t. all sets are distinct and each set $\mathcal{P}^j$ consists of exactly the union of all construction queries with the hash value $H_2 = \mathbb{P}^j$, all primitive queries with $\hat{K} = \mathbb{P}^j$, and all key-generating tuples with $K = \mathbb{P}^j$:
$$
\mathcal{P}^j := \{\mathcal{H}_2^i : H_2^i = \mathbb{P}^j\} \cup \{\hat{\mathcal{K}}^i : \hat{K}^i = \mathbb{P}^j\} \cup \{\mathcal{K}^i : K^i = \mathbb{P}^j\}.
$$
We denote by $p^j = |\mathcal{P}^j|$ the number of queries that use the same permutation. Clearly, it holds that $\sum_{j=1}^v p^j = q_P + q_C + s$. Recall that Block$(k,n)$ denotes the set of all $k$-bit-key, $n$-bit block ciphers. In the following, we call a block cipher $E$ compatible with $\tau$ iff
1. For all $1 \le i \le q_C$, it holds that $C^i = E_{H_2^i}(M^i \oplus H_1^i) \oplus H_3^i$, where $H_1^i = H_1(L, T^i)$, $H_2^i = H_2(L, T^i)$, and $H_3^i = H_3(L, T^i)$, and
2. for all $1 \le j \le q_P$, it holds that $\hat{Y}^j = E_{\hat{K}^j}(\hat{X}^j)$,
3. and for all $1 \le g \le s$, it holds that $L_g = E_{K_g}(I_g)$.
Let $\text{Comp}(\tau)$ denote the set of all block ciphers $E$ compatible with $\tau$. Then,
$$
\Pr[\Theta_{\text{real}} = \tau] = \Pr[E \leftarrow \text{Block}(k,n) : E \in \text{Comp}(\tau)] \cdot \Pr[K \leftarrow \mathcal{K} : K]. \quad (3)
$$
We focus on the first factor on the right-hand side. Since we assume that no bad events have occurred, the fraction of compatible block ciphers is given by
$$
\Pr[E \leftarrow \text{Block}(k, n) : E \in \text{Comp}(\tau)] = \prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}}.
$$
It holds that
$$
\sum_{i=1}^{v} p^i = q_P + q_C + s = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{r} t^j + \sum_{j=1}^{w} k^j = \sum_{j=1}^{\ell} \hat{k}^j + \sum_{j=1}^{u} h_2^j + \sum_{j=1}^{w} k^j.
$$
We can substitute the variables $\hat{k}^j$, $h_2^j$, and $k^j$ on the right-hand side by auxiliary variables $z^j$:
$$ \sum_{i=1}^{v} p^i = \sum_{j=1}^{\ell+u+w} z^j \quad \text{where} \quad z^j = \begin{cases} \hat{k}^j & \text{if } j \le \ell, \\ h_2^j & \text{if } \ell < j \le \ell+u, \\ k^j & \text{otherwise.} \end{cases} $$
It holds that $v \le \ell+u+w \le \ell+r+w$. Since each permutation set $\mathcal{P}^i$ consists of all queries in $\tau$ that use a certain key $\hat{K}^j$, and/or all queries in $\tau$ that use one hash $H_2^j$, and/or all tuples $(I_i, K_i)$ that use one value $K^j$, it further holds that for all $1 \le i \le v$, there exists some $j \in \{1, \dots, \ell+u+w\}$ s.t.
$$ p^i \ge z^j. $$
Again, the sequences are compressing, and we can directly apply Fact 1. It follows that
$$
\begin{align}
\prod_{i=1}^{v} \frac{1}{(2^n)_{p^i}} &\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{u} \frac{1}{(2^n)_{h_2^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \tag{4} \\
&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \left(\prod_{j=1}^{w} \frac{1}{(2^n)_{k^j}}\right) \nonumber \\
&\ge \left(\prod_{j=1}^{\ell} \frac{1}{(2^n)_{\hat{k}^j}}\right) \cdot \left(\prod_{j=1}^{r} \frac{1}{(2^n)_{t^j}}\right) \cdot \frac{1}{(2^n)^s}. \nonumber
\end{align}
$$
Using the combined knowledge from Equations (1) through (4), we can derive that the probability for obtaining the construction and primitive outputs in the transcript is at least as high as the probability in the ideal world:
$$ \Pr[\Theta_{\text{real}} = \tau] \ge \Pr[\Theta_{\text{ideal}} = \tau]. $$
So, we obtain our claim in Lemma 3. □
samples_new/texts_merged/825446.md
# Analysis of Power Matching on Energy Savings of a Pneumatic Rotary Actuator Servo-Control System
Yeming Zhang¹*, Hongwei Yue¹, Ke Li² and Maolin Cai³
**Abstract**
When saving energy in a pneumatic system, the problem of energy losses is usually solved by reducing the air supply pressure. The power-matching method is applied to optimize the air-supply pressure of the pneumatic system, and the energy-saving effect is verified by experiments. First, the experimental platform of a pneumatic rotary actuator servo-control system is built, and the mechanism of the valve-controlled cylinder system is analyzed. Then, the output power characteristics and load characteristics of the system are derived, and their characteristic curves are drawn. The employed air compressor is considered as a constant-pressure source of a quantitative pump, and the power characteristic of the system is matched. The power source characteristic curve should envelope the output characteristic curve and load characteristic curve. The minimum gas supply pressure obtained by power matching represents the optimal gas supply pressure. The comparative experiments under two different gas supply pressure conditions show that the system under the optimal gas supply pressure can greatly reduce energy losses.
**Keywords:** Pneumatic rotary actuator, Energy savings, Gas supply pressure, Characteristic curve, Power matching
## 1 Introduction
The problem of energy shortages has become increasingly significant with the rapid development of society. In addition to discovering new energy sources, energy conservation is the most effective and important measure to fundamentally solve the energy problem [1]. Energy saving has increasingly become a hot topic of concern. Energy has always been a constraint to economic development, which makes energy-saving research more urgent and practical [2]. Currently, pneumatic technology is widely used in various fields of industry, and has become an important technical means of transmission and control [3, 4]. The use of existing technology to improve the energy utilization rate of energy-consuming equipment is an important energy-saving method [5].
However, the energy efficiency of pneumatic technology is relatively low [6]. Therefore, improving the efficiency of energy utilization and reducing the energy loss of pneumatic systems have become the concern of scholars all over the world [7, 8].
Pneumatic systems have three aspects of energy wastage [9, 10]: (1) gas and power losses during compressor gas production, (2) pressure loss in the gas supply pipeline, and (3) gas leakage from the gas equipment [11]. Accordingly, many methods are available to solve these problems. For the pressure loss in the air source, the timing of opening and closing of multiple air compressors can be optimized, and the gas production process of the air compressors can also be optimized, such as making full use of the expansion of compressed air to reduce unnecessary power consumption [12]. In order to reduce pressure loss in the pipeline, the method of reducing the pressure in the gas supply pipeline can be adopted [13]. When necessary, a supercharger can be added in front of the terminal equipment. For gas leakage from the gas equipment, optimizing the component
*Correspondence: zym@hpu.edu.cn
¹ School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China
Full list of author information is available at the end of the article
structure is usually implemented to solve this problem. The pneumatic servo-control system precisely controls the angle of rotation; however, energy loss still occurs in the system. For this system, reducing the gas supply pressure is the most effective way of reducing the energy loss. The key is to determine the critical pressure and reduce the gas supply pressure as much as possible while ensuring normal operation of the system. The power-matching method can solve the optimization problem of the gas supply pressure based on the power required by the system [14]. In flow compensation, different compensation controllers can also be designed to match the flow to the system for the purpose of energy savings [15, 16]. The rotary system of a hydraulic excavator suffers from high energy consumption and poor controllability due to throttle loss and overflow loss in the control valve during frequent acceleration and deceleration with large inertia. Therefore, Huang et al. [17] proposed flow matching of a pump-valve joint control and an independent metering method for the hydraulic excavator rotary system to improve the energy efficiency of the system and reduce throttle loss. Xu et al. [18] designed a dynamic bypass pressure-compensation circuit for a load-sensing system, which solved the problems of pressure shock and energy loss caused by excessive flow and improved the efficiency and controllability of the system. Kan et al. [19] analyzed the basic characteristics of a hydraulic transmission system for wheel loaders using numerical calculation and adopted an optimal design method for a power-matching system. This enlarged the high-efficiency working area of the system, improved the average efficiency in the transportation process, and reduced the average working fuel consumption rate. Yang et al. [20] designed an electro-hydraulic flow-matching controller with shunt ability to improve the dynamic characteristics, energy-saving effect, and stability of the system. Guo et al. [21] used a genetic algorithm to optimize the parameters of an asynchronous motor to achieve energy savings and consumption reduction, which proved the effectiveness and practicability of the power-matching method for an electric pump system. Wang et al. [22] matched an engine and a generator to achieve efficiency optimization, obtained a common high-efficiency area, and proposed a partial power tracking control strategy. Lai et al. [23] proposed a parameter-matching method for the accumulator in a parallel hydraulic hybrid excavator and used a genetic algorithm to optimize the parameter matching of the main components, such as the engine, accumulator, and hydraulic secondary regulatory pump, to reduce the installed power. Yan et al. [24] focused on the problem in which the flow of a constant-displacement pump could not match the changing load, resulting in energy loss.
They proposed an electro-hydraulic flow-matching steering control method, which used a servo motor to drive a constant-displacement pump independently to reduce the energy consumption of the system. At present, many studies on energy savings are conducted using the power-matching method in hydraulic systems, but few focus on pneumatic systems [25].
In the present study, a method of reducing the gas supply pressure is implemented to reduce energy loss of a pneumatic rotary actuator servo-control system. The output and load characteristic curves of the system are derived, and the power source characteristic curve is matched to determine the optimal gas supply pressure. Finally, the experiment verifies the energy-saving effect under this gas supply pressure.
Through theoretical analysis and experimental verification on the application platform of the pneumatic rotary actuator, a power-matching and energy-optimization method for the pneumatic rotary actuator under normal working conditions is proposed for the first time.
## 2 Experimental Platform
Figure 1 shows the schematic diagram of the pneumatic rotary actuator servo-control system.
As a gas source, the air compressor provides power to the system. The air filter, air regulator, and air lubricator filter and clean the gas. When the driving voltage signal of the proportional directional control valve is given, the proportional valve controls the flow and direction of the gas and thereby the rotary motion of the pneumatic rotary actuator. The rotary encoder measures the angular displacement and transmits TTL (Transistor-Transistor Logic) level signals to the data acquisition card. The data acquisition card is installed in the industrial personal computer, which runs the host-computer program, samples the encoder signal, and outputs a 0–10 V voltage signal calculated by the controller. The driving voltage signal output by the controller further regulates the flow and direction of the proportional directional control valve to reduce the angle error. After continuous iteration, the angle error of the system decreases and tends to stabilize.
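The closed loop described above can be sketched as follows. This is a minimal illustrative model, not the controller used in the paper: the gain `kp` and the 5 V neutral bias are hypothetical values chosen only to show how the angle error maps onto the valve's 0–10 V drive signal.

```python
# Illustrative sketch of one iteration of the servo loop.
# kp and the 5 V neutral bias are hypothetical, not the paper's controller.
def control_step(target_deg, measured_deg, kp=0.05, bias=5.0):
    """Map the angle error to the 0-10 V drive signal of the proportional valve.

    Near 5 V the valve is close to its neutral position; deviations open it
    toward one port or the other, reversing the flow direction.
    """
    u = bias + kp * (target_deg - measured_deg)
    return min(10.0, max(0.0, u))  # saturate to the valve's 0-10 V range
```

At zero error the output stays at the neutral bias, so the actuator holds its position; the sign of the error selects the rotation direction.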
Figure 2 shows the experimental platform of the pneumatic rotary actuator servo-control system. The round steel passes through the pneumatic rotary actuator and is connected to the rotary encoder through the coupling. The pneumatic rotary actuator is horizontally installed.
The FESTO MPYE-5-M5-010-B proportional valve is selected because its smaller flow range makes the control accuracy of the system easier to ensure. The SMC MSQA30A pneumatic rotary actuator is adopted. The actuator has a high-precision ball bearing and belongs
**Figure 1** Schematic diagram of the pneumatic rotary actuator servo-control system
**Figure 2** Experimental diagram of the pneumatic rotary servo-control system
to a high-precision actuator type. The rotating platform of the actuator contains many symmetrical threaded holes for easy introduction of loads. A high-precision rotary encoder is used, and the 20000P/R resolution
corresponds to an accuracy of $1.8 \times 10^{-2}$°, which satisfies the high-precision measurement requirement for the rotation angle. In addition, the air compressor and the filter, regulator, and lubricator (F. R. L.) units provide a gas supply pressure of up to 0.8 MPa. The digital I/O port and analog output port of the data-acquisition card must meet the experimental requirements, and the high-resolution counter in the data-acquisition card improves the system response speed. The models and parameters of the components are listed in Table 1.
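The quoted angular accuracy follows directly from the encoder resolution; a one-line check:

```python
# Angular accuracy implied by the 20000 P/R encoder resolution.
resolution = 20000                 # pulses per revolution
accuracy_deg = 360.0 / resolution  # 0.018 degrees = 1.8e-2 degrees per pulse
```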
In some experimental tests, measuring the flow rate, pressure, and temperature of the gas is necessary, which can be performed using a flow sensor, a pressure transmitter, and a temperature transmitter (thermocouple), respectively. The flow rate in the inlet and outlet is measured using a flow sensor in the FESTO SFAB series
**Table 1** Models and parameters of the components
<table><thead><tr><th>Component</th><th>Model</th><th>Parameter</th></tr></thead><tbody><tr><td>Air compressor</td><td>PANDA 750-30L</td><td>Maximum supply pressure: 0.8 MPa</td></tr><tr><td>F. R. L. units</td><td>AC3000-03</td><td>Maximum working pressure: 1.0 MPa</td></tr><tr><td>Proportional-directional control valve</td><td>FESTO MPYE-5-M5-010-B</td><td>3-position 5-way valve, 0–10 V driving voltage</td></tr><tr><td>Pneumatic rotary actuator</td><td>SMC MSQA30A</td><td>Bore: 30 mm; stroke: 190°</td></tr><tr><td>Rotary encoder</td><td>GSS06-LDH-RAG2000Z1</td><td>Resolution: 20000 P/R</td></tr><tr><td>Data-acquisition card</td><td>NI PCI-6229</td><td>32-bit counter; output voltage: –10 V to +10 V</td></tr><tr><td>Industrial personal computer</td><td>ADVANTECH IPC-610H</td><td>Standard configuration</td></tr></tbody></table>
with a range of 2–200 L/min, and the flow rate of the leak port is measured using a flow sensor with a range of 0.1–5 L/min in the SFAH series. The MIK-P300 pressure transmitter has high accuracy and fast response and can accurately measure the pressure changes. A thermocouple is used as a temperature transmitter to measure the gas temperature. To prevent signal interference, a temperature isolator is added to the circuit for the temperature signal transmission. The models and parameters of the test components are listed in Table 2. The circuit connection of the experimental platform is shown in Figure 3.
The schematic diagram of the valve-controlled cylinder system is constructed according to the experimental platform, as shown in Figure 4. The system consists of Chamber **a** and Chamber **b**. The dashed lines represent the boundaries of the chambers. Figure 4 shows the gas-flow mechanism when the spool moves to the right, and $\dot{m}_a$, $\dot{m}_b$ represent the mass flow rates of Chamber **a** and Chamber **b**, respectively. $p_a$, $p_b$ and $T_a$, $T_b$ represent the corresponding pressure and temperature of Chamber **a** and Chamber **b**, respectively. $p_s$ is the gas supply pressure, $p_e$ is the atmospheric pressure, and $\theta$ is the rotation angle of the pneumatic rotary actuator.
**Figure 3** Circuit connection of the experimental platform
## 3 Power Characteristic Matching
### 3.1 Output Characteristics of the Valve-Controlled Cylinder
The output characteristic of the valve-controlled cylinder system refers to the relationship between the total load moment and angular velocity when the power source is known. The output characteristic can be obtained by the following method.
When supply pressure $p_s$ is relatively low, i.e., $0.1013 \text{ MPa} \le p_s \le 0.4824 \text{ MPa}$, the condition $p_a/p_s > b = 0.21$ is satisfied, where $b$ denotes the critical pressure ratio, and the gas flow in the proportional-directional control valve is subsonic. Here, the mass flow equation through the proportional valve is [26]
$$ \dot{m}_a = \frac{S_e p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (1) $$
$$ \dot{m}_b = \frac{S_e p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (2) $$
**Table 2** Models and parameters of the test components
<table><thead><tr><th>Component</th><th>Model</th><th>Parameter</th></tr></thead><tbody><tr><td>Pressure transmitter</td><td>MIK-P300</td><td>Range: 0–1.0 MPa; accuracy: 0.3% FS</td></tr><tr><td>Flow sensor 1</td><td>FESTO SFAB-200U-HQ8-2SV-M12</td><td>Range: 2–200 L/min; accuracy: 3% o.m.v. + 0.3% FS</td></tr><tr><td>Flow sensor 2</td><td>FESTO SFAH-5U-Q6S-PNLK-PNVBA-M8</td><td>Range: 0.1–5 L/min; accuracy: 2% o.m.v. + 1% FS</td></tr><tr><td>Temperature transmitter (thermocouple)</td><td>TT-K-36 (K type, diameter: 0.1 mm)</td><td>Range: 0–260 °C; accuracy: 0.4% FS</td></tr><tr><td>Temperature isolator</td><td>SLDTR-2P11</td><td>Response time: ≤ 10 ms; accuracy: 0.1% FS</td></tr></tbody></table>
**Figure 4** Schematic diagram of the valve-controlled cylinder system
where $S_e$ is the effective area of the proportional valve orifice, $R$ is the gas constant, $T_s$ is the gas supply temperature, and $\kappa$ is the isentropic index.
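Eq. (1) can be evaluated numerically as follows. The effective area uses the flow coefficient $C$ and orifice radius $r$ given later in Table 3; the supply conditions ($p_s$, $p_a$, $T_s$) are illustrative values within the subsonic range, not measurements from the paper.

```python
import math

def subsonic_mass_flow(S_e, p_up, p_down, T_s, R=287.0, kappa=1.4):
    """Subsonic mass flow through a valve orifice, Eq. (1); valid for p_down/p_up > b."""
    ratio = p_down / p_up
    term = ratio**(2.0 / kappa) - ratio**((kappa + 1.0) / kappa)
    return S_e * p_up / math.sqrt(R * T_s) * math.sqrt(2.0 * kappa / (kappa - 1.0) * term)

# Illustrative operating point: S_e = C*pi*r^2 with C = 0.6437, r = 1 mm (Table 3);
# p_s = 0.3367 MPa, p_a = 0.25 MPa, T_s = 293 K are assumed values.
S_e = 0.6437 * math.pi * (1.0e-3)**2
mdot = subsonic_mass_flow(S_e, p_up=0.3367e6, p_down=0.25e6, T_s=293.0)
```

As expected from Eq. (1), the flow vanishes as $p_a$ approaches $p_s$, and in this pressure range it decreases monotonically as the ratio $p_a/p_s$ rises toward 1.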
When the opening of the proportional-directional control valve is maximum, the mass flow rates of the two chambers are maximum, which can be expressed as
$$ \dot{m}_{\text{a-max}} = \frac{C \pi r^2 p_s}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_a}{p_s} \right)^{\frac{2}{\kappa}} - \left( \frac{p_a}{p_s} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (3) $$
$$ \dot{m}_{\text{b-max}} = \frac{C \pi r^2 p_b}{\sqrt{RT_s}} \sqrt{\frac{2\kappa}{\kappa-1} \left[ \left( \frac{p_e}{p_b} \right)^{\frac{2}{\kappa}} - \left( \frac{p_e}{p_b} \right)^{\frac{\kappa+1}{\kappa}} \right]}, \quad (4) $$
where $C$ is the flow coefficient and $r$ is the radius of the orifice.
Under adiabatic conditions, $p_a/\rho_a^\kappa = p_s/\rho_s^\kappa$ and $p_b/\rho_b^\kappa = p_e/\rho_e^\kappa$, where $\rho_a$, $\rho_b$, $\rho_s$, and $\rho_e$ represent the gas density in Chamber **a**, the gas density in Chamber **b**, the gas supply density, and the atmospheric density, respectively. For the pneumatic rotary actuator, the following can be obtained from the mass flow-rate formulas:
$$ \dot{m}_{\text{a-max}} = \rho_a \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_a}{\rho_s} \rho_s A d_f \dot{\theta} = \left(\frac{p_a}{p_s}\right)^{\frac{1}{\kappa}} \frac{p_s}{RT_s} A d_f \dot{\theta}, \quad (5) $$
$$ \dot{m}_{\text{b-max}} = \rho_b \cdot 2A \cdot \frac{1}{2} d_f \dot{\theta} = \frac{\rho_e}{\rho_b} \rho_b A d_f \dot{\theta} = \left(\frac{p_e}{p_b}\right)^{\frac{1}{\kappa}} \frac{p_b}{RT_s} A d_f \dot{\theta}, \quad (6) $$
where $A$ is the effective area of a single piston, $d_f$ is the pitch diameter of the gear, and $\dot{\theta}$ is the angular velocity of the pneumatic rotary actuator.
**Table 3** Known parameters in Eq. (8)
<table><thead><tr><th>Parameter</th><th>Value</th></tr></thead><tbody><tr><td>A (m²)</td><td>3.4636 × 10<sup>-4</sup></td></tr><tr><td>d<sub>f</sub> (m)</td><td>0.014</td></tr><tr><td>κ</td><td>1.4</td></tr><tr><td>C</td><td>0.6437</td></tr><tr><td>r (m)</td><td>1.00 × 10<sup>-3</sup></td></tr><tr><td>R (J/(kg·K))</td><td>287</td></tr></tbody></table>
The dynamic equation of the pneumatic rotary actuator can be expressed as follows:
$$ p_a - p_b = \frac{f}{d_f A}, \quad (7) $$
where $f$ is the total load moment.
Combining Eqs. (3)–(6) yields $p_a$ and $p_b$. Substituting the expressions of $p_a$ and $p_b$ into Eq. (7) yields
$$ p_s \left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa - 1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}} - \frac{p_e}{\left[ 1 - \frac{A^2 d_f^2 \dot{\theta}^2 (\kappa-1)}{2C^2 \pi^2 r^4 \kappa R T_s} \right]^{\frac{\kappa}{\kappa-1}}} = \frac{f}{d_f A}. \quad (8) $$
Eq. (8) is the expression of the output characteristic curve of the valve-controlled cylinder. The known parameters in the equation are shown in Table 3.
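With the parameters of Table 3, Eq. (8) can be evaluated directly. In the sketch below, the supply temperature $T_s$ = 293 K and atmospheric pressure $p_e$ = 0.1013 MPa are assumed ambient values.

```python
import math

# Parameters from Table 3; T_s and p_e are assumed ambient values.
A, d_f, kappa, C, r, R = 3.4636e-4, 0.014, 1.4, 0.6437, 1.0e-3, 287.0
p_e, T_s = 0.1013e6, 293.0

def output_moment(theta_dot, p_s):
    """Total load moment f sustainable at angular velocity theta_dot, Eq. (8)."""
    K = (A**2 * d_f**2 * theta_dot**2 * (kappa - 1.0)) / \
        (2.0 * C**2 * math.pi**2 * r**4 * kappa * R * T_s)
    base = (1.0 - K)**(kappa / (kappa - 1.0))
    return d_f * A * (p_s * base - p_e / base)
```

At `theta_dot = 0` this reduces to Eq. (9), $f_{\max} = A d_f (p_s - p_e)$, and the sustainable moment falls as the angular velocity grows, tracing the parabolas of Figure 5.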
To further generalize these results, the influence of the fixed parameters on the output characteristics of the system is also theoretically analyzed. Figure 5 shows the output characteristic curves. The following characteristics can be observed in the $\dot{\theta}$–$f$ plane:
(1) Figure 5(a) shows the curves with pressure $p_s$ as the variable parameter. When $p_s$ increases from 0.3 MPa to 0.4 MPa, the whole parabola moves to the right while its shape does not change.
(2) Figure 5(b) shows that when the maximum opening area of the valve increases from $\pi r^2$ to $2\pi r^2$, the whole parabola becomes wider but the vertices remain the same.
(3) Figure 5(c) shows that the increase in effective working area A of the piston makes the top of the parabola move to the right and the parabola simultaneously becomes narrower.
We can see from Eq. (8) that when $\dot{\theta}=0$, the maximum total load moment can be expressed as
$$ f_{\max} = Ad_f(p_s - p_e). \quad (9) $$
When $f=0$, the maximum angular velocity is
**Figure 5** Output characteristic curve of the valve-controlled cylinder: (a) Output characteristics of the pressure variation, (b) Output characteristics of the change in the valve port area, (c) Output characteristics of the variation in the effective piston area
$$ \dot{\theta}_{\max} = \sqrt{\frac{2C^2 \pi^2 r^4 \kappa R T_s}{A^2 d_f^2 (\kappa - 1)} \left[ 1 - \left( \frac{p_e}{p_s} \right)^{\frac{\kappa-1}{2\kappa}} \right]}. \quad (10) $$
### 3.2 Load Characteristic
The load characteristic refers to the relationship between the moment required for the load to move and the position, velocity, and acceleration of the load itself [27]. The load characteristic can be expressed by the angular velocity–moment curve.
The load characteristic is related to the form of the load movement. When the load moves sinusoidally, its motion is expressed as
$$ \theta = \theta_m \sin \omega t, \quad (11) $$
where $\theta_m$ is the amplitude of the angular motion of the load and $\omega$ is the frequency of the sinusoidal motion.
The angular velocity and acceleration of the load are
$$ \dot{\theta} = \theta_m \omega \cos \omega t, \quad (12) $$
$$ \ddot{\theta} = -\theta_m \omega^2 \sin \omega t. \quad (13) $$
The total load moment of the pneumatic rotary actuator is
$$ f = \left( \frac{1}{2} m_p d_f^2 + J \right) \ddot{\theta} + \frac{1}{2} d_f F_f \\ = - \left( \frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2 \sin \omega t \\ + \frac{1}{2} d_f \left[ F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta} \right], \quad (14) $$
where $m_p$ is the mass of a single piston and $J$ is the moment of inertia of the pneumatic rotary actuator. $F_f$ is the friction force and can be represented by the Stribeck friction model.
$$ F_f = F_c \operatorname{sign}(\dot{\theta}) + (F_s - F_c)e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) + \sigma \dot{\theta}, \quad (15) $$
where $F_s$ is the maximum static friction, $F_c$ is the Coulomb friction, $\dot{\theta}_s$ is the critical Stribeck velocity, and $\sigma$ is the viscous friction coefficient.
**Table 4** Known parameters in Eq. (16)
<table><thead><tr><th>Parameter</th><th>Value</th></tr></thead><tbody><tr><td>F<sub>s</sub> (N)</td><td>10.60</td></tr><tr><td>F<sub>c</sub> (N)</td><td>6.03</td></tr><tr><td>θ̇<sub>s</sub> (rad/s)</td><td>0.19</td></tr><tr><td>σ (N·s/rad)</td><td>0.87</td></tr><tr><td>m<sub>p</sub> (kg)</td><td>0.21</td></tr></tbody></table>
**Figure 6** Load characteristic curve
Combining Eqs. (12)–(14) yields
$$ \left[ \frac{f - \frac{1}{2} d_f F_c \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f (F_s - F_c) e^{-(\dot{\theta}/\dot{\theta}_s)^2} \operatorname{sign}(\dot{\theta}) - \frac{1}{2} d_f \sigma \dot{\theta}}{\left(\frac{1}{2} m_p d_f^2 + J \right) \theta_m \omega^2} \right]^2 + \left(\frac{\dot{\theta}}{\theta_m \omega}\right)^2 = 1. \quad (16) $$
The known parameters in Eq. (16) are listed in Table 4.
The load characteristic curve can be obtained from Eq. (16) when $\theta_m=180°$ and $\omega=10$ rad/s, as shown in Figure 6.
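The angular-velocity and moment trajectories behind this curve can be sampled numerically from Eqs. (12)–(15). The moment of inertia $J$ is not tabulated in the paper, so the value below is purely illustrative; the other parameters are from Table 4 and Table 3.

```python
import math

# Table 3/4 parameters; J (moment of inertia) is not given in the paper,
# so the value here is a hypothetical placeholder.
F_s, F_c = 10.60, 6.03          # maximum static / Coulomb friction (N)
theta_dot_s = 0.19              # critical Stribeck velocity (rad/s)
sigma = 0.87                    # viscous friction coefficient (N*s/rad)
m_p, d_f = 0.21, 0.014          # piston mass (kg), gear pitch diameter (m)
J = 1.0e-4                      # hypothetical moment of inertia (kg*m^2)
theta_m, omega = math.pi, 10.0  # theta_m = 180 deg, omega = 10 rad/s

def friction(theta_dot):
    """Stribeck friction model, Eq. (15)."""
    s = math.copysign(1.0, theta_dot)
    return (F_c + (F_s - F_c) * math.exp(-(theta_dot / theta_dot_s)**2)) * s \
        + sigma * theta_dot

def load_point(t):
    """Angular velocity and total load moment at time t, Eqs. (12)-(14)."""
    theta_dot = theta_m * omega * math.cos(omega * t)
    theta_ddot = -theta_m * omega**2 * math.sin(omega * t)
    f = (0.5 * m_p * d_f**2 + J) * theta_ddot + 0.5 * d_f * friction(theta_dot)
    return theta_dot, f
```

Sampling `load_point` over one period traces the closed angular-velocity–moment curve of Figure 6; subtracting the friction moment and normalizing the inertial term recovers the unit-circle identity underlying Eq. (16).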
### 3.3 Power Source Characteristics and Matching
The power source characteristic refers to the characteristic of the flow and pressure provided by the power source, which can be expressed by the flow–pressure curve. The air compressor used in this work can be approximately regarded as a constant-pressure source with a fixed-displacement
**Figure 7** Power source characteristic curve
**Figure 8** Power source characteristic matching
pump. Therefore, the power source characteristic curve is shown in Figure 7, where $\dot{m}_s$ is the gas supply mass flow, $p_s$ is the gas supply pressure, $\dot{m}_L$ is the driving mass flow, and $p_L$ is the driving pressure.
The output and power source characteristics of the valve-controlled cylinder should envelop the load characteristic curve. To minimize unnecessary energy consumption, the output characteristic curve should be tangent to the load characteristic curve, and the power source characteristic curve should be tangent to the output characteristic curve in the $f$-axis direction and to the load characteristic curve in the $\dot{\theta}$-axis direction, as shown in Figure 8.
In this manner, the maximum total load moment is obtained, i.e., $f_{\max}=0.96$ N·m. The optimum gas supply pressure can then be obtained from Eq. (9), i.e., $p_s=f_{\max}/(d_f A) + p_e= 0.3367$ MPa.
## 4 Experimental Verification of the Energy Savings
To verify the calculation results presented in the previous section, low-speed uniform-motion experiments of the pneumatic rotary actuator were carried out at supply pressures of 0.6 and 0.3367 MPa. The total energy and effective energy consumed by the valve-controlled cylinder system were measured and calculated. In the experiment, the input-angle signal was set as a ramp signal, and Chamber **a** was used as the intake chamber. The motion curve of the uniform-velocity period was considered, and the angular strokes in the two experiments were the same. Two flow sensors were used to measure the volume flow of the gas supply pipeline and the Chamber **a** port. Temperature sensors were used to measure the gas temperature of the gas supply pipeline and Chamber **a**.
Figures 9 and 10 show the system response curves at gas supply pressure values of 0.6 and 0.3367 MPa, respectively, including the angle curve, gas supply flow curve, gas supply temperature curve, pressure curve of Chamber **a**, volume-flow curve of Chamber **a**, and temperature curve of Chamber **a**. Figures 9(f) and 10(f) show that the temperature in Chamber **a** changed with the change in the velocity, which first increased, then decreased, and then entered a stable stage.
The total power consumed by the pneumatic system is expressed as [28, 29]:
$$P_T = p_s \dot{V}_s \left[ \ln \frac{p_s}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_s - T_e}{T_e} - \ln \frac{T_s}{T_e} \right) \right], \quad (17)$$
where $\dot{V}_s$ is the volume flow through the gas supply pipeline, and its numerical variation curves are shown in Figures 9(b) and 10(b). The $T_s$ curves are shown in Figures 9(c) and 10(c).
The effective power of the pneumatic rotary actuator can be expressed as
$$P_E = p_a \dot{V}_a \left[ \ln \frac{p_a}{p_e} + \frac{\kappa}{\kappa - 1} \left( \frac{T_a - T_e}{T_e} - \ln \frac{T_a}{T_e} \right) \right], \quad (18)$$
where $\dot{V}_a$ is the volume flow into Chamber **a**, and its numerical variation curves are shown in Figures 9(e) and 10(e). The $T_a$ curves are shown in Figures 9(f) and 10(f).
By substituting the data in Figures 9 and 10 into Eqs. (17) and (18), the total and effective power of the pneumatic system at different supply pressure values can be obtained, as shown in Figure 11. The total and effective energy consumed by the pneumatic system can be obtained by integrating the data shown in Figure 11 using the Origin software.
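The numerical integration of the sampled power curves (performed in Origin in the paper) amounts to a trapezoidal rule over the time samples; a minimal stand-in with synthetic data:

```python
def energy_from_power(times, powers):
    """Trapezoidal integration of a sampled power curve P(t), returning energy in J."""
    return sum(0.5 * (powers[i] + powers[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Synthetic example (not measured data): a constant 2 W over 3 s stores 6 J.
ts = [0.0, 1.0, 2.0, 3.0]
ps = [2.0, 2.0, 2.0, 2.0]
```

Applying the same routine to the sampled $P_T$ and $P_E$ curves of Figure 11 yields the total and effective energies reported below.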
The actual work done by the gas on the pneumatic rotary actuator is equal to the sum of the rotational
kinetic energy of the rotating platform, the kinetic energy of the cylinder piston, and the work done by the piston to overcome the friction force, which can be expressed as
$$
\begin{aligned}
W &= \frac{1}{2} J \dot{\theta}^2 + \frac{1}{2} \cdot 2m_p \dot{y}^2 + F_f y \\
&= \frac{1}{2} \left( J + \frac{1}{2} m_p d_f^2 \right) \dot{\theta}^2 + \frac{1}{2} F_f d_f \theta,
\end{aligned}
\quad (19) $$
where $y$ is the displacement of the actuator piston and $\dot{\theta}$ is replaced by the average value of the angular velocity.
The calculation results are described as follows. When the gas supply pressure is 0.6 MPa, the total energy consumed by the system is 195.552 J, the effective energy is 32.666 J, and the actual work done by the pneumatic rotary actuator is 3.513 J. When the gas supply pressure is 0.3367 MPa, the total energy consumed by the system is 32.207 J, the effective energy is 9.481 J, and the actual work done is 3.517 J. In both cases, the actual work of the pneumatic rotary actuator is almost the same, and when the gas supply pressure is 0.3367 MPa, the energy consumption is greatly reduced.
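The reported figures directly imply the savings quoted in the conclusions; the short check below only restates the paper's numbers:

```python
# Reported energies (J) at the two supply pressures.
total_06,  effective_06,  work_06  = 195.552, 32.666, 3.513   # p_s = 0.6 MPa
total_opt, effective_opt, work_opt = 32.207,  9.481,  3.517   # p_s = 0.3367 MPa

total_saving = total_06 - total_opt            # ~163.345 J of total energy saved
effective_saving = effective_06 - effective_opt  # ~23.185 J
eta_06 = effective_06 / total_06               # effective/total ratio, roughly 0.17
eta_opt = effective_opt / total_opt            # roughly 0.29 at the optimal pressure
```

The actual work differs by only about 0.004 J between the two cases, while the total consumption drops by more than 80%.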
## 5 Further Discussions
According to the power-characteristic matching method, for a constant-pressure-source servo system with a fixed-displacement pump, the optimal gas supply pressure must first be calculated, and the supply pressure is then manually adjusted to this value. Matching efficiency $\eta$ represents the ratio of the output power of the pneumatic system to the input power of the gas source and is expressed as
$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s}. \quad (20)$$
Figure 7 shows that the matching efficiency of this method is low. The adaptive power source can adaptively change the gas supply pressure or flow to meet the system requirements and improve the matching efficiency. It can be divided into the following three types [30].
(1) Flow adaptive power source
This power source can adaptively adjust the supply flow from the power source according to the system flow demand to reduce the loss in the flow. The characteristic curve is shown in Figure 12(a). The matching efficiency is expressed as
$$\eta = \frac{p_L \dot{m}_L}{p_s \dot{m}_s'} \approx \frac{p_L}{p_s}. \quad (21)$$
**Figure 9** System-response curve at gas supply pressure of 0.6 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a
**Figure 10** System response curve at gas supply pressure of 0.3367 MPa: (a) Angle curve, (b) Gas supply flow, (c) Gas supply temperature, (d) Pressure curve of Chamber a, (e) Volume-flow curve of Chamber a, (f) Temperature curve of Chamber a
**Figure 11** Total and effective power of the pneumatic system under different supply pressure values: (a) Total power, (b) Effective power
(2) Pressure adaptive power source
This power source can adaptively adjust the gas supply pressure of the power source according to the system pressure demand to reduce the pressure loss. The characteristic curve is shown in Figure 12(b). The matching efficiency is expressed as
$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}_s} \approx \frac{\dot{m}_L}{\dot{m}_s}. \qquad (22) $$
(3) Power adaptive power source
This power source can adaptively adjust both the gas supply pressure and the flow from the power source according to the system pressure and flow demand to minimize the power loss, where $p'_s$ and $\dot{m}'_s$ denote the adjusted supply pressure and supply flow, respectively. The characteristic
**Figure 12** Power characteristics of the adaptive power sources: (a) Flow adaptive power source, (b) Pressure adaptive power source, (c) Power adaptive power source
curve is shown in Figure 12(c). The matching efficiency is expressed as
$$ \eta = \frac{p_L \dot{m}_L}{p'_s \dot{m}'_s} \approx 1. \qquad (23) $$
Therefore, the power adaptive power source demonstrates the best energy-saving effect, with a matching efficiency closest to 100%.
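The three efficiency expressions can be compared numerically. The operating point below is illustrative only, not a measured condition; it simply shows how each adaptation strategy in Eqs. (20)–(23) raises $\eta$ toward unity.

```python
def eta(p_L, m_L, p_s, m_s):
    """Matching efficiency, Eq. (20): output power over gas-source input power."""
    return (p_L * m_L) / (p_s * m_s)

# Illustrative operating point (assumed values, not measurements).
p_L, m_L = 0.25e6, 1.0e-3   # driving pressure (Pa), driving mass flow (kg/s)
p_s, m_s = 0.6e6, 2.0e-3    # fixed source: full supply pressure and flow

eta_fixed    = eta(p_L, m_L, p_s, m_s)  # constant-pressure source, Eq. (20)
eta_flow     = eta(p_L, m_L, p_s, m_L)  # flow-adaptive: m_s' ~ m_L, Eq. (21)
eta_pressure = eta(p_L, m_L, p_L, m_s)  # pressure-adaptive: p_s' ~ p_L, Eq. (22)
eta_power    = eta(p_L, m_L, p_L, m_L)  # power-adaptive: both matched, Eq. (23)
```

Matching either the pressure or the flow removes one loss term; matching both drives the efficiency to 1, consistent with Eq. (23).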
## 6 Conclusions
Power matching of the pneumatic rotary actuator involves optimizing the relevant parameters of the pneumatic rotary actuator system on the premise of satisfying its normal operation, matching the power demand to the output, and thereby achieving energy savings. In this study, the derivation of the output-power and load characteristics of the pneumatic rotary actuator servo-control system is described. The employed air compressor is regarded as a constant-pressure source with a fixed-displacement pump, and the power characteristics of the system are matched. The following conclusions are obtained.
(1) The minimum gas supply pressure obtained by the power-matching method is the optimal gas supply pressure, which is 0.3367 MPa for this system.
(2) Comparison of the system-response experiments at 0.6 and 0.3367 MPa shows that operating at the optimal gas supply pressure saves 163.345 J of total energy, which verifies that the system under the optimal gas supply pressure significantly reduces energy loss.
(3) According to the characteristic curves of the adaptive power sources, the matching efficiency of the power adaptive power source is higher than that of the flow and pressure adaptive power sources.
### Acknowledgments
The authors would like to thank Henan Polytechnic University and Beihang University for providing the necessary facilities and machinery to build the prototype of the pneumatic servo system. The authors are sincerely grateful to the reviewers for their valuable review comments, which substantially improved the paper.
### Authors' Contributions
YZ provided guidance for the whole research. KL and HY established the model, designed the experiments, and wrote the initial manuscript. KL and MC assisted with sampling and laboratory analyses. YZ and HY revised the manuscript, performed the experiments, and analyzed the data. All authors read and approved the final manuscript.
### Authors' Information
Yeming Zhang, born in 1979, is currently an associate professor at School of Mechanical and Power Engineering, Henan Polytechnic University, China. He received his PhD degree from Beihang University, China, in 2011. His research interests include complex mechatronics system design and simulation,
intelligent control, reliability and fault diagnosis, pneumatic system energy saving and flow measurement.
Hongwei Yue, born in 1992, is currently a master candidate at School of Mechanical and Power Engineering, Henan Polytechnic University, China.
Ke Li, born in 1991, is currently a PhD candidate at School of Mechanical and Electrical Engineering, Harbin Institute of Technology, China. He received his master's degree in mechatronic engineering from Henan Polytechnic University, China, in 2019.
Maolin Cai, born in 1972, is currently a professor and PhD supervisor at Beihang University, China. He received his PhD degree from Tokyo Institute of Technology, Japan, in 2002. His main research directions include pneumatic and hydraulic fluidics, compressed air energy storage, and pneumatic pipeline systems.
### Funding
Supported by Henan Province Science and Technology Key Project of China (Grant Nos. 202102210081, 202102210082), Fundamental Research Funds for Henan Province Colleges and Universities of China (Grant No. NSFRF140120), and Doctor Foundation of Henan Polytechnic University (Grant No. B2012-101).
### Competing Interests
The authors declare no competing financial interests.
### Author Details
¹School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, China. ²School of Mechanical and Electrical Engineering, Harbin Institute of Technology, Harbin 150001, China. ³School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China.
Received: 6 July 2019; Revised: 22 February 2020; Accepted: 18 March 2020
Published online: 09 April 2020
### References
[1] L Ge, L Quan, X G Zhang, et al. Power matching and energy efficiency improvement of hydraulic excavator driven with speed and displacement variable power source. *Chinese Journal of Mechanical Engineering*, 2019, 32:100, https://doi.org/10.1186/s10033-019-0415-x.
|
| 321 |
+
|
| 322 |
+
[2] T Chen, L Cai, X F Ma, et al. Modeling and matching performance of a hybrid-power gas engine heat pump system with continuously variable transmission. *Building Simulation*, 2019, 12(2): 273-283.
|
| 323 |
+
|
| 324 |
+
[3] G W Jia, W Q Xu, M L Cai, et al. Micron-sized water spray-cooled quasi-isothermal compression for compressed air energy storage. *Experimental Thermal and Fluid Science*, 2018, 96: 470-481.
|
| 325 |
+
|
| 326 |
+
[4] D Shaw, J-J Yu, C Chieh. Design of a hydraulic motor system driven by compressed air. *Energies*, 2013, 6(7): 3149-3166.
|
| 327 |
+
|
| 328 |
+
[5] M Cheng, B Xu, J H Zhang, et al. Pump-based compensation for dynamic improvement of the electrohydraulic flow matching system. *IEEE Transactions on Industrial Electronics*, 2017, 64(4): 2903-2913.
|
| 329 |
+
|
| 330 |
+
[6] Y M Zhang, K Li, G Wang, et al. Nonlinear model establishment and experimental verification of a pneumatic rotary actuator position servo system. *Energies*, 2019, 12(6): 1096.
|
| 331 |
+
|
| 332 |
+
[7] T L Brown, V P Atluri, J P Schmiedeler. A low-cost hybrid drivetrain concept based on compressed air energy storage. *Applied Energy*, 2014, 134: 477-489.
|
| 333 |
+
|
| 334 |
+
[8] Y M Zhang, M L Cai. Overall life cycle comprehensive assessment of pneumatic and electric actuator. *Chinese Journal of Mechanical Engineering*, 2014, 27(3): 584-594.
|
| 335 |
+
|
| 336 |
+
[9] M L Cai. Energy saving technology on pneumatic systems. *Chinese Hydraulics & Pneumatics*, 2013(8): 1-8. (in Chinese)
|
| 337 |
+
|
| 338 |
+
[10] J F Li. Energy saving of pneumatic system. Beijing: Machinery Industry Press, 1997. (in Chinese)
|
| 339 |
+
|
| 340 |
+
[11] R Saidur, N A Rahim, M Hasanuzzaman. A review on compressed-air energy use and energy savings. *Renewable and Sustainable Energy Reviews*, 2010, 14(4): 1135-1153.
|
| 341 |
+
|
| 342 |
+
[12] Y M Zhang, S Wang, S L Wei, et al. Optimization of control method of air compressor group under intermittent large flow condition. *Fluid Machinery*, 2017, 45(7): 7-11.
|
| 343 |
+
---PAGE_BREAK---
|
| 344 |
+
|
| 345 |
+
[13] K Baghestan, S M Rezaei, H A Talebi, et al. An energy-saving nonlinear position control strategy for electro-hydraulic servo systems. *ISA Trans.*, 2015, 59: 268-279.
|
| 346 |
+
[14] S P Yang, H Yu, J G Liu, et al. Research on power matching and energy sav- ing control of power system in hydraulic excavator. *Journal of Mechanical Engineering*, 2014, 50(5): 152-160. (in Chinese)
|
| 347 |
+
[15] M Cheng, B Xu, J H Zhang, et al. Valve-based compensation for controlla- bility improvement of the energy-saving electrohydraulic flow matching system. *Journal of Zhejiang University: Science A*, 2017, 18(6): 430-442.
|
| 348 |
+
[16] B Xu, M Cheng, H Y Yang, et al. A hybrid displacement/pressure control scheme for an electrohydraulic flow matching system. *IEEE/ASME Transactions on Mechatronics*, 2015, 20(6): 2771-2782.
|
| 349 |
+
[17] W N Huang, L Quan, J H Huang, et al. Flow matching with combined control of the pump and the valves for the independent metering swing system of a hydraulic excavator. *Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering*, 2018, 232(10): 1310-1322.
|
| 350 |
+
[18] B Xu, M Cheng, H Y Yang, et al. Electrohydraulic flow matching system with bypass pressure compensation. *Journal of Zhejiang University (Engineering Science)*, 2015, 49(9): 1762-1767. (in Chinese)
|
| 351 |
+
[19] Y Z Kan, D Y Sun, Y Luo, et al. Optimal design of power matching for wheel loader based on power reflux hydraulic transmission system. *Mechanism and Machine Theory*, 2019, 137: 67-82.
|
| 352 |
+
[20] H Y Yang, W Liu, B Xu, et al. Characteristic analysis of electro-hydraulic flow matching control system in hydraulic excavator. *Journal of Mechanical Engineering*, 2012, 48(14): 156-163. (in Chinese)
|
| 353 |
+
[21] X Guo, C Lu, J Li, et al. Analysis of motor-pump system power matching based on genetic algorithm. *EEA - Electrotehnica, Electronica, Automatica*, 2018, 66(1): 93-99.
|
| 354 |
+
|
| 355 |
+
[22] X Wang, H Lv, Q Sun, et al. A proportional resonant control strategy for efficiency improvement in extended range electric vehicles. *Energies*, 2017, 10(2): 204.
|
| 356 |
+
[23] X L Lai, C Guan. A parameter matching method of the parallel hydraulic hybrid excavator optimized with genetic algorithm. *Mathematical Problems in Engineering*, 2013: 1-6.
|
| 357 |
+
[24] X D Yan, L Quan, J Yang. Analysis on steering characteristics of wheel loader based on electric-hydraulic flow matching principle. *Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(18): 71-78. (in Chinese)
|
| 358 |
+
[25] L C Xu, X M Hou. Power matching on loader engine and hydraulic torque converter based on typical operating conditions. *Nongye Gongcheng Xuebao/Transactions of the Chinese Society of Agricultural Engineering*, 2015, 31(7): 80-84. (in Chinese)
|
| 359 |
+
[26] X H Fu, M L Cai, W Y X ang, et al. Optimization study on expansion energy used air-powered vehicle with pneumatic-hydraulic transmission. *Chinese Journal of Mechanical Engineering*, 2018, 31:3, https://doi.org/10.1186/s10033-018-0220-y.
|
| 360 |
+
[27] H B Yuan, H Na, Y Kim. Robust MPC-PIC force control for an electro-hydraulic servo system with pure compressive elastic load. *Control Engineering Practice*, 2018, 79: 170-184.
|
| 361 |
+
[28] Y Shi, M L Cai, W Q Xu, et al. Methods to evaluate and measure power of pneumatic system and their applications. *Chinese Journal of Mechanical Engineering*, 2019, 32:42, https://doi.org/10.1186/s10033-019-0354-6.
|
| 362 |
+
[29] Y Shi, T C Wu, M L Cai, et al. Energy conversion characteristics of a hydro-pneumatic transformer in a sustainable-energy vehicle. *Applied Energy*, 2016, 171: 77-85.
|
| 363 |
+
[30] C C Zhan, X Y Chen. *Hydraulic reliability optimization and intelligent fault diagnosis*. Beijing: Metallurgical Industry Press, 2015. (in Chinese)
|
| 364 |
+
|
| 365 |
+
samples_new/texts_merged/879988.md
ADDED
|
@@ -0,0 +1,435 @@
# The Poisson Process and Associated Probability Distributions on Time Scales

Dylan R. Poulsen
Department of Mathematics
Baylor University
Waco, TX 76798
Email: Dylan_Poulsen@baylor.edu

Michael Z. Spivey
Department of Mathematics and Computer Science
University of Puget Sound
Tacoma, WA 98416
Email: mspivey@pugetsound.edu

Robert J. Marks II
Department of Electrical and Computer Engineering
Baylor University
Waco, TX 76798
Email: Robert_Marks@baylor.edu
**Abstract**—Duals of probability distributions on continuous $\mathbb{R}$ domains exist on discrete $\mathbb{Z}$ domains. The Poisson distribution on $\mathbb{R}$, for example, manifests itself as a binomial distribution on $\mathbb{Z}$. Time scales are a domain generalization in which $\mathbb{R}$ and $\mathbb{Z}$ are special cases. We formulate a generalized Poisson process on an arbitrary time scale and show that the conventional Poisson distribution on $\mathbb{R}$ and binomial distribution on $\mathbb{Z}$ are special cases. The waiting times of the generalized Poisson process are used to derive the Erlang distribution on a time scale and, in particular, the exponential distribution on a time scale. The memoryless property of the exponential distribution on $\mathbb{R}$ is well known. We find conditions on the time scale which preserve the memorylessness property in the generalized case.
## I. INTRODUCTION

The theory of continuous and discrete time stochastic processes is well developed [7], [8]. Stochastic processes on general closed subsets of the real numbers, also known as *time scales*, allow a generalization to other domains [4], [9]. The notion of a stochastic process on time scales naturally leads to questions about probability theory on time scales, which has been developed by Kahraman [5]. We begin by introducing a generalized Poisson process on time scales and show that it reduces to the conventional Poisson process on $\mathbb{R}$ and the binomial distribution on $\mathbb{Z}$. We then use properties of the Poisson process to motivate generalized Erlang and exponential distributions on time scales. Finally, we show that the generalized exponential distribution has an analogue of the memorylessness property under periodicity conditions on the time scale.

## II. FOUNDATIONS

A time scale, $\mathbb{T}$, is any closed subset of the real line. We restrict attention to causal time scales [6], where $0 \in \mathbb{T}$ and $t \geq 0$ for all $t \in \mathbb{T}$. The forward jump operator [2], [10], $\sigma(t)$, is the point immediately to the right of $t$, in the sense that $\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}$. The graininess is the distance between points, defined as $\mu(t) := \sigma(t) - t$. For $\mathbb{R}$, $\sigma(t) = t$ and $\mu(t) = 0$.

The time scale or Hilger derivative of a function $x(t)$ on $\mathbb{T}$ is defined as

$$x^{\Delta}(t) := \frac{x(\sigma(t)) - x(t)}{\mu(t)}. \tag{II.1}$$

On $\mathbb{R}$, this is interpreted in the limiting case and $x^\Delta(t) = \frac{d}{dt}x(t)$. The Hilger integral can be viewed as the antiderivative in the sense that, if $y(t) = x^\Delta(t)$, then for $s, t \in \mathbb{T}$,

$$\int_{\tau=s}^{t} y(\tau)\,\Delta\tau = x(t) - x(s).$$

The solution to the differential equation

$$x^{\Delta}(t) = zx(t), \quad x(0) = 1,$$

is $x(t) = e_z(t, 0)$, where [2], [10]

$$e_z(t, s) := \exp \left( \int_{\tau=s}^{t} \frac{\log(1 + \mu(\tau)z)}{\mu(\tau)}\, \Delta\tau \right).$$

For an introduction to time scales, there is an online tutorial [10] or, for a more thorough treatment, see the text by Bohner and Peterson [2].
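On an isolated (purely discrete) time scale the integral in the definition of $e_z$ reduces to a sum, so $e_z(t, s) = \prod_{\tau \in [s,t)} (1 + \mu(\tau)z)$. The following short numerical illustration is ours, not part of the paper; the helper name `exp_ts` is an assumption for this sketch:

```python
import math

def exp_ts(z, t, s, points):
    """Time-scale exponential e_z(t, s) on an isolated time scale, where
    the defining integral reduces to a sum and hence
    e_z(t, s) = prod over tau in [s, t) of (1 + mu(tau) * z).
    `points` is the sorted list of time-scale points covering [s, t]."""
    val = 1.0
    for a, b in zip(points, points[1:]):
        if s <= a < t:
            mu = b - a              # graininess mu(a) = sigma(a) - a
            val *= 1.0 + mu * z
    return val

# On T = Z the product collapses to (1 + z)^t, the discrete exponential.
assert math.isclose(exp_ts(0.3, 10, 0, list(range(11))), 1.3 ** 10)

# On a nonuniform scale the graininess varies from point to point.
print(exp_ts(0.3, 2.0, 0, [0, 0.5, 1.5, 2.0, 4.0]))
```

On $\mathbb{T} = \mathbb{Z}$ this recovers $(1+z)^t$, consistent with the binomial specialization derived below.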
## III. THE POISSON PROCESS ON TIME SCALES

We begin by presenting the derivation for a particular stochastic process on time scales which mirrors a derivation for the Poisson process on $\mathbb{R}$ [3].

Let $\lambda > 0$. Assume the probability that an event occurs in the interval $[t, \sigma(s))_{\mathbb{T}}$ is given by

$$-(\ominus\lambda)(t)(\sigma(s) - t) + o(s - t),$$

where $(\ominus z)(t) := -z/(1 + \mu(t)z)$ [2], [10]. Hence the probability that no event occurs on the interval is given by

$$1 + (\ominus\lambda)(t)(\sigma(s) - t) + o(s - t).$$

We also assume that at $t = 0$ no events have occurred.
We now define a useful notation. Let $X : \mathbb{T} \to \mathbb{N}^0$ be a counting process [8], where $\mathbb{N}^0$ denotes the nonnegative integers. For $k \in \mathbb{N}^0$, define $p_k(t) = \mathbb{P}[X(t) = k]$, the probability that $k$ events have occurred by time $t \in \mathbb{T}$. Let $t, s \in \mathbb{T}$ with $s > t$. Consider the successive intervals $[0, t)_{\mathbb{T}}$ and $[t, \sigma(s))_{\mathbb{T}}$.

---PAGE_BREAK---

We can therefore set up the system of equations
$$
\begin{align*}
p_0(\sigma(s)) &= p_0(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
&\quad + p_0(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
&\vdots \\
p_k(\sigma(s)) &= p_k(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
&\quad + p_{k-1}(t)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t) \\
&\vdots
\end{align*}
$$
with initial conditions $p_0(0) = 1$ and $p_k(0) = 0$ for $k > 0$. We will let $s \to t$ and solve these equations recursively. Consider the $p_0$ equation. By the definition of the derivative on time scales, we have

$$
p_0^\Delta(t) = \lim_{s \to t} \frac{p_0(\sigma(s)) - p_0(t)}{\sigma(s) - t} = (\ominus\lambda)(t)p_0(t),
$$

which, using the initial value $p_0(0) = 1$, has the solution

$$
p_0(t) = e_{\ominus\lambda}(t, 0). \tag{III.1}
$$
Now consider the $p_1$ equation. Substituting the solution of the $p_0$ equation yields

$$
\begin{align*}
p_1(\sigma(s)) &= p_1(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
&\quad + e_{\ominus\lambda}(t, 0)[-(\ominus\lambda)(t)(\sigma(s) - t)] + o(s - t),
\end{align*}
$$

which, using (II.1), yields

$$
p_1^{\Delta}(t) = (\ominus\lambda)(t)p_1(t) - (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0). \tag{III.2}
$$
Using the variation of constants formula on time scales [2], we arrive at the solution

$$
\begin{align*}
p_1(t) &= -\int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\,\Delta\tau \\
&= -\int_0^t e_\lambda(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\,\Delta\tau \\
&= \lambda \int_0^t e_\lambda(\tau, 0)e_\lambda(0, t)e_{\ominus\lambda}(\tau, 0)\,\Delta\tau \\
&= \lambda \int_0^t e_{\ominus\lambda}(t, 0)\,\Delta\tau \\
&= \lambda t\, e_{\ominus\lambda}(t, 0) \\
&= \frac{\lambda}{1 + \mu(0)\lambda}\, t\, e_{\ominus\lambda}(t, \sigma(0)) \\
&= -(\ominus\lambda)(0)\, t\, e_{\ominus\lambda}(t, \sigma(0)).
\end{align*}
$$
Now consider the $p_2$ equation. Substituting the solution of the $p_1$ equation yields

$$
\begin{align*}
p_2(\sigma(s)) &= p_2(t)[1 + (\ominus\lambda)(t)(\sigma(s) - t)] \\
&\quad - (\ominus\lambda)(0)\,t\,e_{\ominus\lambda}(t, \sigma(0))[-(\ominus\lambda)(t)(\sigma(s) - t)] \\
&\quad + o(s - t),
\end{align*}
$$

which, using (II.1), yields

$$
p_2^{\Delta}(t) = (\ominus\lambda)(t)p_2(t) + (\ominus\lambda)(0)(\ominus\lambda)(t)\,t\,e_{\ominus\lambda}(t, \sigma(0)).
$$
Again, using the variation of constants formula on time scales, we arrive at the solution

$$
\begin{align*}
p_2(t) &= (\ominus\lambda)(0) \int_0^t e_{\ominus\lambda}(t, \sigma(\tau))(\ominus\lambda)(\tau)\,\tau\, e_{\ominus\lambda}(\tau, \sigma(0))\,\Delta\tau \\
&= (\ominus\lambda)(0) \int_0^t e_{\lambda}(\tau, t)(1 + \mu(\tau)\lambda)(\ominus\lambda)(\tau)\,\tau\, e_{\ominus\lambda}(\tau, \sigma(0))\,\Delta\tau \\
&= -\lambda(\ominus\lambda)(0) \int_0^t \tau\, e_{\lambda}(\tau, \sigma(0))\, e_{\lambda}(\sigma(0), t)\, e_{\ominus\lambda}(\tau, \sigma(0))\,\Delta\tau \\
&= -\lambda(\ominus\lambda)(0)\, e_{\ominus\lambda}(t, \sigma(0)) \int_0^t \tau\,\Delta\tau \\
&= -\lambda(\ominus\lambda)(0)\, e_{\ominus\lambda}(t, \sigma(0))\, h_2(t, 0) \\
&= \frac{-\lambda}{1 + \mu(\sigma(0))\lambda}\, (\ominus\lambda)(0)\, e_{\ominus\lambda}(t, \sigma^2(0))\, h_2(t, 0) \\
&= (\ominus\lambda)(\sigma(0))\, (\ominus\lambda)(0)\, h_2(t, 0)\, e_{\ominus\lambda}(t, \sigma^2(0)).
\end{align*}
$$
In general, it can be shown via induction that

$$
p_k(t) = (-1)^k h_k(t, 0)\, e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)),
$$

where $h_k(t, 0)$ is the $k^{\text{th}}$ generalized Taylor monomial [2].

The above derivation motivates the following definition:
**Definition III.1.** Let $\mathbb{T}$ be a time scale. We say $S: \mathbb{T} \rightarrow \mathbb{N}^0$ is a $\mathbb{T}$-Poisson process with rate $\lambda > 0$ if for $t \in \mathbb{T}$ and $k \in \mathbb{N}^0$,

$$
\mathbb{P}[S(t; \lambda) = k] = (-1)^k h_k(t, 0)\, e_{\ominus\lambda}(t, \sigma^k(0)) \prod_{i=0}^{k-1} (\ominus\lambda)(\sigma^i(0)). \tag{III.3}
$$
Each fixed $t \in \mathbb{T}$ generates a discrete distribution of the number of arrivals at $t$. We now examine the specific examples of $\mathbb{R}$, $\mathbb{Z}$, and the harmonic time scale [2].
## A. On $\mathbb{R}$ and $\mathbb{Z}$

Let $S: \mathbb{R} \to \mathbb{N}^0$ be an $\mathbb{R}$-Poisson process. Then $\sigma^i(0) = 0$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = -\lambda$ for all $t \in \mathbb{R}$, and $h_k(t, 0) = \frac{t^k}{k!}$. Thus we have

$$
\mathbb{P}[S(t; \lambda) = k] = \frac{(\lambda t)^k}{k!} e^{-\lambda t},
$$

which we recognize as the Poisson distribution.

Now let $S: \mathbb{Z} \to \mathbb{N}^0$ be a $\mathbb{Z}$-Poisson process. We have $\sigma^i(0) = i$ for all $i \in \mathbb{N}$, $(\ominus\lambda)(t) = \frac{-\lambda}{1+\lambda} =: -p$, and $h_k(t, 0) = \binom{t}{k}$. Thus we have

$$
\mathbb{P}[S(t; \lambda) = k] = \binom{t}{k} p^k (1-p)^{t-k},
$$

which we recognize as the binomial distribution.
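As a numerical sanity check (ours, not part of the paper; the helper name is an assumption), the general formula (III.3) can be assembled piece by piece on $\mathbb{Z}$ and compared against the binomial pmf:

```python
import math

lam = 0.8
p = lam / (1 + lam)  # p = lambda / (1 + lambda)

def z_poisson_pmf(t, k):
    """P[S(t) = k] on T = Z, built from the pieces of (III.3):
    h_k(t, 0) = C(t, k), e_{(-)lam}(t, sigma^k(0)) = (1+lam)^{-(t-k)},
    and each factor ((-)lam)(sigma^i(0)) = -p."""
    h_k = math.comb(t, k)
    e_factor = (1 + lam) ** (-(t - k))
    prod = (-p) ** k
    return (-1) ** k * h_k * e_factor * prod

t = 7
pmf = [z_poisson_pmf(t, k) for k in range(t + 1)]
# term-by-term agreement with the binomial distribution, and normalization
assert all(math.isclose(pmf[k], math.comb(t, k) * p**k * (1 - p)**(t - k))
           for k in range(t + 1))
assert math.isclose(sum(pmf), 1.0)
```

Note that $(1+\lambda)^{-(t-k)} = (1-p)^{t-k}$, so the exponential factor carries the "no event" probability over the remaining $t-k$ unit steps.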
---PAGE_BREAK---

Fig. 1. Probability against number of events and time for the $\mathbb{H}_n$-Poisson process with rate 1.

Fig. 2. A comparison of probability versus number of events near $t = 2$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1, and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process than the $\mathbb{R}$-Poisson process.
## B. On the Harmonic Time Scale

Now let $S: \mathbb{H}_n \to \mathbb{N}^0$ be an $\mathbb{H}_n$-Poisson process with rate $\lambda$, where

$$ t \in \mathbb{H}_n \text{ if and only if } t = \sum_{k=1}^{n} \frac{1}{k} \text{ for some } n \in \mathbb{N}, $$

which we call the harmonic time scale. To help understand later figures and to emphasize that $S$ yields a distinct discrete distribution for each value of $t$, we plot the probability against the number of events and time in Figure 1. The choice of $\mathbb{H}_n$ as the time scale is very informative. Near $t = 0$, where the graininess is large, the behavior is more like that on the integers; away from $t = 0$, where the graininess is small, the behavior is more like that on the real numbers. This behavior is demonstrated in Figures 2–4.
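The distributions behind these figures can be computed exactly on any isolated time scale by stepping the system of equations from Section III: over one step of graininess $\mu$, an event occurs with probability $-(\ominus\lambda)\mu = \mu\lambda/(1+\mu\lambda)$. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def ts_poisson_pmf(lam, points, kmax):
    """Exact pmf of the T-Poisson process at the last point of `points`
    (a sorted list of time-scale points starting at 0). One step with
    graininess mu keeps the count with probability 1/(1 + mu*lam) and
    increments it with probability mu*lam/(1 + mu*lam)."""
    p = [1.0] + [0.0] * kmax
    for a, b in zip(points, points[1:]):
        mu = b - a
        stay = 1.0 / (1.0 + mu * lam)   # 1 + ((-)lam)(a) * mu(a)
        jump = 1.0 - stay               # -((-)lam)(a) * mu(a)
        p = [p[0] * stay] + [p[k] * stay + p[k - 1] * jump
                             for k in range(1, kmax + 1)]
    return p

# Harmonic time scale: 0, 1, 3/2, 11/6, ...; graininess 1/(n+1) -> 0.
pts = [0.0]
for k in range(1, 41):
    pts.append(pts[-1] + 1.0 / k)

pmf = ts_poisson_pmf(1.0, pts, 40)
assert math.isclose(sum(pmf), 1.0)
# p_0(t) agrees with e_{(-)lam}(t, 0) = prod of 1/(1 + mu*lam)
p0 = 1.0
for a, b in zip(pts, pts[1:]):
    p0 *= 1.0 / (1.0 + (b - a) * 1.0)
assert math.isclose(pmf[0], p0)
```

On $\mathbb{T} = \mathbb{Z}$ the same recursion reproduces the binomial distribution, while as the graininess shrinks the per-step event probability approaches $\mu\lambda$, the familiar Poisson-process rate condition on $\mathbb{R}$.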
Fig. 3. A comparison of probability versus number of events near $t = 4$ for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1, and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{R}$-Poisson process than the $\mathbb{Z}$-Poisson process.

Fig. 4. A comparison of probability versus time when the number of events is fixed at 2 for the $\mathbb{H}_n$-Poisson process with rate 1, the $\mathbb{R}$-Poisson process with rate 1, and the $\mathbb{Z}$-Poisson process with rate 1. Note that the $\mathbb{H}_n$-Poisson process behaves more like the $\mathbb{Z}$-Poisson process near $t = 0$ and more like the $\mathbb{R}$-Poisson process away from $t = 0$.
## IV. THE ERLANG DISTRIBUTION ON TIME SCALES

A time-scale generalization of the Erlang distribution can be generated by examining the waiting time until the $n^{\text{th}}$ event in the $\mathbb{T}$-Poisson process. To that end, let $\mathbb{T}$ be a time scale and let $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda$. Let $T_n$ be a random variable which denotes the time until the $n^{\text{th}}$ event. We have

$$
\begin{aligned}
\mathbb{P}[S(t; \lambda) < n] &= \mathbb{P}[T_n > t] \\
&= 1 - \mathbb{P}[T_n \leq t],
\end{aligned}
$$

which implies

$$ 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] = \mathbb{P}[T_n \leq t], $$

which motivates the following definition.
**Definition IV.1.** Let $\mathbb{T}$ be a time scale and let $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda > 0$. We say $F(t; n, \lambda)$ is the $\mathbb{T}$-Erlang cumulative distribution function with shape parameter $n$ and rate $\lambda$ provided

$$F(t; n, \lambda) = 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k].$$

---PAGE_BREAK---

From our derivation, it is clear that the $\mathbb{T}$-Erlang distribution models the time until the $n^{\text{th}}$ event in the $\mathbb{T}$-Poisson process. We would like to know the probability that the $n^{\text{th}}$ event falls in any given subset of $\mathbb{T}$. To this end, we introduce the $\mathbb{T}$-Erlang probability density function in the next definition.
**Definition IV.2.** Let $\mathbb{T}$ be a time scale and let $S: \mathbb{T} \to \mathbb{N}^0$ be a $\mathbb{T}$-Poisson process with rate $\lambda > 0$. We say $f(t; n, \lambda)$ is the $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$ provided

$$f(t; n, \lambda) = -\sum_{k=0}^{n-1} \left[\mathbb{P}[S(t; \lambda) = k]\right]^{\Delta},$$

where the $\Delta$-differentiation is with respect to $t$.
We want to show that $f(t; n, \lambda)$ can rightly be called a probability density with respect to some accumulation function. Thus, we have the following theorem.

**Theorem IV.1.** Let $\mathbb{T}$ be a time scale. Let $F(t; n, \lambda)$ be a $\mathbb{T}$-Erlang cumulative distribution function with shape parameter $n$ and rate $\lambda$ and let $f(t; n, \lambda)$ be a $\mathbb{T}$-Erlang probability density function with shape parameter $n$ and rate $\lambda$. Then

$$\int_0^t f(\tau; n, \lambda)\, \Delta\tau = F(t; n, \lambda) \tag{IV.1}$$

and, in particular,

$$\int_{\mathbb{T}} f(\tau; n, \lambda)\, \Delta\tau = 1. \tag{IV.2}$$
*Proof:* Implicit in the definition of the $\mathbb{T}$-Erlang probability distribution is a $\mathbb{T}$-Poisson process $S: \mathbb{T} \to \mathbb{N}^0$. By the assumption that

$$\mathbb{P}[S(0; \lambda) = k] = \begin{cases} 1 & k = 0, \\ 0 & k > 0, \end{cases}$$

we have

$$\begin{align*}
\int_0^t f(\tau; n, \lambda)\, \Delta\tau &= \int_0^t -\sum_{k=0}^{n-1} \left[\mathbb{P}[S(\tau; \lambda) = k]\right]^{\Delta} \Delta\tau \\
&= -\sum_{k=0}^{n-1} \int_0^t \left[\mathbb{P}[S(\tau; \lambda) = k]\right]^{\Delta} \Delta\tau \\
&= -\sum_{k=0}^{n-1} \mathbb{P}[S(\tau; \lambda) = k]\Big|_0^t \\
&= -\sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] + \sum_{k=0}^{n-1} \mathbb{P}[S(0; \lambda) = k] \\
&= 1 - \sum_{k=0}^{n-1} \mathbb{P}[S(t; \lambda) = k] \\
&= F(t; n, \lambda),
\end{align*}$$

which proves (IV.1). To prove (IV.2), we note that for all $k < n$,

$$\lim_{t \to \infty} \mathbb{P}[S(t; \lambda) = k] = 0,$$

by repeated application of L'Hôpital's rule for time scales to (III.3) [1]. This fact proves (IV.2) by the same argument as the proof of (IV.1). ■
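On $\mathbb{Z}$, where the pmf is binomial, both identities of Theorem IV.1 can be verified numerically by differencing, since the $\Delta$-derivative is a forward difference there. This check is ours, not from the paper, and the helper names are assumptions:

```python
import math

lam = 0.6
p = lam / (1 + lam)
n = 3  # shape parameter

def pmf_S(t, k):
    """P[S(t) = k] on T = Z: binomial with success probability p."""
    return math.comb(t, k) * p**k * (1 - p)**(t - k)

def F(t):
    """Z-Erlang CDF: F(t; n, lam) = 1 - sum_{k<n} P[S(t) = k]."""
    return 1.0 - sum(pmf_S(t, k) for k in range(n))

# On Z, f(t; n, lam) = F(t+1) - F(t): it is nonnegative, and integrating
# (summing) it from 0 recovers F, which tends to 1 (identities IV.1, IV.2).
f = [F(t + 1) - F(t) for t in range(400)]
assert all(x >= -1e-12 for x in f)
assert math.isclose(F(0) + sum(f), 1.0, abs_tol=1e-9)
```

Here $F(0) = 0$ because no events have occurred at $t = 0$, so the telescoping sum of $f$ over a long horizon accumulates to 1.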
We note that the moments of the $\mathbb{T}$-Erlang distribution cannot in general be calculated explicitly without some knowledge of the time scale.
## V. THE EXPONENTIAL DISTRIBUTION ON TIME SCALES

Of particular interest to us is the $\mathbb{T}$-Erlang distribution with shape parameter 1. By the above discussion and equation (III.1), the probability density function of this distribution is given by

$$f(t; 1, \lambda) = -\left[\mathbb{P}[S(t; \lambda) = 0]\right]^{\Delta} = -(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0).$$

**Definition V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-Erlang random variable with shape parameter 1 and rate $\lambda$. Then we say $T$ is a $\mathbb{T}$-exponential random variable with rate $\lambda$.
### A. The Expected Value

The $\mathbb{T}$-exponential distribution gives us the rare opportunity to calculate a moment without any knowledge of the time scale.

**Lemma V.1.** Let $\mathbb{T}$ be a time scale and let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$. Then

$$\mathbb{E}(T) = \frac{1}{\lambda}.$$

---PAGE_BREAK---
*Proof:* Using integration by parts on time scales, we find

$$
\begin{align*}
\mathbb{E}(T) &= \int_0^\infty t\left[-(\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)\right]\Delta t \\
&= -te_{\ominus\lambda}(t, 0)\Big|_0^\infty + \int_0^\infty e_{\ominus\lambda}(\sigma(t), 0)\,\Delta t \\
&= 0 + \int_0^\infty (1 + \mu(t)(\ominus\lambda)(t))e_{\ominus\lambda}(t, 0)\,\Delta t \\
&= \int_0^\infty \frac{1}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\,\Delta t \\
&= -\frac{1}{\lambda}\int_0^\infty \frac{-\lambda}{1 + \mu(t)\lambda}e_{\ominus\lambda}(t, 0)\,\Delta t \\
&= -\frac{1}{\lambda}\int_0^\infty (\ominus\lambda)(t)e_{\ominus\lambda}(t, 0)\,\Delta t \\
&= -\frac{1}{\lambda}e_{\ominus\lambda}(t, 0)\Big|_0^\infty \\
&= -\frac{1}{\lambda}[0 - 1] \\
&= \frac{1}{\lambda},
\end{align*}
$$

which proves our claim. ■
### B. On $\mathbb{R}$ and $\mathbb{Z}$

We note that if $\mathbb{T} = \mathbb{R}$, then we have

$$f(t; 1, \lambda) = \lambda e^{-\lambda t},$$

which we recognize as the exponential distribution. By Lemma V.1, the mean of the exponential distribution is $1/\lambda$, which is well known.

Now if $\mathbb{T} = \mathbb{Z}$, then we have

$$f(t; 1, \lambda) = \frac{\lambda}{1+\lambda} \left(1 - \frac{\lambda}{1+\lambda}\right)^t = p(1-p)^t,$$

where $p := \frac{\lambda}{1+\lambda}$. We recognize the above as the geometric distribution. By Lemma V.1, the mean of the geometric distribution is $1/\lambda = (1-p)/p$.
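Lemma V.1 can be checked directly in the $\mathbb{Z}$ case by truncating the series for the mean. A quick numerical sketch (ours, not part of the paper):

```python
import math

lam = 0.5
p = lam / (1 + lam)

# Z-exponential density: f(t; 1, lam) = p(1-p)^t on t = 0, 1, 2, ...
# The mean E(T) = sum over t of t * p * (1-p)^t; truncating the series
# at a large horizon should recover 1/lam = (1-p)/p.
mean = sum(t * p * (1 - p) ** t for t in range(2000))
assert math.isclose(mean, 1 / lam, rel_tol=1e-9)
assert math.isclose(mean, (1 - p) / p, rel_tol=1e-9)
```

The truncation error is geometric in the horizon, so 2000 terms is far more than enough for this tolerance.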
### C. The $\omega$-Memorylessness Property

Both the geometric and exponential distributions are completely characterized by the memorylessness property [8]. We recall that the memorylessness property on $\mathbb{R}$ is the property that if $T$ is a continuous random variable, then for all $t, \tau \in \mathbb{R}$,

$$\mathbb{P}[T > t + \tau \mid T > t] = \mathbb{P}[T > \tau],$$

and that the memorylessness property on $\mathbb{Z}$ is the property that if $T$ is a discrete random variable, then for all $t, \tau \in \mathbb{Z}$,

$$\mathbb{P}[T > t + \tau \mid T > t] = \mathbb{P}[T > \tau].$$

We would like to find conditions on the time scale $\mathbb{T}$ under which the $\mathbb{T}$-exponential distribution has this property. Suppose $\mathbb{T}$ is $\omega$-periodic, that is, if $t \in \mathbb{T}$ then $t + \omega \in \mathbb{T}$. Then we can define a property much like the memorylessness property.

**Definition V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale. We say a probability distribution on $\mathbb{T}$ has the $\omega$-memorylessness property provided for all $t \in \mathbb{T}$,

$$\mathbb{P}[T > t + \omega \mid T > t] = \mathbb{P}[T > \omega].$$

We note that this definition generalizes the memorylessness property on $\mathbb{R}$ and $\mathbb{Z}$, since $\mathbb{R}$ and $\mathbb{Z}$ are $\omega$-periodic for any $\omega$ in $\mathbb{R}$ and $\mathbb{Z}$, respectively.
Let $\mathbb{T}$ be $\omega$-periodic and let $T$ be a $\mathbb{T}$-exponential random variable. Then we claim the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. To show this claim, we first prove two lemmas.

**Lemma V.2.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then for $t, t_0 \in \mathbb{T}$, $e_{\ominus\lambda}(t+\omega, t_0) = e_{\ominus\lambda}(t, t_0 - \omega)$.

*Proof:* By the definition of the time scales exponential function,

$$
\begin{align*}
e_{\ominus\lambda}(t+\omega, t_0) &= \exp\left(\int_{t_0}^{t+\omega} \frac{\text{Log}(1+(\ominus\lambda)(s)\mu(s))}{\mu(s)}\Delta s\right) \\
&= \exp\left(\int_{t_0}^{t+\omega} \frac{\text{Log}\left(1+\frac{-\lambda\mu(s)}{1+\lambda\mu(s)}\right)}{\mu(s)}\Delta s\right) \\
&= \exp\left(\int_{t_0-\omega}^{t} \frac{\text{Log}\left(1+\frac{-\lambda\mu(\tau+\omega)}{1+\lambda\mu(\tau+\omega)}\right)}{\mu(\tau+\omega)}\Delta\tau\right) \\
&= \exp\left(\int_{t_0-\omega}^{t} \frac{\text{Log}\left(1+\frac{-\lambda\mu(\tau)}{1+\lambda\mu(\tau)}\right)}{\mu(\tau)}\Delta\tau\right) \\
&= \exp\left(\int_{t_0-\omega}^{t} \frac{\text{Log}(1+(\ominus\lambda)(\tau)\mu(\tau))}{\mu(\tau)}\Delta\tau\right) \\
&= e_{\ominus\lambda}(t, t_0 - \omega),
\end{align*}
$$

where we use the change of variables $\tau = s-\omega$ and the fact that for $\omega$-periodic time scales $\mu(t+\omega) = \mu(t)$ for all $t \in \mathbb{T}$.

■
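For $\mathbb{T} = \mathbb{Z}$ (where $\mu \equiv 1$) the exponential function reduces to $e_{\ominus\lambda}(t, t_0) = \left(\tfrac{1}{1+\lambda}\right)^{t-t_0}$, and the shift identity of Lemma V.2 can be verified directly (an illustration of ours, not from the paper):

```python
# On T = Z with graininess mu = 1, e_{circ-minus-lambda}(t, t0) = (1/(1+lam))^(t-t0),
# so Lemma V.2's identity e(t+omega, t0) = e(t, t0-omega) holds because both sides
# have the same exponent t + omega - t0.
def e_circ_minus(lam, t, t0):
    """Time-scales exponential e_{ominus lambda}(t, t0) on T = Z."""
    return (1.0 / (1.0 + lam)) ** (t - t0)

lam, omega = 0.5, 3
for t in range(-5, 6):
    for t0 in range(-5, 6):
        lhs = e_circ_minus(lam, t + omega, t0)
        rhs = e_circ_minus(lam, t, t0 - omega)
        assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs))
```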
**Lemma V.3.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and $\lambda > 0$. Then for all $t \in \mathbb{T}$, $e_{\ominus\lambda}^{\Delta}(t+\omega, t) = 0$.

*Proof:* By the product rule on time scales and Lemma V.2,

$$
\begin{align*}
e_{\ominus\lambda}^{\Delta}(t+\omega,t) &= (e_{\ominus\lambda}(t+\omega,t_0)\,e_{\ominus\lambda}(t_0,t))^{\Delta} \\
&= (e_{\ominus\lambda}(t,t_0-\omega)\,e_{\ominus\lambda}(t_0,t))^{\Delta} \\
&= (e_{\ominus\lambda}(t,t_0-\omega)\,e_{\lambda}(t,t_0))^{\Delta} \\
&= e_{\ominus\lambda}(\sigma(t), t_0-\omega)\,\lambda\, e_{\lambda}(t,t_0) + (\ominus\lambda)(t)\,e_{\ominus\lambda}(t,t_0-\omega)\,e_{\lambda}(t,t_0) \\
&= \lambda(1+(\ominus\lambda)(t)\mu(t))\,e_{\ominus\lambda}(t,t_0-\omega)\,e_{\lambda}(t,t_0) + (\ominus\lambda)(t)\,e_{\ominus\lambda}(t,t_0-\omega)\,e_{\lambda}(t,t_0) \\
&= [-(\ominus\lambda)(t) + (\ominus\lambda)(t)]\,e_{\ominus\lambda}(t,t_0-\omega)\,e_{\lambda}(t,t_0) \\
&= 0.
\end{align*}
$$

■
---PAGE_BREAK---

The above lemmas allow us to prove the following result.

**Theorem V.4.** Let $\mathbb{T}$ be an $\omega$-periodic time scale and let $\lambda > 0$. Then the $\mathbb{T}$-exponential distribution with rate $\lambda$ has the $\omega$-memorylessness property.

*Proof:* Let $T$ be a $\mathbb{T}$-exponential random variable with rate $\lambda > 0$. By Lemma V.2 and Lemma V.3,

$$
\begin{aligned}
P(T > t + \omega \,|\, T > t) &= \frac{P(T > t + \omega)}{P(T > t)} \\
&= \frac{\int_{t+\omega}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau}{\int_{t}^{\infty} -(\ominus\lambda)(\tau)e_{\ominus\lambda}(\tau, 0)\Delta\tau} \\
&= \frac{e_{\ominus\lambda}(t+\omega, 0)}{e_{\ominus\lambda}(t, 0)} \\
&= e_{\ominus\lambda}(t+\omega, t) \\
&= e_{\ominus\lambda}(\omega, 0) \\
&= P(T > \omega),
\end{aligned}
$$

since $e_{\ominus\lambda}(t+\omega, t)$ is independent of $t$ by Lemma V.3 and hence equals its value $e_{\ominus\lambda}(\omega, 0)$ at $t = 0$. Thus the $\mathbb{T}$-exponential distribution has the $\omega$-memorylessness property. ■
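For $\mathbb{T} = \mathbb{R}$ the theorem reduces to the familiar memorylessness of the exponential distribution, which a short simulation makes concrete (our check, not part of the proof):

```python
# Monte Carlo check of P(T > t + w | T > t) = P(T > w) for the classical
# exponential distribution (the T = R case of Theorem V.4).
import random

random.seed(1)
lam = 1.5
draws = [random.expovariate(lam) for _ in range(400_000)]

def surv(x):
    """Empirical survival function P(T > x)."""
    return sum(1 for d in draws if d > x) / len(draws)

t, w = 0.7, 1.2
cond = surv(t + w) / surv(t)   # empirical P(T > t + w | T > t)
assert abs(cond - surv(w)) < 0.02
```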
## REFERENCES

[1] M. Bohner and A. Peterson, *Advances in Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2003.

[2] M. Bohner and A. Peterson, *Dynamic Equations on Time Scales*, Birkhäuser, Boston, 2001.

[3] W. Ching and M. Ng, *Markov Chains: Models, Algorithms and Applications*, Springer, New York, 2006.

[4] J. M. Davis, I. A. Gravagne and R. J. Marks II, "Bilateral Laplace Transforms on Time Scales: Convergence, Convolution, and the Characterization of Stationary Stochastic Time Series," Circuits, Systems, and Signal Processing, Vol. 29, No. 6 (2010), p. 1141. DOI 10.1007/s00034-010-9196-2.

[5] S. Kahraman, "Probability Theory Applications on Time Scales," M.S. Thesis, İzmir Institute of Technology, 2008.

[6] R. J. Marks II, I. A. Gravagne and J. M. Davis, "A Generalized Fourier Transform and Convolution on Time Scales," Journal of Mathematical Analysis and Applications, Vol. 340, No. 2 (2008), pp. 901–919.

[7] R. J. Marks II, *Handbook of Fourier Analysis and Its Applications*, Oxford University Press, 2009.

[8] A. Papoulis, *Probability, Random Variables and Stochastic Processes*, 3rd Edition, McGraw-Hill, New York, 1991.

[9] S. Sanyal, "Stochastic Dynamic Equations," Ph.D. Thesis, Missouri University of Science and Technology, 2008.

[10] Baylor Time Scales Group, http://timescales.org/
---PAGE_BREAK---

# VALIDATION OF THE GAMMA SUBMERSION CALCULATION OF THE REMOTE POWER PLANT MONITORING SYSTEM OF THE FEDERAL STATE OF BADEN-WÜRTTEMBERG

Janis Lapins¹, Wolfgang Bernnat², Walter Scheuermann²

¹Institute of Nuclear Technology and Energy Systems, Pfaffenwaldring 31, University of Stuttgart, Stuttgart, Germany

²KE-Technologie GmbH, Stuttgart, Germany

**Abstract:** The radioactive dispersion model used in the framework of the remote nuclear power plant monitoring system of the federal state of Baden-Württemberg applies the method of adjoint fluxes to calculate the sky shine from gamma rays, taking into account the gamma energy spectrum of the released nuclides. The spectrum is represented by 30 energy groups. A procedure has been developed to calculate the dose distribution on the ground in case of an accident with a release of radioactivity. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using the Los Alamos code MCNP6.

**Key words:** adjoint method, MCNP, validation, gamma submersion
## THE MODULAR DISPERSION TOOL “ABR”

The federal state of Baden-Württemberg, Germany, operates a remote power plant monitoring system that has online access to the main safety-relevant parameters of the power plants as well as to the meteorological data provided by the German weather service (DWD). The data are sent to a server system that is operated for the Ministry of Environment of the federal state. The radioactive dispersion tool “ABR” is an integral part of this system and is used to calculate the radiological consequences in case of an accident, or to prepare and perform emergency exercises for civil protection. For a dispersion calculation, the ABR has to account for the following:

* Interpolation of forecasted or measured precipitation to the grid (precipitation module)

* Calculation of the wind field on the grid from forecast or measurement (terrain-following wind field module)

* Release of the amount of radioactivity to the environment, accounting for the decay of nuclides between shutdown of the reactor and the time of emission (release module)

* Transport of radioactivity with the wind, including washout and fallout due to rain or deposition, respectively (Lagrange particle transport module)

* Sky shine to a detector 1 m above the ground (sky shine module)

* Calculation of the doses from the various exposure paths (gamma submersion, beta submersion, inhalation and ground shine) for 25 organs and one effective dose (dose module)

All of this is performed by the different modules of the programme system mentioned above. However, this paper will focus on the validation of the sky shine module in conjunction with the dose module, which calculates the gamma submersion by the method of adjoint fluxes [1]. For validation, the reference code system MCNP6 [2] is used, and results produced with ABR are benchmarked against it.
## METHOD OF CALCULATION

The dose calculation applies the method of adjoint fluxes to compute the gamma cloud radiation, taking into account the gamma ray energy spectrum of the released nuclides in 30 energy groups. This procedure enables an efficient algorithm to calculate dose rates or integrated doses in case of an accident with a release of radioactivity. The system is part of the emergency preparedness and response and is in online operational service. The adjoint fluxes were produced from results obtained with MCNP6 [2]. For validation purposes, the results produced with the adjoint method in the dispersion code ABR are compared to results produced by forward calculations with Monte Carlo methods using MCNP6.

---PAGE_BREAK---

The computational procedure comprises the following steps: from a point or a volume source, respectively, photons are started isotropically, either with the average energies of the 30 energy groups or with the distinctive gamma spectrum of single nuclides. Travelling through space, these photons collide with atoms present in the air or the ground and are scattered until they reach the detector. With the help of point detectors, the flux density spectrum can be estimated, and, by making use of a dose-flux relation, the resulting gamma submersion dose on the ground can be determined.

The backward method in the ABR uses the adjoint fluxes to evaluate the influence of a certain nuclide (spectrum) in the cloud at a certain distance from a detector point on the ground. To obtain these adjoint fluxes, a large number of calculations has been performed to determine the adjoint flux for all energy groups and distances (radii). The radii for which the fluxes were produced serve as support points; values for radii between support points are interpolated. Depending on the energy of the group under consideration, different exponential fitting functions account for both energy and distance. The energy deposited within human tissue is accounted for by age classes and by using the dose factors from the German Radiation Protection Ordinance, which provides dose factors for organs and the effective dose [5].
## SOLUTION OF THE TRANSPORT EQUATION

The transport equation in operator notation is

$$M\Phi = Q \quad (1)$$

with

$$M = \vec{\Omega}\,\mathrm{grad} + \Sigma_T(E) - \int_{\vec{\Omega}'}\int_{E'} \Sigma_s(\vec{\Omega}' \rightarrow \vec{\Omega}, E' \rightarrow E)\,dE'\,d\Omega' \quad (2)$$

In equation (1), $Q(\vec{r}, \vec{\Omega}, E)$ represents the source vector and $\Phi(\vec{r}, \vec{\Omega}, E)$ the flux density vector, both of which depend on the location $\vec{r}$, the direction $\vec{\Omega}$, and the energy $E$. In equation (2), the first term represents the leakage, $\Sigma_T(E)$ represents the collisions, and the integral represents the scattering from any direction $\vec{\Omega}'$ and energy $E'$ into the direction $\vec{\Omega}$ and energy $E$ of interest.

After solution of the transport equation, reaction rates, e.g. dose rates $\bar{D}$, can be calculated with the help of a response function $R(\vec{r}, E)$ such that the condition

$$\bar{D} = \langle \Phi R \rangle = \int_V \int_E \Phi(\vec{r}, E) R(\vec{r}, E)\, dE\, dr \quad (3)$$

is valid. The adjoint equation to equation (1) is

$$M^+ \Phi^+ = R \quad (4)$$

The adjoint operator has to be defined in such a way that the condition

$$\langle \Phi^+ M \Phi \rangle = \langle \Phi M^+ \Phi^+ \rangle \quad (5)$$

holds. If this is the case, the following is also valid:

$$\bar{D} = \langle \Phi^+ M \Phi \rangle = \langle \Phi^+ Q \rangle = \langle \Phi M^+ \Phi^+ \rangle = \langle \Phi R \rangle = \bar{D} \quad (6)$$

That is, instead of eq. (1), the adjoint equation (4) can be solved, and the reaction rates are then determined via $\bar{D} = \langle \Phi^+ Q \rangle$ in eq. (6). The solution of the adjoint transport equation provides a relation between the photon emission of a certain energy or energy range in a regarded point or volume and the dose at a computational point.
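In a discretized setting, the adjoint trick of eqs. (4)–(6) amounts to transposition: $\langle \Phi^+, M\Phi \rangle = \langle M^{\mathsf T}\Phi^+, \Phi \rangle$, so solving $M^{\mathsf T}\Phi^+ = R$ once yields the dose $\langle \Phi^+, Q \rangle$ for any source $Q$ without re-solving $M\Phi = Q$. A minimal sketch of ours (with an arbitrary non-singular $2{\times}2$ $M$, not the ABR implementation):

```python
# Forward route: solve M Phi = Q, then D = <R, Phi>.
# Adjoint route: solve M^T Phi+ = R, then D = <Phi+, Q>. Both agree.
def solve2(a, b, c, d, r1, r2):
    """Solve [[a, b], [c, d]] x = (r1, r2) by Cramer's rule."""
    det = a * d - b * c
    return ((r1 * d - b * r2) / det, (a * r2 - r1 * c) / det)

M = ((4.0, -1.0), (-2.0, 5.0))   # stand-in transport operator
Q = (1.0, 3.0)                   # source vector
R = (0.5, 2.0)                   # response function

# forward calculation
phi = solve2(M[0][0], M[0][1], M[1][0], M[1][1], Q[0], Q[1])
D_forward = R[0] * phi[0] + R[1] * phi[1]

# adjoint calculation (note the transposed coefficients)
phi_adj = solve2(M[0][0], M[1][0], M[0][1], M[1][1], R[0], R[1])
D_adjoint = phi_adj[0] * Q[0] + phi_adj[1] * Q[1]

assert abs(D_forward - D_adjoint) < 1e-12
```

The payoff of the adjoint route is exactly the one exploited by the ABR: with a fixed detector response $R$, the adjoint solution can be tabulated once and reused for arbitrary source distributions.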
## CALCULATION OF ADJOINT FLUXES WITH MCNP

The calculation of the gamma submersion caused by radioactive nuclides in the cloud is possible if the spatial and energy distribution of the gamma sources relative to certain computational points on the ground is known, together with the composition of air and soil. The computation necessitates the solution of the photon transport equation with respect to the energy dependence of the possible reactions of photons with atoms in air or soil (photoelectric effect, Compton effect, pair production, etc.). The solution of the transport equation yields photon spectra at the computational points that enable dose calculations. The relevant dose/flux relations are defined by the ICRP [3]; for photons, ICRP 74 can be applied. The dose/flux relation is presented in **Figure 2**. With Monte Carlo codes and their continuous energy dependence of the cross sections, a direct solution of the adjoint transport equation is not possible. Nevertheless, these codes can be used to estimate the contribution of a source point/volume to the dose at a computational point, see **Figure 1**. To do this, a sufficiently

---PAGE_BREAK---

large number of photon trajectories has to be simulated from the source point/volume and their contribution to the dose calculated. To compute the dose rate at a computational point of interest, the relevant contributions from all source points/volumes of the whole emission field have to be summed up, such that the dose at the computational point $(x, y, z)$ can be estimated as

$$
D(x, y, z) = \sum_q \sum_g \Phi_g^+ (r_q, z_q - z) \cdot Q_g(x_q, y_q, z_q) \cdot V_q \quad (7)
$$

with

$$
r_q = \sqrt{(x_q - x)^2 + (y_q - y)^2} \quad (8)
$$

where $\Phi_g^+$ is the adjoint flux depending on the radius and the height, $Q_g$ the specific source concentration, and $V_q$ the volume that contains the concentration.

**Figure 1.** Source point/volume $Q(r_q, z_q)$ and computational point of interest $P(x, y, z)$ in dose calculations

The index $q$ corresponds to the source; the index $g$ corresponds to the energy group or the gamma line of the photon emission energy of the source. The coordinates $x, y, z$ correspond to the computational point of interest. The coordinates $x_q, y_q$ (resp. $r_q$), $z_q$ correspond to the centre point of the source volume $V_q$, see Figure 1.
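The superposition of eqs. (7) and (8) can be sketched in a few lines of Python. The adjoint flux function below is a made-up stand-in for the tabulated MCNP6 results, and the source layout is hypothetical; only the summation structure mirrors eq. (7):

```python
import math

def phi_adjoint(g, r, dz):
    """Hypothetical adjoint flux for energy group g at radius r, height offset dz.
    (Illustrative only; the ABR interpolates tabulated MCNP6 values instead.)"""
    dist = math.hypot(r, dz) + 1.0
    return math.exp(-0.05 * (g + 1) * dist) / dist**2

def dose(x, y, z, sources):
    """Eq. (7): sources is a list of (xq, yq, zq, Vq, {g: Qg})."""
    total = 0.0
    for xq, yq, zq, vq, q_groups in sources:
        r_q = math.hypot(xq - x, yq - y)          # eq. (8)
        for g, q_g in q_groups.items():
            total += phi_adjoint(g, r_q, zq - z) * q_g * vq
    return total

# one 200 m x 200 m x 40 m voxel at 140 m height, two energy groups
sources = [(100.0, 0.0, 140.0, 200 * 200 * 40, {3: 2.0e4, 7: 5.0e3})]
d = dose(0.0, 0.0, 1.0, sources)
assert d > 0.0
```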
**Figure 2.** Dose/flux relation for gamma energies from 0.01 – 10 MeV in 0.07 cm depth of the body according to ICRP 74 [3]

## TWO SCENARIOS FOR DOSE COMPARISONS: A HOMOGENEOUS AND A NON-HOMOGENEOUS RADIOACTIVE CLOUD OF REFERENCE NUCLIDES

For the comparison of the gamma submersion dose rates, two scenarios have been defined. The base scenario assumes a homogeneous concentration distribution of the three reference nuclides Xe-133, Cs-137 and I-131 with a flat topography, both for the ABR and for MCNP. The dispersion module of the ABR is not used; instead, the concentrations are directly input into the sky shine and dose modules of the ABR. The computational domain and the boundary conditions for this scenario are presented in Table 1. A sketch of the scenario is shown in Figure 5.

An advanced scenario with a 3-D cloud is also presented. For this scenario, a realistic concentration distribution has been generated with the ABR, with a release height of 150 metres, a wind speed of 4 m/s at 10 m height, wind speed increasing with height, and diffusion category D (neutral conditions). The released activity is transported with the wind; after one time step the doses are compared. Since MCNP cannot simulate the transport of radioactive particles with the wind, the distribution of the concentration of the regarded isotopes is imported into MCNP via an interface, and the resulting dose calculations are compared. The radioactive cloud together with the wind speed is presented in

---PAGE_BREAK---

Figure 6. For this paper, the shape of the cloud is regarded as given, since the dose rates, and not the cloud shape, are subject to comparison. The boundary conditions and general assumptions for this case are given in Table 2.
The gamma lines of the reference nuclides are shown in **Figure 3** and **Figure 4** [4]. These gamma emissions are accounted for in the 30-group spectrum of the ABR with their respective intensities. For the MCNP calculation, the gamma energies and their respective intensities are input directly.

**Table 1.** Simulation set-up for the homogeneous cloud from 120 – 160 m

<table><thead><tr><th>Constant source</th><th>ABR</th><th>MCNP6</th></tr></thead><tbody><tr><td>Computational area (x, y, z)</td><td>20 km x 20 km x 1 km</td><td>20 km x 20 km x 1 km</td></tr><tr><td>Mesh number (x, y, z)</td><td>100 x 100 x 25</td><td>-</td></tr><tr><td>Mesh size in x, y, z - direction</td><td>200 m, 200 m, 40 m</td><td>-</td></tr><tr><td>Cloud height</td><td>120 – 160 m</td><td>120 – 160 m</td></tr><tr><td colspan="3">Activity in cloud [Bqm<sup>-3</sup>]</td></tr><tr><td>Cs-137</td><td>6.0E+04</td><td>6.0E+04</td></tr><tr><td>Xe-133</td><td>2.0E+10</td><td>2.0E+10</td></tr><tr><td>I-131</td><td>1.0E+06</td><td>1.0E+06</td></tr></tbody></table>

**Table 2.** Simulation set-up for a non-homogeneous cloud

<table><thead><tr><th>Realistic source</th><th>ABR</th><th>MCNP6</th></tr></thead><tbody><tr><td>Computational area (x, y, z)</td><td>20 km x 20 km x 1 km</td><td>20 km x 20 km x 1 km</td></tr><tr><td>Mesh number (x, y, z)</td><td>100 x 100 x 25</td><td>100 x 100 x 25</td></tr><tr><td>Mesh size in x, y, z - direction</td><td>200 m, 200 m, 40 m</td><td>200 m, 200 m, 40 m</td></tr><tr><td>Emission height</td><td>150 m</td><td>150 m</td></tr><tr><td>Total activity released [Bq]</td><td></td><td>Activity imported via interface</td></tr><tr><td>Cs-137</td><td>6.0E+09</td><td>6.0E+09</td></tr><tr><td>Xe-133</td><td>2.0E+17</td><td>2.0E+17</td></tr><tr><td>I-131</td><td>1.0E+10</td><td>1.0E+10</td></tr><tr><td>Wind speed in 10 m height</td><td>4 m/s</td><td>-</td></tr><tr><td>Diffusion category</td><td>D</td><td>-</td></tr><tr><td>Emission duration</td><td>1 hour</td><td>-</td></tr></tbody></table>

**Figure 3.** Gamma lines and intensities of Cs-137 and Xe-133 (NUDAT 2.6) [4]

**Figure 4.** Gamma lines of I-131 (NUDAT 2.6) [4]

---PAGE_BREAK---

**Figure 5.** Sketch of the scenario with a homogeneous emission layer and exemplary paths from the cloud to the detector (direct, indirect via air and ground reflection, or both)

**Figure 6.** Non-homogeneous distribution of aerosols after 1 hour with a wind speed of 4 m/s at a height of 10 m, simulated with the ABR. The concentration is exported to MCNP

## RESULTS OF COMPARISON

The results of the comparison are presented in the tables below. One can see that the results are in good agreement for all three reference nuclides.

**Table 3.** Results for the base case with the homogeneous cloud

<table><thead><tr><td>Nuclide</td><td>MCNP6 [Sv/h]</td><td>ABR [Sv/h]</td><td>Ratio ABR/MCNP6</td></tr></thead><tbody><tr><td>Cs-137</td><td>9.31E-07</td><td>8.33E-07</td><td>0.89</td></tr><tr><td>Xe-133</td><td>1.36E-02</td><td>1.30E-02</td><td>0.96</td></tr><tr><td>I-131</td><td>1.01E-05</td><td>1.03E-05</td><td>1.02</td></tr></tbody></table>

**Table 4.** Results for the advanced case with the non-homogeneous cloud

<table><thead><tr><td>Nuclide</td><td>MCNP6 [Sv/h]</td><td>ABR [Sv/h]</td><td>Ratio ABR/MCNP6</td></tr></thead><tbody><tr><td>Cs-137</td><td>1.42E-10</td><td>1.36E-10</td><td>0.96</td></tr><tr><td>Xe-133</td><td>4.49E-04</td><td>4.9E-04</td><td>1.09</td></tr><tr><td>I-131</td><td>1.49E-10</td><td>1.57E-10</td><td>1.05</td></tr></tbody></table>
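The ratio columns of Table 3 can be reproduced directly from the dose-rate columns (our arithmetic cross-check only):

```python
# Cross-check of the ratio column in Table 3: ratio = ABR / MCNP6,
# rounded to two decimals. Values copied from the table above.
table3 = {"Cs-137": (9.31e-07, 8.33e-07, 0.89),
          "Xe-133": (1.36e-02, 1.30e-02, 0.96),
          "I-131":  (1.01e-05, 1.03e-05, 1.02)}

for nuclide, (mcnp, abr, reported) in table3.items():
    assert abs(round(abr / mcnp, 2) - reported) < 1e-9

# the largest deviation in the base case is Cs-137 (ratio 0.89, i.e. about -11 %)
worst = min(abr / mcnp for mcnp, abr, _ in table3.values())
assert worst == 8.33e-07 / 9.31e-07
```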
## CONCLUSION

The comparison of the gamma submersion dose rates shows good agreement between the ABR and MCNP6 for the cases analysed. For the base case, the maximum deviation over the three reference nuclides occurs for the dose rate of Cs-137 and amounts to -11%.

For the non-homogeneous distribution of the concentration of the reference nuclides, the agreement is better than 10%. Keeping in mind that for a real dispersion calculation there is a multitude of uncertainties, e.g. the emitted nuclide vector, the meteorological prediction, and the transport of the cloud, the agreement presented here for the comparison of the dose rates for each reference nuclide can be regarded as excellent.
## REFERENCES

[1] Sohn, G. Pfister, W. Bernnat, G. Hehn: Dose, ein neuer Dosismodul zur Berechnung der effektiven Dosis und von 21 Organdosen für die Dosispfade Submersion, Inhalation und Bodenstrahlung (German: Dose, a new dose module for the calculation of the effective dose and of 21 organ doses for the exposure paths submersion, inhalation and ground radiation), IKE 6 UM 3, Nov. 1994.

[2] D. B. Pelowitz: MCNP6™ User's Manual, Version 1.0, LA-CP-13-00634, Rev. 0 (2013).

[3] ICRP: Conversion Coefficients for Use in Radiological Protection against External Radiation. ICRP Publication 74, Ann. ICRP 26 (3-4), 1996.

[4] NUDAT 2.6, National Nuclear Data Centre, Brookhaven National Laboratory.

[5] Entwurf zur AVV zu §47 Strahlenschutzverordnung, Anhang 3 (German: Draft General Administrative Regulation for §47 of the German Radiation Protection Ordinance, Appendix 3), 2005.