Add files using upload-large-folder tool
- .gitattributes +46 -0
- 5tAyT4oBgHgl3EQfpfgu/content/tmp_files/2301.00525v1.pdf.txt +1212 -0
- 5tAyT4oBgHgl3EQfpfgu/content/tmp_files/load_file.txt +0 -0
- 6tAzT4oBgHgl3EQf-P5_/content/2301.01931v1.pdf +3 -0
- 6tAzT4oBgHgl3EQf-P5_/vector_store/index.pkl +3 -0
- 6tAzT4oBgHgl3EQfvP3q/content/tmp_files/2301.01705v1.pdf.txt +885 -0
- 6tAzT4oBgHgl3EQfvP3q/content/tmp_files/load_file.txt +0 -0
- 6tE2T4oBgHgl3EQfPAa1/content/tmp_files/2301.03755v1.pdf.txt +786 -0
- 6tE2T4oBgHgl3EQfPAa1/content/tmp_files/load_file.txt +0 -0
- 7dE1T4oBgHgl3EQfnQQ6/content/2301.03306v1.pdf +3 -0
- 7dE1T4oBgHgl3EQfnQQ6/vector_store/index.faiss +3 -0
- 7dE1T4oBgHgl3EQfnQQ6/vector_store/index.pkl +3 -0
- 8dFST4oBgHgl3EQfaDjo/vector_store/index.faiss +3 -0
- 8tE5T4oBgHgl3EQfQw5Q/content/2301.05515v1.pdf +3 -0
- 8tE5T4oBgHgl3EQfQw5Q/vector_store/index.faiss +3 -0
- 8tE5T4oBgHgl3EQfQw5Q/vector_store/index.pkl +3 -0
- 9NE1T4oBgHgl3EQfUQM_/vector_store/index.faiss +3 -0
- B9AzT4oBgHgl3EQfTfz1/content/tmp_files/2301.01252v1.pdf.txt +2038 -0
- B9AzT4oBgHgl3EQfTfz1/content/tmp_files/load_file.txt +0 -0
- BtE4T4oBgHgl3EQf5Q6j/content/2301.05322v1.pdf +3 -0
- BtE4T4oBgHgl3EQf5Q6j/vector_store/index.faiss +3 -0
- BtE4T4oBgHgl3EQf5Q6j/vector_store/index.pkl +3 -0
- CtFQT4oBgHgl3EQfOzYB/content/tmp_files/2301.13276v1.pdf.txt +330 -0
- CtFQT4oBgHgl3EQfOzYB/content/tmp_files/load_file.txt +190 -0
- DtE3T4oBgHgl3EQfVAog/content/2301.04455v1.pdf +3 -0
- DtE3T4oBgHgl3EQfVAog/vector_store/index.faiss +3 -0
- DtE3T4oBgHgl3EQfVAog/vector_store/index.pkl +3 -0
- F9AyT4oBgHgl3EQfrPnq/content/tmp_files/2301.00559v1.pdf.txt +943 -0
- F9AyT4oBgHgl3EQfrPnq/content/tmp_files/load_file.txt +0 -0
- FdE0T4oBgHgl3EQfhAGj/content/tmp_files/2301.02426v1.pdf.txt +1923 -0
- FdE0T4oBgHgl3EQfhAGj/content/tmp_files/load_file.txt +0 -0
- J9E1T4oBgHgl3EQfGQOR/content/2301.02912v1.pdf +3 -0
- J9E1T4oBgHgl3EQfGQOR/vector_store/index.faiss +3 -0
- J9E1T4oBgHgl3EQfGQOR/vector_store/index.pkl +3 -0
- KdAyT4oBgHgl3EQfTvcg/content/2301.00110v1.pdf +3 -0
- KdAyT4oBgHgl3EQfTvcg/vector_store/index.faiss +3 -0
- KdAyT4oBgHgl3EQfTvcg/vector_store/index.pkl +3 -0
- KdE3T4oBgHgl3EQfvQvs/content/tmp_files/2301.04693v1.pdf.txt +1150 -0
- KdE3T4oBgHgl3EQfvQvs/content/tmp_files/load_file.txt +0 -0
- LtAzT4oBgHgl3EQfyv51/vector_store/index.pkl +3 -0
- M9E0T4oBgHgl3EQf0QJr/content/tmp_files/2301.02683v1.pdf.txt +2510 -0
- M9E0T4oBgHgl3EQf0QJr/content/tmp_files/load_file.txt +0 -0
- MtE1T4oBgHgl3EQfZQTY/vector_store/index.pkl +3 -0
- NNAyT4oBgHgl3EQfs_m_/content/2301.00588v1.pdf +3 -0
- NNAyT4oBgHgl3EQfs_m_/vector_store/index.faiss +3 -0
- NNAyT4oBgHgl3EQfs_m_/vector_store/index.pkl +3 -0
- NNFQT4oBgHgl3EQfWDbE/content/tmp_files/2301.13303v1.pdf.txt +1774 -0
- NNFQT4oBgHgl3EQfWDbE/content/tmp_files/load_file.txt +0 -0
- O9AyT4oBgHgl3EQfUffK/content/tmp_files/2301.00128v1.pdf.txt +360 -0
- O9E0T4oBgHgl3EQfjwGf/content/tmp_files/2301.02464v1.pdf.txt +976 -0
.gitattributes
CHANGED
@@ -557,3 +557,49 @@ DNE1T4oBgHgl3EQfqAUl/content/2301.03337v1.pdf filter=lfs diff=lfs merge=lfs -text
 Y9E5T4oBgHgl3EQfdw8X/content/2301.05613v1.pdf filter=lfs diff=lfs merge=lfs -text
 s9E1T4oBgHgl3EQfQQNJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 WtE1T4oBgHgl3EQfvgVo/content/2301.03400v1.pdf filter=lfs diff=lfs merge=lfs -text
+_NE2T4oBgHgl3EQfQwY9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ytE3T4oBgHgl3EQfPQn0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+9NE1T4oBgHgl3EQfUQM_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+bdAzT4oBgHgl3EQf2_4K/content/2301.01821v1.pdf filter=lfs diff=lfs merge=lfs -text
+WdE5T4oBgHgl3EQfcg8E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ytE3T4oBgHgl3EQfPQn0/content/2301.04402v1.pdf filter=lfs diff=lfs merge=lfs -text
+ntAzT4oBgHgl3EQfAPov/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WtE1T4oBgHgl3EQfvgVo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ttE0T4oBgHgl3EQfsAEn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8dFST4oBgHgl3EQfaDjo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+t9A0T4oBgHgl3EQfLv8V/content/2301.02121v1.pdf filter=lfs diff=lfs merge=lfs -text
+Y9E5T4oBgHgl3EQfdw8X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+eNE3T4oBgHgl3EQfHAm5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+eNE3T4oBgHgl3EQfHAm5/content/2301.04320v1.pdf filter=lfs diff=lfs merge=lfs -text
+BtE4T4oBgHgl3EQf5Q6j/content/2301.05322v1.pdf filter=lfs diff=lfs merge=lfs -text
+WdE5T4oBgHgl3EQfcg8E/content/2301.05603v1.pdf filter=lfs diff=lfs merge=lfs -text
+t9AyT4oBgHgl3EQf0fl1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+VdE_T4oBgHgl3EQfxxyN/content/2301.08314v1.pdf filter=lfs diff=lfs merge=lfs -text
+t9A0T4oBgHgl3EQfLv8V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+VdE_T4oBgHgl3EQfxxyN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KdAyT4oBgHgl3EQfTvcg/content/2301.00110v1.pdf filter=lfs diff=lfs merge=lfs -text
+8tE5T4oBgHgl3EQfQw5Q/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+XtA0T4oBgHgl3EQfFf84/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wtE2T4oBgHgl3EQfgwev/content/2301.03941v1.pdf filter=lfs diff=lfs merge=lfs -text
+UNE1T4oBgHgl3EQfIQPc/content/2301.02938v1.pdf filter=lfs diff=lfs merge=lfs -text
+RdAyT4oBgHgl3EQf7_on/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+XtA0T4oBgHgl3EQfFf84/content/2301.02032v1.pdf filter=lfs diff=lfs merge=lfs -text
+BtE4T4oBgHgl3EQf5Q6j/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+a9E1T4oBgHgl3EQfxAV3/content/2301.03417v1.pdf filter=lfs diff=lfs merge=lfs -text
+RdAyT4oBgHgl3EQf7_on/content/2301.00847v1.pdf filter=lfs diff=lfs merge=lfs -text
+DtE3T4oBgHgl3EQfVAog/content/2301.04455v1.pdf filter=lfs diff=lfs merge=lfs -text
+8tE5T4oBgHgl3EQfQw5Q/content/2301.05515v1.pdf filter=lfs diff=lfs merge=lfs -text
+tdE1T4oBgHgl3EQf3gXH/content/2301.03491v1.pdf filter=lfs diff=lfs merge=lfs -text
+7dE1T4oBgHgl3EQfnQQ6/content/2301.03306v1.pdf filter=lfs diff=lfs merge=lfs -text
+7dE1T4oBgHgl3EQfnQQ6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNAyT4oBgHgl3EQfs_m_/content/2301.00588v1.pdf filter=lfs diff=lfs merge=lfs -text
+J9E1T4oBgHgl3EQfGQOR/content/2301.02912v1.pdf filter=lfs diff=lfs merge=lfs -text
+KdAyT4oBgHgl3EQfTvcg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+tdE1T4oBgHgl3EQf3gXH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNAyT4oBgHgl3EQfs_m_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+DtE3T4oBgHgl3EQfVAog/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+J9E1T4oBgHgl3EQfGQOR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6tAzT4oBgHgl3EQf-P5_/content/2301.01931v1.pdf filter=lfs diff=lfs merge=lfs -text
+TtE3T4oBgHgl3EQfzwsF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+YtE4T4oBgHgl3EQfOAxw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+s9AyT4oBgHgl3EQfmfj8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
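Each added line routes files matching the given pattern through Git LFS. As an illustrative sketch (not part of this repository; the parsing helper and sample attributes text are hypothetical), the `filter=lfs` entries of a `.gitattributes` file can be collected and matched against paths like so:

```python
from fnmatch import fnmatch

def lfs_patterns(gitattributes_text):
    """Collect the patterns that route files through Git LFS,
    i.e. lines carrying the 'filter=lfs' attribute."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        if parts and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

attrs = """\
7dE1T4oBgHgl3EQfnQQ6/content/2301.03306v1.pdf filter=lfs diff=lfs merge=lfs -text
7dE1T4oBgHgl3EQfnQQ6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
*.md text
"""

pats = lfs_patterns(attrs)
# Only the two filter=lfs lines are kept; '*.md text' is a plain attribute.
assert len(pats) == 2
# The entries in this commit are literal paths, so a file is LFS-tracked
# exactly when some pattern matches it (fnmatch also covers globs like '*.pdf').
assert any(fnmatch("7dE1T4oBgHgl3EQfnQQ6/vector_store/index.faiss", p) for p in pats)
```

Note that real gitattributes matching has extra rules (leading `/`, directory scoping), so `fnmatch` is only an approximation for the literal paths used here.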
5tAyT4oBgHgl3EQfpfgu/content/tmp_files/2301.00525v1.pdf.txt
ADDED
@@ -0,0 +1,1212 @@
arXiv:2301.00525v1 [math.DG] 2 Jan 2023

BLOWING-UP HERMITIAN YANG–MILLS CONNECTIONS

ANDREW CLARKE AND CARL TIPLER

Abstract. We investigate hermitian Yang–Mills connections for pullback vector bundles on blow-ups of Kähler manifolds along submanifolds. Under some mild assumptions on the graded object of a simple and semi-stable vector bundle, we provide a necessary and sufficient numerical criterion for the pullback bundle to admit a sequence of hermitian Yang–Mills connections for polarisations that make the exceptional divisor sufficiently small, and show that those connections converge to the pulled-back hermitian Yang–Mills connection of the graded object.
1. Introduction

A cornerstone in gauge theory is the Hitchin–Kobayashi correspondence ([17, 20, 30, 12]). This celebrated generalisation of the theorem of Narasimhan and Seshadri asserts that a holomorphic vector bundle over a Kähler manifold carries a Hermite–Einstein metric if and only if it is polystable in the sense of Mumford and Takemoto ([22, 29]). The interplay between the differential-geometric side, hermitian Yang–Mills connections (HYM for short), which originated in physics, and the algebro-geometric side, the stability notion motivated by moduli constructions, has had many applications and has become a very fertile source of inspiration. Given that HYM connections are canonically attached to polystable vector bundles, it is natural to investigate their relation to natural maps between vector bundles, such as pullbacks. In this paper, we address the problem of pulling back HYM connections along blow-ups. While the similar problem for extremal Kähler metrics has seen many developments in the past ten years [1, 2, 3, 28, 26, 8], relatively little seems to be known about the behaviour of HYM connections under blow-ups [6, 9]. In this paper, under some mild assumptions, we solve the problem for pullbacks of semi-stable vector bundles on blow-ups along smooth centres.

Let π : X′ → X be the blow-up of a polarised Kähler manifold (X, [ω]) along a submanifold Z ⊂ X, and E′ = π*E the pullback of a holomorphic vector bundle E → X. For 0 < ε ≪ 1, L_ε := π*[ω] − ε c1(Z′) defines a polarisation on X′, where we set Z′ = π⁻¹(Z), the exceptional divisor. There are obstructions for E′ to admit HYM connections with respect to ω_ε ∈ c1(L_ε), with 0 < ε ≪ 1. In particular, E should be simple and semi-stable with respect to [ω] (see Section 2.3). In the latter case, E admits a Jordan–Hölder filtration by semi-stable sheaves with polystable graded object Gr(E) (see Section 2.2 for definitions). A further obstruction then comes from subsheaves of E arising from Gr(E). While those sheaves have the same slope as E, their pullbacks to X′ could destabilise E′. Our main result asserts that these are in fact the only obstructions for E′ to carry a HYM connection, under some mild assumptions on Gr(E).
2010 Mathematics Subject Classification. Primary: 53C07, Secondary: 53C55, 14J60.
+
Recall that a semi-stable holomorphic vector bundle E → (X, [ω]) is said to be
|
| 47 |
+
sufficiently smooth if its graded object Gr(E) is locally free. Let E[ω] denote the set
|
| 48 |
+
of all subbundles of E arising in a Jordan–Holder filtration for E, or equivalently of
|
| 49 |
+
same slope as E with respect to [ω]. For F ∈ E[ω], denote by µLε(F) = c1(π∗F)·Ln−1
|
| 50 |
+
ε
|
| 51 |
+
rank(F)
|
| 52 |
+
the slope of π∗F on (X′, Lε).
|
| 53 |
+
Theorem 1.1. Let E → X be a simple sufficiently smooth semi-stable holomorphic
|
| 54 |
+
vector bundle on (X, [ω]). Assume that the stable components of Gr(E) are pairwise
|
| 55 |
+
non-isomorphic. Then, there exists ε0 > 0 and a sequence of HYM connections
|
| 56 |
+
(Aε)ε∈(0,ε0) on π∗E with respect to (ωε)ε∈(0,ε0) if and only if
|
| 57 |
+
(1.1)
|
| 58 |
+
∀ F ∈ E[ω], µLε(F) <
|
| 59 |
+
ε→0 µLε(E).
|
| 60 |
+
In that case, if A denotes a HYM connection on Gr(E) with respect to ω, then
|
| 61 |
+
(Aε)ε∈(0,ε0) can be chosen so that Aε −→
|
| 62 |
+
ε→0 π∗A in any Sobolev norm.
|
| 63 |
+
In the statement, the expression µLε(F) <
|
| 64 |
+
ε→0 µLε(E) means that the first non-
|
| 65 |
+
zero term in the ε-expansion for µLε(E) − µLε(F) is strictly positive.
|
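The asymptotic inequality <_{ε→0} is a lexicographic comparison of the coefficients in the ε-expansion of the slope difference. As an illustrative sketch (not from the paper; the coefficient lists are hypothetical inputs standing in for the intersection-theoretic expansions), the comparison can be encoded as:

```python
def slope_dominates(coeffs_E, coeffs_F):
    """Return True if mu_{L_eps}(F) <_{eps->0} mu_{L_eps}(E), i.e. the first
    non-zero coefficient of the expansion of mu(E) - mu(F) in eps is positive."""
    # Pad to equal length, then compare term by term in increasing powers of eps.
    n = max(len(coeffs_E), len(coeffs_F))
    e = coeffs_E + [0.0] * (n - len(coeffs_E))
    f = coeffs_F + [0.0] * (n - len(coeffs_F))
    for a, b in zip(e, f):
        d = a - b
        if d != 0:
            return d > 0
    return False  # all terms agree: equal slopes, no strict domination

# Both slopes share the constant term (a subsheaf of the same [omega]-slope),
# so the comparison is decided at the first order where they differ.
assert slope_dominates([2.0, 0.0, 5.0], [2.0, 0.0, 3.0])
assert not slope_dominates([2.0, -1.0], [2.0, 0.0])
```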
Remark 1.2. Simplicity, semi-stability and Condition (1.1) are necessary to produce the connections (A_ε) from Theorem 1.1. The other two assumptions on Gr(E) are technical. Assuming Gr(E) to be locally free enables one to see E as a smooth complex deformation of Gr(E) and to work with the various connections on the same underlying complex vector bundle. We should warn the reader, though, that if one drops this assumption, Condition (1.1) might not be enough to ensure semi-stability of π*E on (X′, L_ε) (see the extra conditions in [23, Theorem 1.10]). On the other hand, the assumption that Gr(E) has no pairwise isomorphic components is purely technical, and ensures that its automorphism group, which will provide obstructions in the perturbative theory, is abelian.

We now list some corollaries of Theorem 1.1. First, the stable case:

Corollary 1.3. Let E → X be a stable holomorphic vector bundle on (X, [ω]) and let A be a HYM connection on E. Then there exist ε_0 > 0 and a sequence of HYM connections (A_ε)_{ε∈(0,ε_0)} on π*E with respect to (ω_ε)_{ε∈(0,ε_0)} such that A_ε → π*A as ε → 0 in any Sobolev norm.

In the semi-stable case, Condition (1.1) reduces to a finite number of intersection-product computations. One interesting feature comes from the second term in the expansion of µ_{Lε}(E): it is the opposite of the slope of the restriction of E to Z. The following formula is proved in [23, Section 4.1], where m = dim(Z):

(1.2)  µ_{Lε}(E) = µ_L(E) − (n−1 choose m−1) µ_{L|Z}(E|Z) ε^{n−m} + O(ε^{n−m+1}).

We then have:

Corollary 1.4. Let E → X be a simple, sufficiently smooth, semi-stable holomorphic vector bundle on (X, [ω]). Assume that the stable components of Gr(E) are pairwise non-isomorphic. Denote by A a HYM connection on E. If

(1.3)  for all F ∈ E_[ω],  µ_{L|Z}(E|Z) < µ_{L|Z}(F|Z),

then there exist ε_0 > 0 and a sequence of HYM connections (A_ε)_{ε∈(0,ε_0)} on π*E with respect to (ω_ε)_{ε∈(0,ε_0)} converging to π*A in any Sobolev norm.
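Since every F ∈ E_[ω] has µ_L(F) = µ_L(E), applying the expansion (1.2) to both F and E shows that the comparison in (1.1) is decided at order ε^{n−m}, where it reduces to Condition (1.3). A small numerical sketch of this leading-order reduction (the dimensions and slope values below are made up, not taken from the paper):

```python
from math import comb

def slope_expansion_leading(n, m, mu_L, mu_restriction):
    """First two coefficients of mu_{L_eps} from formula (1.2):
    the constant term mu_L and the coefficient of eps^(n-m)."""
    return mu_L, -comb(n - 1, m - 1) * mu_restriction

n, m = 3, 1                      # blow-up of a threefold along a curve
mu_E, mu_E_restr = 2.0, 1.0      # hypothetical slopes of E and of E|_Z
mu_F, mu_F_restr = 2.0, 4.0      # hypothetical F of the same slope, larger restricted slope

cE = slope_expansion_leading(n, m, mu_E, mu_E_restr)
cF = slope_expansion_leading(n, m, mu_F, mu_F_restr)

# Constant terms agree, so mu_{L_eps}(F) < mu_{L_eps}(E) for small eps exactly
# when the eps^(n-m) coefficient of F is smaller, i.e. when Condition (1.3)
# holds: mu_{L|Z}(E|Z) < mu_{L|Z}(F|Z).
assert cE[0] == cF[0] and cF[1] < cE[1]
```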
Condition (1.3) was checked on explicit examples in [23, Section 4.5] to produce stable perturbations of tangent sheaves by blow-ups, and our result provides information on the associated connections and their asymptotic behaviour. Note that by the Mehta–Ramanathan theorem [21], if [ω] = c1(L) is integral, and if Z is a generic intersection of divisors in linear systems |L^k|, then E|_Z is semi-stable as soon as E is. In that case, Condition (1.3) cannot be satisfied, and it seems unlikely that Condition (1.1) will hold true. Hence, blowing up such subvarieties tends to destabilise a semi-stable bundle.

In general, we expect that it should not be too hard to obtain stability of sufficiently smooth pulled-back bundles under Condition (1.1) with purely algebraic methods. However, we emphasise that the Hitchin–Kobayashi correspondence doesn't provide any information on the asymptotic behaviour of the associated HYM connections, which is then the main content of Theorem 1.1. Nevertheless, we state the following corollary, which extends [23, Theorem 1.10] to a non-equivariant situation:

Corollary 1.5. Let E → X be a simple, sufficiently smooth, semi-stable holomorphic vector bundle on (X, [ω]). Assume that the stable components of Gr(E) are pairwise non-isomorphic. Then there exists ε_0 > 0 such that π*E → (X′, L_ε) is
(i) stable if and only if for all F ∈ E_[ω], µ_{Lε}(F) <_{ε→0} µ_{Lε}(E);
(ii) semi-stable if and only if for all F ∈ E_[ω], µ_{Lε}(F) ≤_{ε→0} µ_{Lε}(E);
(iii) unstable otherwise.

Finally, we comment on previous related works. Theorem 1.1 extends results from [6, 9], where blow-ups of HYM connections along points are considered. In the present paper, we consider blow-ups along any smooth subvariety, and also cover the semi-stable situation, which is technically more involved due to the presence of automorphisms of the graded object that obstruct the linear theory. While [9] is a gluing construction, as in the similar problem of producing extremal Kähler metrics on blow-ups [2, 3, 28, 26, 8], one of the key features of our approach is to apply the implicit function theorem directly to reduce to (an ε-dependent family of) finite-dimensional GIT problems on a Kuranishi space parametrising small deformations of Gr(E), as in [27, 8]. We then use the new technology developed in [24] to control the perturbations of the associated moment maps when ω_ε varies. This is where our hypothesis that Aut(Gr(E)) is abelian is used. The main new technical input comes from the fact that the underlying smooth manifold X is fixed in [24], while it varies with the blow-up, which requires a careful analysis of the operator introduced to apply the implicit function theorem.

Outline: In Section 2, we recall basic material about HYM connections and stability. We then perform in Section 2.3 the analysis of the linear theory on the blow-up. Relying on this, in Section 3 we explain how to reduce the problem to finding zeros of finite-dimensional moment maps. We then conclude the proof of Theorem 1.1 and its corollaries in Section 4.

Acknowledgments: The authors benefited from visits to LMBA and Gothenburg University; they would like to thank these welcoming institutions for providing stimulating work environments. The idea of this project emerged from discussions with Lars Martin Sektnan, whom we thank for sharing his ideas and insight. CT is partially supported by the grants MARGE ANR-21-CE40-0011 and BRIDGES ANR–FAPESP ANR-21-CE40-0017.
2. Preliminaries

In Sections 2.1 and 2.2 we introduce the notions of HYM connections and slope stability, together with some general results, and refer the reader to [18] and [16]. From Section 2.3 on, we specialise the discussion to blow-ups. In particular, in Section 2.3.2, we provide various asymptotic expressions for the linearisation of the HYM equation on the blow-up. Those results will be used in Section 3.

2.1. The hermitian Yang–Mills equation. Let E → X be a holomorphic vector bundle over a compact Kähler manifold X. A hermitian metric on E is Hermite–Einstein with respect to a Kähler metric with Kähler form ω if the curvature F_h ∈ Ω²(X, End E) of the corresponding Chern connection satisfies

(2.1)  Λ_ω(i F_h) = c Id_E

for some real constant c. Equivalently, if h is some hermitian metric on the smooth complex vector bundle underlying E, a hermitian connection A on (E, h) is said to be hermitian Yang–Mills if it satisfies

  F_A^{0,2} = 0,
  Λ_ω(i F_A) = c Id_E.

The first equation of this system implies that the (0,1)-part of A determines a holomorphic structure on E, while the second says that h is Hermite–Einstein for this holomorphic structure. We will try to find hermitian Yang–Mills connections within the complex gauge group orbit, which we now define. The (hermitian) complex gauge group is

  G^C(E, h) = Γ(GL(E, C)) ∩ Γ(End_H(E, h)),

where End_H(E, h) stands for the hermitian endomorphisms of (E, h). Note that if ∂̄ is the Dolbeault operator defining the holomorphic structure on E, then f ∘ ∂̄ ∘ f⁻¹ defines a biholomorphic complex structure on E. Let d_A = ∂_A + ∂̄_A be the Chern connection of (E, h) with respect to the original complex structure (that is, ∂̄_A = ∂̄). Then the Chern connection A_f of h with respect to f ∘ ∂̄ ∘ f⁻¹ is

  d_{A_f} = (f*)⁻¹ ∘ ∂_A ∘ (f*) + f ∘ ∂̄ ∘ f⁻¹.

Solving the hermitian Yang–Mills equation is equivalent to solving

  Ψ(s) = c Id_E,

where

  Ψ : Lie(G^C(E, h)) → Lie(G^C(E, h)),  s ↦ i Λ_ω(F_{A_exp(s)}),

and where Lie(G^C(E, h)) := i Γ(End_H(E, h)) is the tangent space to G^C(E, h) at the identity. For a connection A on E, the Laplace operator ∆_A is

(2.2)  ∆_A = i Λ_ω(∂̄_A ∂_A − ∂_A ∂̄_A).

If A^{End E} denotes the connection induced by A on End E, then:

Lemma 2.1. If A is the Chern connection of (E, ∂̄, h), the differential of Ψ at the identity is

  dΨ_{Id_E} = ∆_{A^{End E}}.

If moreover A is assumed to be hermitian Yang–Mills, then the kernel of ∆_{A^{End E}} acting on Γ(End(E)) is given by the Lie algebra aut(E) of the group of automorphisms Aut(E) of (E, ∂̄).

The last statement about the kernel follows from the Kähler identities and the Akizuki–Nakano identity, which imply ∆_{A^{End E}} = ∂_A* ∂_A + ∂̄_A* ∂̄_A, the two terms of which are equal if A is hermitian Yang–Mills. The operator ∆_{A^{End E}} being elliptic and self-adjoint, aut(E) will then appear as a cokernel in the linear theory for perturbations of hermitian Yang–Mills connections.
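In the simplest case of a line bundle, End E is trivial and a conformal change of metric h → h e^{−2φ} turns (2.1) into a scalar Poisson equation for φ, with the constant c fixed as the mean value of Λ_ω(i F_h). The following toy sketch (illustrative only, not from the paper; the right-hand side is a made-up function and sign conventions vary by author) solves that scalar reduction spectrally on a flat 2-torus:

```python
import numpy as np

# Scalar reduction of the Hermite-Einstein equation on a flat torus:
# solve Laplacian(phi) = Lambda_omega(i F_h) - c, where c is the mean value.
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

f = np.cos(X) + np.sin(2.0 * Y)          # stand-in for Lambda_omega(i F_h)
c = f.mean()                              # the Einstein constant
rhs = f - c                               # zero-mean right-hand side

k = np.fft.fftfreq(N, d=1.0 / N)          # integer Fourier frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
denom = -(KX**2 + KY**2)                  # symbol of the Laplacian
denom[0, 0] = 1.0                         # avoid dividing by zero at the mean mode
phi_hat = np.fft.fft2(rhs) / denom
phi_hat[0, 0] = 0.0                       # fix the free additive constant
phi = np.real(np.fft.ifft2(phi_hat))

# Check: the spectral Laplacian of phi reproduces the zero-mean right-hand side.
lap = np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(phi)))
assert np.max(np.abs(lap - rhs)) < 1e-10
```

The mean-value constraint on c mirrors the general fact that the Einstein constant is determined topologically by the slope.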
2.2. Slope stability. We recall some basic facts about slope stability, as introduced in [22, 29], and refer the interested reader to [16] for a detailed treatment. We denote here by L := [ω] the polarisation of the n-dimensional Kähler manifold X.

Definition 2.2. For E a torsion-free coherent sheaf on X, the slope µ_L(E) ∈ Q (with respect to L) is given by the intersection formula

(2.3)  µ_L(E) = deg_L(E) / rank(E),

where rank(E) denotes the rank of E while deg_L(E) = c1(E) · L^{n−1} stands for its degree. Then E is said to be slope semi-stable (resp. slope stable) with respect to L if for any coherent subsheaf F of E with 0 < rank(F) < rank(E), one has

  µ_L(F) ≤ µ_L(E)  (resp. µ_L(F) < µ_L(E)).

A direct sum of slope stable sheaves of the same slope is said to be slope polystable.
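As a plain computational restatement of Definition 2.2 (the degrees and ranks below are hypothetical stand-ins for the intersection numbers c1(F) · L^{n−1}, and a finite list of subsheaves only approximates the quantifier over all coherent subsheaves):

```python
from fractions import Fraction

def slope(degree, rank):
    """mu_L = deg_L / rank, kept as an exact rational as in (2.3)."""
    return Fraction(degree, rank)

def classify(E, subsheaves):
    """E and each subsheaf are (degree, rank) pairs; returns the stability
    type of E tested against the listed proper subsheaves."""
    mu_E = slope(*E)
    slopes = [slope(*F) for F in subsheaves]
    if all(m < mu_E for m in slopes):
        return "stable"
    if all(m <= mu_E for m in slopes):
        return "semi-stable"
    return "unstable"

# A rank-2 sheaf of degree 3 with a destabilising line subsheaf of degree 2:
assert classify((3, 2), [(2, 1)]) == "unstable"
# The same sheaf with a line subsheaf of degree 1 (slope 1 < 3/2):
assert classify((3, 2), [(1, 1)]) == "stable"
```

Working with `Fraction` avoids the floating-point ties that would blur the semi-stable boundary case µ_L(F) = µ_L(E).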
| 239 |
+
In this paper, we will often omit “slope” and simply refer to stability of a sheaf, the polarisation being implicit. We will make the standard identification of a holomorphic vector bundle E with its sheaf of sections, and thus talk about slope stability notions for vector bundles as well. In that case slope stability relates nicely to differential geometry via the Hitchin–Kobayashi correspondence:

Theorem 2.3 ([17, 20, 30, 12]). There exists a Hermite–Einstein metric on E with respect to ω if and only if E is polystable with respect to L.
We will be mostly interested in semi-stable vector bundles. A Jordan–Hölder filtration for a torsion-free sheaf E is a filtration by coherent subsheaves:

(2.4) 0 = F0 ⊂ F1 ⊂ . . . ⊂ Fℓ = E,

such that the corresponding quotients,

(2.5) Gi = Fi/Fi−1,

for i = 1, . . . , ℓ, are stable with slope µL(Gi) = µL(E). In particular, the graded object of this filtration

(2.6) Gr(E) := G1 ⊕ · · · ⊕ Gℓ

is polystable. From [16, Section 1], we have the standard existence and uniqueness result:
A. CLARKE AND C. TIPLER
Proposition 2.4. Any semi-stable coherent torsion-free sheaf E on (X, L) admits a Jordan–Hölder filtration, and the graded object Gr(E) of such filtrations is unique up to isomorphism.

When E is locally-free and semi-stable, we say that it is sufficiently smooth if Gr(E) is locally-free. In that case, we denote by E[ω] the set of holomorphic subbundles of E built out of successive extensions of some of the stable components of Gr(E). Equivalently, E[ω] is the set of holomorphic subbundles of E arising in a Jordan–Hölder filtration for E. Finally, we recall that a necessary condition for E to be stable is simplicity, that is, Aut(E) = C∗ · IdE.
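A minimal illustration of these notions (a generic situation, not a specific example from the text): a non-split extension of two stable bundles of equal slope.

```latex
% Hypothetical minimal example: E a non-split extension of stable bundles G_1, G_2
% with \mu_L(G_1) = \mu_L(G_2) = \mu_L(E):
0 \longrightarrow G_1 \longrightarrow E \longrightarrow G_2 \longrightarrow 0.
% Then E is semi-stable but not polystable;
% 0 = F_0 \subset F_1 = G_1 \subset F_2 = E is a Jordan--H\"older filtration,
% and \operatorname{Gr}(E) = G_1 \oplus G_2. When this is the only filtration,
% E_{[\omega]} = \{G_1, E\}.
```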
2.3. Geometry of the blow-up. We now consider an m-dimensional complex submanifold Z ⊂ X of codimension r = n − m ≥ 2 and the blow-up map

π : BlZ(X) → X.

We will denote by X′ = BlZ(X) the blown-up manifold and by Z′ = π−1(Z) the exceptional divisor. We denote by

Lε := π∗L − ε[Z′]

a polarisation on X′, for 0 < ε ≪ 1. Let E → X be a holomorphic vector bundle, and denote by E′ = π∗E the pulled back bundle. For any holomorphic subbundle F ⊂ E, the differences µLε(π∗E) − µLε(π∗F) admit expansions in ε, with first term given by µL(E) − µL(F). For that reason, given the Hitchin–Kobayashi correspondence in Theorem 2.3, semi-stability of E on (X, L) is a necessary condition for its pullback E′ to admit an HYM connection with respect to a Kähler metric in Lε, for all 0 < ε ≪ 1. Another necessary condition is simplicity of E′, which, by Hartogs’ theorem, is equivalent to simplicity of E. Then, natural candidates to test for stability of E′ are given by the pullbacks of elements in E[ω], and Condition (1.1) is clearly necessary for E′ to be stable in the polarisations we consider, and thus to admit an HYM connection. Hence, we will assume E to be simple, semi-stable, and to satisfy (1.1). We now turn back to the differential geometry of the blow-up.
2.3.1. Decomposition on spaces of sections. We have a commutative diagram:

Z′ −−ι−→ X′
↓            ↓
Z −−ι0−→ X

where ι0 and ι denote the inclusions, while the vertical arrows are given by the projection map π. We then have a pullback map on sections

π∗ : Γ(X, End(E)) −→ Γ(X′, End(π∗E))

as well as a restriction map:

ι∗ : Γ(X′, End(π∗E)) −→ Γ(Z′, End(ι∗π∗E)).

Our goal now is to fit these maps into a short exact sequence that will, in the end, split the space Γ(X′, End(π∗E)). If NZ = TX|Z/TZ denotes the normal bundle of Z in X, then Z′ ≃ P(NZ), and we can fix a (1, 1)-form λ ∈ c1(OP(NZ)(1)) that restricts to Kähler metrics on the fibers of P(NZ) → Z. We also fix a Kähler form ω ∈ c1(L) on X, and consider its restriction to Z. We then have a Kähler CP^(r−1)-fibration:

π : (Z′, λ) −→ (Z, ω).
BLOWING-UP HYM CONNECTIONS
By averaging along fibers as described in [25, Section 2.3], we obtain a splitting

(2.7) Γ(Z′, End(ι∗π∗E)) = π∗(Γ(Z, End(ι0∗E))) ⊕ Γ0(Z′, End(ι∗π∗E)).

We will omit the ι∗ and π∗ to simplify notation. Using the projection on the second factor

p0 : Γ(Z′, End(E)) → Γ0(Z′, End(E))

in (2.7), we deduce a short exact sequence:

0 −→ Γ(X, End(E)) −−π∗−→ Γ(X′, End(E)) −−p0◦ι∗−→ Γ0(Z′, End(E)) −→ 0.

We can actually split this sequence by means of a linear extension operator

ι_∗ : Γ0(Z′, End(E)) −→ Γ(X′, End(E))

such that

p0 ◦ ι∗ ◦ ι_∗ = Id.

This can be done using bump functions and a standard partition of unity argument. The outcome is an isomorphism:

(2.8) Γ(X′, End(E)) −→ Γ(X, End(E)) ⊕ Γ0(Z′, End(E)), s ↦ (s − ι_∗ ◦ p0 ◦ ι∗s , p0 ◦ ι∗s),

with inverse map (sX, sZ) ↦ π∗sX + ι_∗sZ. This splits the Lie algebra of gauge transformations, and will be used to identify contributions coming from X and from Z′ in the ε-expansion of the linearisation, which we describe in the next section. From now on, by abuse of notation, we will consider the spaces Γ(X, End(E)) and Γ0(Z′, End(E)) as subspaces of Γ(X′, End(π∗E)), and denote by s = sX + sZ the decomposition of an element s ∈ Γ(X′, End(E)).
2.3.2. Decomposition of the Laplace operator. We extend λ to a closed (1, 1)-form over X′ as in [31, Section 3.3] and consider the family of Kähler metrics on X′:

ωε = π∗ω + ελ ∈ c1(Lε), 0 < ε ≪ 1.

Let A be a Hermitian connection on E, which we pull back to X′ and extend to the bundle End(π∗E). We will now study the Laplace operator

∆εs = iΛε(¯∂A∂A − ∂A¯∂A)s

acting on the various components of s = sX + sZ ∈ Γ(X′, End(E)), where Λε is the Lefschetz operator for the metric ωε. For this, we need to introduce an elliptic operator on Z′. The vertical Laplace operator, denoted

∆V : Γ0(Z′, End(E)) → Γ0(Z′, End(E)),

is the operator defined by the following procedure. Let σ ∈ Γ0(Z′, End(E)). Over a point x ∈ Z, take the restriction σx of σ to Z′x = π−1(x), and consider σx as a map to C^p with components σ^i_x in a trivialisation π∗End(E)x ≅ C^p of the restriction of π∗End(E) to the fibre Z′x of Z′ → Z. Define

(∆V(σ))^i_x = ∆λ|Z′x (σ^i_x),

for ∆λ the Laplacian of the Kähler form λ on Z′x. Then glue together to form a section of π∗End(E). As in [25, Section 4.1], one easily obtains that this construction is independent of the trivialisation chosen, and sends smooth sections to smooth sections. In the following lemma, the superscript l (or l + 2) stands for the Sobolev completion with respect to some L2,l Sobolev norm, where those norms can be produced out of the metrics ω, λ and any metric h on E, together with the covariant derivatives given by A.
Lemma 2.5 ([25, Section 4.1]). The vertical Laplacian

∆V : Γ0(Z′, End(E))l+2 → Γ0(Z′, End(E))l

is invertible.

In the following statements, if A denotes a second order operator acting on sections, then in an expression of the form

A(σ) = σ0 + εσ1 + . . . + ε^(d−1)σd−1 + O(ε^d)

the term O(ε^d) will stand for σd · ε^d, where σd is a section whose L2,l Sobolev norm is bounded by the L2,l+2 Sobolev norm of σ.
Lemma 2.6. If sZ = ι_∗σZ for σZ ∈ Γ(Z′, End(E)), then

(p0 ◦ ι∗)∆ε(ι_∗σZ) = ε^(−1)∆VσZ + O(1).

Proof. We introduce the operator D given by

DsZ = i(¯∂A∂A − ∂A¯∂A)sZ.

The Laplacian ∆ε satisfies on X′:

∆εsZ ωε^n = n DsZ ∧ ωε^(n−1),

or equivalently

∆εsZ = n (DsZ ∧ (ω + ελ)^(n−1)) / (ω + ελ)^n.

We note that ω is a Kähler form on X, but on X′ it is degenerate along the fibre directions of the submanifold Z′. Then (ι∗ω)^(m+1) = 0 ∈ Ω^(2(m+1))(Z′), and at x ∈ Z′ ⊆ X′, ω^(m+2) = 0. Then, expanding (ω + ελ)^(n−1) and (ω + ελ)^n gives

ι∗∆εsZ = (n − m − 1)ε^(−1) (DsZ ∧ ω^(m+1) ∧ λ^(n−m−2)) / (ω^(m+1) ∧ λ^(n−m−1)) + O(1).

Restricting to Z′, the connection 1-forms of A vanish, so ι∗DsZ = i∂¯∂σZ, acting on the coefficient functions of σZ. On the other hand, by considering a convenient orthonormal frame at x ∈ Z′, we see that ι∗∆ει_∗σZ = ε^(−1)∆VσZ + O(1). □
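The coefficient (n − m − 1)ε^(−1) comes from comparing the lowest-order binomial terms that survive ω^(m+2) = 0; a quick numerical sanity check of this bookkeeping (illustrative only, not part of the proof):

```python
from math import comb

# Along Z' one has omega^(m+2) = 0, so in n*(omega + eps*lam)^(n-1) / (omega + eps*lam)^n
# the surviving lowest-order terms are n*C(n-1, m+1)*eps^(n-m-2) in the numerator
# against C(n, m+1)*eps^(n-m-1) in the denominator, giving the coefficient
# n*C(n-1, m+1)/C(n, m+1) = n - m - 1 in front of eps^(-1).
def leading_coefficient(n: int, m: int) -> float:
    return n * comb(n - 1, m + 1) / comb(n, m + 1)

# Check the identity for a range of dimensions with codimension r = n - m >= 2.
for n in range(3, 12):
    for m in range(0, n - 1):
        assert abs(leading_coefficient(n, m) - (n - m - 1)) < 1e-12
```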
In the next lemma, we denote by ∆εsZ = (∆εsZ)X + (∆εsZ)Z the decomposition according to (2.8).

Lemma 2.7. For sZ = ι_∗σZ with σZ ∈ Γ(Z′, End(E)), we have

(∆εsZ)X = O(1).

Proof. By definition, (∆εsZ)X = π∗φ for some φ ∈ Γ(X, End(E)). As we also have

(∆εsZ)X = (Id − ι_∗(p0 ◦ ι∗))ΛεDsZ,

we deduce that the section φ is the continuous extension of π∗(Id − ι_∗(p0 ◦ ι∗))ΛεDsZ across Z ⊆ X. On X′ \ Z′ we have

ΛεDsZ = n (DsZ ∧ (ω^(n−1) + O(ε))) / (ω^n + O(ε)) = O(1).

As π∗(Id − ι_∗(p0 ◦ ι∗)) is O(1), the result follows. □
From the previous two lemmas, in the decomposition s = sX + sZ, the term ∆εsZ also lies in the subspace Γ0(Z′, End(E)) ⊆ Γ(X′, End(E)), up to terms of higher order in ε. For sX ∈ Γ(X, End(E)),

∆εsX = (∆εsX)X + (∆εsX)Z,

where (∆εsX)Z = ι_∗(p0 ◦ ι∗)∆εsX. We first consider ι∗∆εsX.
Lemma 2.8. For sX = π∗σX ∈ Γ(X, End(E)) ⊆ Γ(X′, End(E)),

ι∗∆εsX = (m + 1) (DsX ∧ ω^m ∧ λ^(n−m−1)) / (ω^(m+1) ∧ λ^(n−m−1)) + O(ε).

Proof. Firstly, sX = π∗σX, and the connection A is pulled back from X, so DsX is basic for the projection to X and DsX ∧ ω^(m+1) = 0 at points in Z′. Secondly, we note that ω^(m+1) ∧ λ^(n−m−1) is a volume form on X′ in a neighbourhood of Z′. Then, the result follows similarly to the previous lemma. □
For the final term (∆εsX)X, we introduce ∆X, the Laplace operator of A on End(E) → (X, ω):

∆X : Γ(X, End(E)) → Γ(X, End(E)), σ ↦ iΛω(¯∂A∂A − ∂A¯∂A)σ.
Lemma 2.9. For sX = π∗σX ∈ Γ(X, End(E)) ⊆ Γ(X′, End(E)),

(∆εsX)X = π∗(∆XσX) + O(ε).

Proof. There is φ ∈ Γ(X, End(E)) such that (∆εsX)X = π∗φ. The element φ can be identified as the lowest order term in the asymptotic expansion in ε of (∆επ∗σX)X. However, we have at x ∈ X′ \ Z′:

∆επ∗σX = n (Dπ∗σX ∧ (ω + ελ)^(n−1)) / (ω + ελ)^n = n π∗((DσX ∧ ω^(n−1)) / ω^n) + O(ε),

so we see that the lowest order term in the expansion of (∆επ∗σX)X is ∆XσX. □
Summarizing the above calculations, with respect to the decomposition s = sX + sZ produced by (2.8), the operator ∆ε takes the form

(2.9)    ( ∆X        0         )
         ( L         ε^(−1)∆V  )

plus higher order terms, for some second order operator L. In the next section, we will apply the previous lemmas and the resulting form for ∆ε to the pullback of an HYM connection A0 on the graded object Gr(E) of E.
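Heuristically, the triangular form (2.9) indicates how the linearised equation ∆ε(sX + sZ) = fX + fZ would be solved to leading order (a sketch of the mechanism only, not a statement from the text; solvability on each factor rests on Lemma 2.1 and Lemma 2.5):

```latex
% Heuristic order-by-order resolution suggested by the triangular form (2.9):
\begin{aligned}
\Delta_X s_X &= f_X + O(\varepsilon)
  && \text{solve on } X \text{ modulo the cokernel } \mathfrak{aut}(E),\\
\varepsilon^{-1}\Delta_V s_Z &= f_Z - L\, s_X + O(\varepsilon)
  && \text{solve on } Z' \text{ using invertibility of } \Delta_V .
\end{aligned}
```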
3. The perturbation argument

The goal of this section is to reduce the problem of finding a zero for the operator s ↦ iΛωε(FAexp(s)) − cεId in a gauge group orbit to a finite dimensional problem. The ideas here go back to [13, 27], and our framework will be that of [5].
3.1. Kuranishi slice. We start from a simple semi-stable and sufficiently smooth holomorphic vector bundle E on (X, L), with L = [ω]. Denote by Gr(E) = G1 ⊕ · · · ⊕ Gℓ the associated polystable graded object, with stable components Gi. We let ¯∂0 be the Dolbeault operator of Gr(E). The automorphism group G := Aut(Gr(E)) is a reductive Lie group with Lie algebra g := aut(Gr(E)) and compact form K ⊂ G, with k := Lie(K). The Dolbeault operator ¯∂E on E is given by

¯∂E = ¯∂0 + γ,

where γ ∈ Ω0,1(X, Gr(E)∗ ⊗ Gr(E)) can be written

γ = Σ_{i<j} γij

for (possibly vanishing) γij ∈ Ω0,1(X, G∗j ⊗ Gi). Elements

g := g1 IdG1 + . . . + gℓ IdGℓ ∈ G,

for (gi) ∈ (C∗)ℓ, act on ¯∂E and produce isomorphic holomorphic vector bundles in the following way:

(3.1) g · ¯∂E = ¯∂0 + Σ_{i<j} gi gj^(−1) γij.
In particular, for g = (t^ℓ, t^(ℓ−1), . . . , t), letting t → 0, we can see E as a small complex deformation of Gr(E). Our starting point to produce HYM connections on E′ = π∗E over X′ will then be the HYM connection A0 on Gr(E) → X given by the Chern connection of (¯∂0, h0), where h0 is a Hermite–Einstein metric on the polystable bundle Gr(E). Rather than working with the single bundle E, we will consider the family of bundles given by the G-action on Dolbeault operators. This will require the following proposition, whose proof follows as in [19] (see also [5, 10] for a detailed treatment). We introduce the notation

V := H0,1(X, End(Gr(E)))

for the space of harmonic (0, 1)-forms with values in End(Gr(E)), where the metrics used to compute adjoints are ω on X and h0 on Gr(E). Note that the G-action on E induces a linear representation G → GL(V ).
Proposition 3.1. There exists a holomorphic K-equivariant map

Φ : B → Ω0,1(X, End(Gr(E)))

from a ball around the origin B ⊂ V such that:
(1) Φ(0) = 0;
(2) Z := {b ∈ B | (¯∂0 + Φ(b))^2 = 0} is a complex subspace of B;
(3) if (b, b′) ∈ Z^2 lie in the same G-orbit, then ¯∂0 + Φ(b) and ¯∂0 + Φ(b′) induce isomorphic holomorphic bundle structures;
(4) the G^C(Gr(E))-orbit of any small complex deformation of Gr(E) intersects Φ(Z).

Here, G^C(Gr(E)) = Γ(GL(Gr(E), C)) stands for the full gauge group of Gr(E). The space Z corresponds to the space of integrable Dolbeault operators in the image of Φ, and Φ(B) is a slice for the gauge group action on the set of Dolbeault operators
nearby ¯∂0. We will then lift the slice to the space Ω0,1(X′, End(π∗Gr(E))) on the blown-up manifold X′, and denote by ˜Φ the map

π∗ ◦ Φ : B → Ω0,1(X′, End(Gr(E))),

where to ease notation we omitted π∗ for the pulled back bundle. The map ˜Φ might no longer provide a slice for the gauge-group action on X′, but what matters for us is that its image will contain all elements in the G-orbit of π∗¯∂E close to π∗¯∂0.
3.2. Perturbing the slice. The next step will be to perturb ˜Φ to reduce our problem to a finite dimensional one. The strategy to do this in families with respect to the parameter ε was inspired by [7, 8, 24].

Given the metrics ω on X \ Z, λ on Z′, and h = π∗h0 on E, together with the covariant derivatives given by ∇A0, we can introduce L2,l Sobolev norms on spaces of sections. We will denote by El the L2,l Sobolev completion of any space of sections E. In what follows, l ∈ N∗ will be assumed large enough for elements in El to admit as much regularity as required.
Proposition 3.2. Up to shrinking B, there is ε0 > 0 and a continuously differentiable map

ˇΦ : [0, ε0) × B → Ω0,1(X′, End(Gr(E)))l

such that for all (ε, b) ∈ [0, ε0) × B, if Ǎε,b is the Chern connection of (π∗¯∂0 + ˇΦ(ε, b), h):
(1) π∗¯∂0 + ˇΦ(ε, b) and π∗¯∂0 + ˜Φ(b) induce isomorphic holomorphic structures;
(2) ΛεiFǍε,b ∈ k.
Remark 3.3. By elliptic regularity, elements in the image of ˇΦ will actually be smooth. However, regularity of the map ˇΦ is with respect to the L2,l Sobolev norm.

We will use the implicit function theorem to prove Proposition 3.2, and will need the following lemma, where we still denote by A0 its pullback to π∗Gr(E), and use the notation A0^(sX+εsZ) for A0^(exp(sX+εsZ)).
Lemma 3.4. The map

Ψ : [0, ε0) × Γ(X, EndH(E))l+2 × Γ0(Z′, EndH(E))l+2 −→ Ω0(X′, EndH(E))l,
(ε, sX, sZ) ↦ ΛεFA0^(sX+εsZ) − cεId,

is continuously differentiable.

Above, the topological constants cε are given by

cε = 2πn (c1(E) ∪ [ωε]^(n−1)) · [X′] / (rank(E) volωε(X′)).
Proof. Note first that for ε = 0, Ψ(0, sX, sZ) = π∗(ΛωFA0^(sX) − c0 IdE) and is well defined. Then, recall that if f = exp(s) for s ∈ Γ(X′, EndH(E)), the curvature of f · A0 is given by

FA0^s = Ff·A0 = FA0 + (¯∂∂ − ∂¯∂)s + (∂s − ¯∂s) ∧ (∂s − ¯∂s),
where ∂ and ¯∂ stand for the (1, 0) and (0, 1) components of dA0 (see e.g. [5, Section 1]). In particular, taking s = sX + εsZ,

FA0^s = FA0 + (¯∂∂ − ∂¯∂)sX + ε(¯∂∂ − ∂¯∂)sZ + (∂sX − ¯∂sX) ∧ (∂sX − ¯∂sX)
+ ε(∂sX − ¯∂sX) ∧ (∂sZ − ¯∂sZ) + ε(∂sZ − ¯∂sZ) ∧ (∂sX − ¯∂sX)
+ ε^2(∂sZ − ¯∂sZ) ∧ (∂sZ − ¯∂sZ).
That is, ignoring the first term FA0, there are six remaining terms, which we denote FAs^i for i = 1, . . . , 6. For each term we consider the factors coming from Z′ and from X (using (2.8)) in ΛεFAs^i and can conclude that Ψ is smooth. For example, for the term FAs^2 = ε(¯∂∂ − ∂¯∂)sZ,

ΛεFAs^2 = nε (DsZ ∧ (ω + ελ)^(n−1)) / (ω + ελ)^n,

so that

ι∗ΛεFAs^2 = nε DsZ ∧ ((n−1 choose m+1) ω^(m+1) ∧ (ελ)^(n−m−2) + O(ε^(n−m−1))) / ((n choose m+1) ω^(m+1) ∧ (ελ)^(n−m−1) + O(ε^(n−m)))
= (n − m − 1) DsZ ∧ (ω^(m+1) ∧ λ^(n−m−2) + O(ε)) / (ω^(m+1) ∧ λ^(n−m−1) + O(ε)),

noting that here O(ε) denotes a polynomial in ε whose coefficients are 2n-forms on a neighbourhood of Z′, such that O(0) = 0. We also note that ω^(m+1) ∧ λ^(n−m−1) is a volume form on a neighbourhood of Z′. We conclude that

(ΛεFAs^2)Z = ι_∗(p0 ◦ ι∗)ΛεFAs^2

is a smooth function of (ε, sZ) with values in Γ0(Z′, End(E)).
The X-component of ΛεFAs^2,

(ΛεFAs^2)X = (Id − ι_∗(p0 ◦ ι∗))ΛεFAs^2,

is of the form π∗φ for some φ ∈ Γ(X, End(E)). The section φ is given as the continuous extension of π∗(Id − ι_∗(p0 ◦ ι∗))ΛεFAs^2 across Z ⊆ X. On X′ \ Z′ we have

ΛεFAs^2 = nε (DsZ ∧ (ω^(n−1) + O(ε))) / (ω^n + O(ε)),

which depends smoothly on sZ and ε. As π∗(Id − ι_∗(p0 ◦ ι∗)) is linear, φ depends smoothly on these variables too.

Using that sX is a pulled back section, at points in Z′ we have DsX ∧ ω^(m+1) = 0, from which we deduce ι∗ΛεFAs^1 = O(1) and ι∗ΛεFAs^3 = O(1). This shows, as for (ΛεFAs^2)Z, that (ΛεFAs^1)Z and (ΛεFAs^3)Z are C1. The other terms FAs^i can be dealt with in a similar manner. □
Proof of Proposition 3.2. For b ∈ B, we will denote by Ab the Chern connection associated to (π∗¯∂0 + ˜Φ(b), h), where h = π∗h0. Note that in particular A0 is the pullback of an HYM connection on Gr(E). The aim is to apply the implicit function theorem to perturb Ab along gauge orbits in order to satisfy point (2) of the statement. The key will be to consider small perturbations along the exceptional divisor. Recall the splitting from Section 2.3.1 induced by the operator ι_∗:

iΓ(X′, EndH(Gr(E), h)) = iΓ(X, EndH(Gr(E), h)) ⊕ iΓ0(Z′, EndH(Gr(E), h)),
that we will simply denote

Γ(X′) = Γ(X) ⊕ Γ0(Z′).

For (sX, sZ) ∈ Γ(X) ⊕ Γ0(Z′), and ε small enough, we define

Ab(ε, sX, sZ) = Ab^(sX+εsZ),

where sX + εsZ stands for π∗sX + ε ι_∗sZ ∈ Γ(X′). By the regularity of ˜Φ, the assignment (b, ε, sX, sZ) ↦ Ab(ε, sX, sZ) − A (resp. (b, ε, sX, sZ) ↦ FAb(ε,sX,sZ)) is smooth from B × [0, ε0) × Γ(X′)l to Ω1(X′, End(E))l−1 (resp. Ω2(X′, End(E))l−2), for any ε0 small enough. Arguing as in Lemma 3.4, using the fact that the perturbations along Z′ are O(ε), we deduce that the operator

˜Ψ : B × [0, ε0) × Γ(X′)l → Γ(X′)l−2, (b, ε, sX, sZ) ↦ ΛεiFAb(ε,sX,sZ) − cε IdE,
is a C1 map. As A0 is HYM on Gr(E) → X, we have ˜Ψ(0) = 0. By the various lemmas of Section 2.3.2, its differential in the (sX, sZ) direction at zero is given by the map

Γ(X)l × Γ0(Z′)l → Γ(X)l−2 × Γ0(Z′)l−2, (sX, sZ) ↦
( ∆X   0  ) ( sX )
( ∗    ∆V ) ( sZ ),

which, from Lemma 2.1 and Lemma 2.5, has cokernel ik × {0}. Then, by a standard projection argument onto some orthogonal complement of ik, we can apply the implicit function theorem and obtain a C1 map (ε, b) ↦ s(ε, b) such that ˜Ψ(b, ε, s(ε, b)) lies in k, and conclude the proof by setting

ˇΦ(ε, b) = (Ab(ε, s(ε, b)))^(0,1) − A^(0,1). □
We will now explain that for each ε ∈ [0, ε0), the map

(3.2) µε : B → k, b ↦ ΛεiFǍε,b − cε IdE,

is a moment map for the K-action on B, for suitable symplectic forms Ωε on B. Recall from [4, 11] that for ε ∈ (0, ε0), the gauge action of G^C(π∗Gr(E), h) on the affine space ¯∂0 + Ω0,1(X′, End(Gr(E))) is hamiltonian for the symplectic form given, for (a, b) ∈ Ω0,1(X′, End(Gr(E)))^2, by

(3.3) Ωε^D(a, b) = ∫X′ trace(a ∧ b∗) ∧ ωε^(n−1)/(n − 1)!,

with equivariant moment map ¯∂ ↦ ΛεFA¯∂, where A¯∂ stands for the Chern connection of (¯∂, h). Here, we identified the Lie algebra of G^C(Gr(E), h) with its dual by means of the invariant pairing

(3.4) ⟨s1, s2⟩ε := ∫X′ trace(s1 · s2∗) ωε^n/n!.

Note that the above expressions admit continuous extensions for ε = 0 when we restrict to the G^C(Gr(E), h0) action on ¯∂0 + Ω0,1(X, End(Gr(E))) and integrate over (X, ω).
Remark 3.5. We used above the Chern correspondence, for h fixed, between Dolbeault operators and hermitian connections to express the infinite dimensional moment map picture on the space of Dolbeault operators.

Proposition 3.6. Up to shrinking ε0 and B, for all ε ∈ [0, ε0), the map ¯∂0 + ˇΦ(ε, ·) is a K-equivariant map from B to ¯∂0 + Ω0,1(X′, End(Gr(E))) whose image is a symplectic submanifold for Ωε^D.
Proof. The equivariance follows easily from Proposition 3.1 and from the construction of ˇΦ in the proof of Proposition 3.2. For ε = 0, the map ˇΦ(0, ·) is obtained by perturbing ˜Φ = π∗ ◦ Φ. But Φ is complex analytic with, by construction, injective differential at the origin (see e.g. the original proof [19] or [10]). So is ˜Φ, and thus ˜Φ(B) is a complex subspace of Ω0,1(X′, End(π∗Gr(E))). We deduce that, up to shrinking B, ˜Φ induces an embedding of B such that the restriction of Ω0^D to ˜Φ(B) is non-degenerate (recall that Ω0^D is a Kähler form on the space of Dolbeault operators on X). As ˇΦ(ε, ·) is obtained by a small and continuous perturbation of ˜Φ, and as being a symplectic embedding is an open condition, the result follows. □
From this result, we deduce that the map µε defined in (3.2) is a moment map for the K-action on B with respect to the pulled back symplectic form

Ωε := ˇΦ(ε, ·)∗Ωε^D,

where we use the pairing ⟨·, ·⟩ε defined in (3.4) to identify k with its dual. From the discussion of Section 3.1, E is obtained as a small complex deformation of Gr(E), and thus by Proposition 3.1, ¯∂E is gauge equivalent to an element ¯∂b := ¯∂0 + Φ(b). Then, from properties of the maps Φ and ˇΦ, for all ε ∈ [0, ε0) and for all g ∈ G, π∗¯∂E will be gauge equivalent to π∗¯∂0 + ˇΦ(ε, g · b), provided g · b ∈ B. As a zero of µε corresponds to an HYM connection on (X′, ωε), we are left with the problem of finding a zero for µε in the G-orbit of b.
4. Proof of the main theorem

We carry on with notations from the last section, and our goal now is to prove Theorem 1.1. This is where we will need to assume that in Gr(E) = G1 ⊕ · · · ⊕ Gℓ, all stable components Gi are pairwise non-isomorphic. This implies that

g = aut(Gr(E)) = ⊕_{i=1}^ℓ C · IdGi,

and thus its compact form k is abelian, with K a compact torus.
4.1. The local convex cone associated to the K-action. In order to prove the existence of a zero of µε in Z := G · b ∩ B, we start by describing, at least locally, the images of Z by the maps (µε)ε∈[0,ε0). In this section, relying on [24], we will see that those images all contain translations of (a neighbourhood of the apex of) the same convex cone.

By simplicity of E, the stabiliser of b under the K-action is reduced to the S1-action induced by gauge transformations of the form e^(iθ) IdE. As those elements fix all the points in B, elements in S1 · IdE will play no role in the arguments that follow. Hence, we will work instead with the quotient torus K0 := K/S1 · IdE. Note that the constants cε that appear in the maps µε in (3.2) are chosen so that ⟨µε, IdE⟩ε = 0. As the µε take values in k, this is equivalent to saying that trace(µε) = 0.
Hence, setting k0 ⊂ k to be the set of trace-free elements in ⊕_{i=1}^ℓ iR · IdGi, we will consider the family of moment maps µε : B → k0 for the K0-action, and we may, and will, assume that the stabiliser of b is trivial. Then, by using the inner product ⟨·, ·⟩ε to identify k0 ≃ k∗0, we can see the maps µε as taking values in k∗0:

µ∗ε : B → k∗0.
There is a weight decomposition of V under the abelian K-action

(4.1) V := ⊕_{m∈M} Vm,

for M ⊂ k∗0 the lattice of characters of K0. In the matrix block decomposition of V = H0,1(X, End(Gr(E))) induced by Gr(E) = G1 ⊕ · · · ⊕ Gℓ, using the product hermitian metric h0, we have

V = ⊕_{1≤i,j≤ℓ} H0,1(X, G∗i ⊗ Gj).

The action of g ∈ K0 on Vij := H0,1(X, G∗i ⊗ Gj) is, by Equation (3.1):

(4.2) g · γij = gi gj^(−1) γij.
Thus, in the weight space decomposition (4.1), Vij is the eigenspace with weight

(4.3) mij := (0, . . . , 0, 1, 0, . . . , 0, −1, 0, . . . , 0),

where +1 appears in the i-th position and −1 in the j-th position. If we decompose b accordingly as

(4.4) b = Σ_{ij} bij,

where bij ∈ Vij is non-zero, as ¯∂E = ¯∂0 + γ with γ upper triangular, or equivalently as E is obtained by successive extensions of the stable components Gi, only indices (i, j) with i < j will appear in (4.4). From now on, we will restrict our setting to

B ∩ ⊕_{bij≠0} Vij,

which we still denote by B. That is, we only consider weight spaces that appear in the decomposition of b. Similarly, we use the notation V for ⊕_{bij≠0} Vij.
To sum up, we are in the following setting:
(R1) the compact torus K0 acts effectively and holomorphically on the complex vector space V ;
(R2) there is a continuous family of symplectic forms (Ωε)0≤ε<ε0 on B ⊂ V around the origin, with respect to which the K0-action is hamiltonian;
(R3) the point b ∈ B has trivial stabiliser, 0 in its K0^C-orbit closure, and for all weights mij ∈ M appearing in the weight space decomposition of V , bij ≠ 0;
(R4) the restriction of the symplectic form Ω0 to the K0^C-orbit of b is non-degenerate.

This last point follows as in the proof of Proposition 3.6. We set

Z := B ∩ (K0^C · b).

We also introduce

σ := Σ_{bij≠0} R+ · mij ⊂ k∗0
with {mij | bij ≠ 0} the set of weights that appear in the decomposition of b ∈ V , and for η > 0

ση := Σ_{bij≠0} [0, η) · mij ⊂ k∗0.

Note that by the local version of Atiyah and Guillemin–Sternberg’s convexity theorem, there exists η > 0 such that µ∗ε(0) + ση ⊂ µ∗ε(B) for all ε small enough (see the equivariant Darboux theorem [14, Theorem 3.2] combined with the local description of linear hamiltonian torus actions [14, Section 7.1]). By [24, Proposition 4.6], the properties (R1)–(R4) listed above actually imply:
Proposition 4.1. Up to shrinking B and ε0, there exists η > 0 such that for all
|
| 921 |
+
ε ∈ [0, ε0),
|
| 922 |
+
µ∗
|
| 923 |
+
ε(0) + Int(ση) ⊂ µ∗
|
| 924 |
+
ε(Z)
|
| 925 |
+
and
|
| 926 |
+
µ∗
|
| 927 |
+
ε(0) + ση ⊂ µ∗
|
| 928 |
+
ε(Z).
|
| 929 |
+
Remark 4.2. The fact that the interior of µ∗ε(0) + ση is included in the image of the K0^C-orbit of b by µ∗ε is not stated explicitly in [24], but follows from the discussion at the beginning of the proof of [24, Proposition 4.6].
4.2. Solving the problem. From Proposition 4.1, to prove the existence of a zero of µε in Z, it is enough to show that −µ∗ε(0) ∈ Int(ση), which reduces to −µ∗ε(0) ∈ Int(σ) for small enough ε. Arguing as in [24, Lemma 4.8], σ and its dual

σ∨ := {v ∈ k0 | ⟨m, v⟩ ≥ 0 for all m ∈ σ}

are strongly convex rational polyhedral cones of dimension ℓ − 1. Note that here the pairing ⟨·, ·⟩ is the natural duality pairing. By duality, σ = (σ∨)∨, and we are left with proving

−µ∗ε(0) ∈ Int((σ∨)∨).

The cone σ∨ can be written

σ∨ = Σ_{a∈A} R+ · va

for a finite set of generators {va}_{a∈A} ⊂ k0. Hence, our goal now is to show that ⟨µ∗ε(0), va⟩ < 0 for all a ∈ A, which by construction is equivalent to

(4.5)  ⟨µε(0), va⟩ε < 0,

under the assumption that for any F ∈ E[ω],

(4.6)  µLε(F) <ε→0 µLε(E).

We will then study Equations (4.5) and (4.6) in more detail. In order to simplify the notation, in what follows we will assume that all the stable components of Gr(E) have rank one, so that trace(IdGi) = 1 for 1 ≤ i ≤ ℓ. The general case can easily be adapted, and is left to the reader.
4.2.1. Condition (4.5): generators of the dual cone. We will give here a more precise form for the generators {va}_{a∈A} of σ∨. Recall from [15, Section 1.2] the method to find such generators: as σ is (ℓ − 1)-dimensional, each of its facets is generated by ℓ − 2 elements amongst its generators (mij). Then, a generator va for σ∨ will be an "inward pointing normal" to such a facet. Hence, if

va = Σ_{i=1}^ℓ ai IdGi

is a generator of σ∨, there exists a set S := {mij} of ℓ − 2 generators of σ such that

⟨mij, va⟩ = 0 for all mij ∈ S.

Moreover, va ∈ k0 should be trace-free, and as we assume here rank(Gi) = 1 for all stable components, this gives

Σ_{i=1}^ℓ ai = 0.
Lemma 4.3. Up to scaling va, there exists a partition {1, . . . , ℓ} = I− ∪ I+ such that ai = −1/♯I− for all i ∈ I− and ai = 1/♯I+ for all i ∈ I+, where ♯ stands for the cardinality of a set.

Proof. The key is to observe that if mij, mjk ∈ S, then mik ∉ S. Indeed, by (4.3), mij + mjk = mik, and those are generators of the cone. Equivalently, if mij, mik ∈ S, then mjk ∉ S. We then assign an oriented graph Ga to va. The vertices are labelled a1 to aℓ, and we draw an oriented edge from ai to aj if ai = aj and i < j. For each mij ∈ S, ⟨mij, va⟩ = 0 gives ai = aj. Hence, Ga has at least ℓ − 2 edges. To prove the result, it is enough to show that Ga has 2 connected components. Indeed, we can then set I− = {i | ai < 0} and I+ = {i | ai > 0}. All elements ai for i ∈ I− will correspond to the same connected component and be equal, and similarly for i ∈ I+. As Σ_{i=1}^ℓ ai = 0, we obtain the result by rescaling.

Proving that Ga has two connected components is then routine. It has ℓ vertices and ℓ − 2 oriented edges, with the rule that if there is an edge from ai to aj and an edge from ai to ak, then there is no edge from aj to ak. We consider the number of edges that start from a1. If there are ℓ − 2 of those, then the connected component of a1 has at least ℓ − 1 vertices, and we are left with at most one singleton for the other component. The fact that va is trace-free imposes that there are at least 2 connected components, and we are done in that case. Then, if there are ℓ − 2 − k edges from a1, its connected component has at least ℓ − 1 − k elements, and we are left with at most k + 1 vertices and k edges for the other components. But it is easy to show, by induction on k, that the rule stated above implies that there will be at most one connected component for such a graph with k + 1 vertices and k edges, and we are done.
□
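The counting argument in the proof above is easy to check computationally. The sketch below is purely illustrative (the graphs and edge sets are hypothetical examples, not data from the paper): it builds the graph Ga on ℓ vertices from an edge set S of equalities ai = aj and counts its connected components with a union–find.

```python
# Illustrative sketch of the graph G_a from the proof of Lemma 4.3.
# Each edge (i, j) encodes the equality a_i = a_j forced by a generator
# m_ij in S; the edge sets below are hypothetical examples.

def components(num_vertices, edges):
    """Count connected components of a graph via union-find."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(num_vertices)})

# l = 4 and S = {m_12, m_34} (0-indexed edges): a_1 = a_2 and a_3 = a_4,
# giving exactly the two components I_+ = {1, 2}, I_- = {3, 4} of Lemma 4.3.
print(components(4, [(0, 1), (2, 3)]))  # 2
```

With ℓ vertices and ℓ − 2 such edges, two components is the generic outcome, matching the partition produced in the lemma.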
We can now translate condition (4.5): by Lemma 4.3, it is equivalent to

(4.7)  (1/♯I+) Σ_{i∈I+} ⟨µε(0), IdGi⟩ε < (1/♯I−) Σ_{i∈I−} ⟨µε(0), IdGi⟩ε.
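As a concrete sanity check of Lemma 4.3, one can verify in a small example that the vectors built from a partition I− ∪ I+ as in the lemma do lie in the dual cone σ∨, while partitions incompatible with the upper-triangular ordering i < j do not. The following sketch is purely illustrative (the choice ℓ = 3 with all bij ≠ 0 is a hypothetical example, not data from the paper):

```python
# Hypothetical example with l = 3 rank-one components and all b_ij != 0
# for i < j, so sigma is generated by the weights of (4.3):
#   m_12 = (1, -1, 0), m_13 = (1, 0, -1), m_23 = (0, 1, -1).
weights = [(1, -1, 0), (1, 0, -1), (0, 1, -1)]

def dot(m, v):
    return sum(mi * vi for mi, vi in zip(m, v))

def in_dual_cone(v):
    """v lies in sigma^vee iff <m, v> >= 0 for every generator m of sigma."""
    return all(dot(m, v) >= 0 for m in weights)

def vector_from_partition(i_plus, i_minus, l=3):
    """Build v_a with entries 1/#I_+ on I_+ and -1/#I_- on I_-, as in Lemma 4.3."""
    v = [0.0] * l
    for i in i_plus:
        v[i] = 1.0 / len(i_plus)
    for i in i_minus:
        v[i] = -1.0 / len(i_minus)
    return v

# Partitions respecting the ordering give elements of the dual cone...
print(in_dual_cone(vector_from_partition({0}, {1, 2})))     # True
print(in_dual_cone(vector_from_partition({0, 1}, {2})))     # True
# ...while a partition ignoring the i < j ordering does not:
print(in_dual_cone(vector_from_partition({0, 2}, {1})))     # False
```

Here the two admissible partitions reproduce the trace-free generators (1, −1/2, −1/2) and (1/2, 1/2, −1) of the form predicted by Lemma 4.3.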
4.2.2. Condition (4.6): one-parameter degenerations. We will associate to each generator va of σ∨ a subsheaf Fa ∈ E[ω]. Geometrically, the idea is that va ∈ k0 generates a one-parameter subgroup of K0 and a degeneration of E to F ⊕ E/F, to which is assigned the Hilbert–Mumford weight µLε(F) − µLε(E) < 0. We let va = Σ_{i=1}^ℓ ai IdGi ∈ σ∨ be a generator as above, and define

Fa = ⊕_{i∈I+} Gi

as a smooth complex vector bundle; we will show that ∂E(Fa) ⊂ Ω^{0,1}(X′, Fa). This implies that Fa ∈ E[ω] as a holomorphic vector bundle, with Dolbeault operator the restriction of ∂E. Recall that ∂E = ∂0 + γ = ∂0 + Σ_{bij≠0} γij; that is, by the choice of b, the weights that appear in the weight decomposition of γ are the same as those that appear in the decomposition of b. In the matrix block decomposition given by ⊕_{i=1}^ℓ Gi, the operator ∂0 is diagonal, and thus sends Fa to Ω^{0,1}(X′, Fa). We need to show that for each j ∈ I+, γ(Gj) ⊂ Ω^{0,1}(X′, Fa). As va ∈ σ∨, it satisfies, for every generator mij of σ,

⟨mij, va⟩ ≥ 0,

that is, for all (i, j) with i < j and bij ≠ 0,

ai − aj ≥ 0.

As j ∈ I+, this implies ai ≥ aj > 0. Hence, if i < j is such that bij ≠ 0, then i ∈ I+. Equivalently, for i < j, i ∈ I− implies γij = 0, and thus we see that γ(Gj) ⊂ Ω^{0,1}(X′, Fa), and hence ∂E(Fa) ⊂ Ω^{0,1}(X′, Fa).

Then we have Fa ∈ E[ω], and Condition (4.6) gives

µLε(Fa) <ε→0 µLε(E),

which, by the see-saw property of slopes (see e.g. [25, Corollary 3.5]), gives

µLε(Fa) <ε→0 µLε(E/Fa),

and thus (recall we assume rank(Gi) = 1):

(4.8)  (1/♯I+) Σ_{i∈I+} µLε(Gi) <ε→0 (1/♯I−) Σ_{i∈I−} µLε(Gi).
4.2.3. Conclusion. Recall that Equation (4.8) means that in the ε-expansion of

(1/♯I+) Σ_{i∈I+} µLε(Gi) − (1/♯I−) Σ_{i∈I−} µLε(Gi),

the first non-zero term is strictly negative. By Chern–Weil theory, using the fact that A0 and Ǎε,0 are gauge-equivalent by point (2) of Proposition 3.2, we have

µLε(Gi) = c1(Gi) · [ωε]^{n−1} = (1/2π) ⟨µε(0), IdGi⟩ε + (cε/2π) ⟨IdE, IdGi⟩ε.

Hence Inequality (4.8) implies Inequality (4.7) for ε small enough, which establishes the existence of bε ∈ Z such that µε(bε) = 0. Then, by construction, the associated connections Ǎε,bε provide HYM connections with respect to ωε on bundles gauge-equivalent to E, where the gauge equivalences are given by elements of the finite-dimensional Lie group Aut(Gr(E)). To conclude the proof of Theorem 1.1, it then remains to show that the connections Ǎε,bε converge to π*A0 = Ǎ0,0 in any L^{2,l} Sobolev norm. By construction of Ǎε,b in Proposition 3.2, it is enough to prove that bε converges to 0 when ε → 0. Recall from [14, Theorem 3.2 and Section 7.1] that B can be chosen so that µ∗ε is given by

(4.9)  µ∗ε(b′) = µ∗ε(0) + Σ_{ij} ||b′ij||²ε · mij,

for some norm ||·||ε that depends continuously on ε. As µε(0) → µ0(0) = 0 when ε → 0, the equation µ∗ε(bε) = 0 implies that ||(bε)ij||ε → 0 for all (i, j) as ε → 0 (here we use that the weights mij span a strongly convex cone, so a non-negative combination of them can only tend to 0 if all its coefficients do). As the norms ||·||ε vary continuously, they are mutually bounded, and thus bε → 0 as ε → 0, which concludes the proof of Theorem 1.1.
4.2.4. Proof of the corollaries. We now comment on the various corollaries stated in the introduction. First, Corollary 1.3 is a direct application of Theorem 1.1, where E = Gr(E) has a single stable component. Corollary 1.4 also follows directly, using Formula (1.2). What remains is to show Corollary 1.5. The only remaining case to study is when for all F ∈ E[ω], µLε(F) ≤ε→0 µLε(E), with at least one equality. In that situation, the discussion in the last two sections shows that −µε(0) ∈ σ will lie in the boundary of σ. Hence, by Proposition 4.1, there is a boundary point b′ ∈ Z in the orbit closure of b with µε(b′) = 0. This point corresponds to a HYM connection on a vector bundle that is then polystable for the holomorphic structure given by Ǎ^{0,1}_{ε,b′}, with respect to Lε. As this bundle corresponds to a boundary point in the complex orbit of b, it admits a small complex deformation to π*E. As semi-stability is an open condition, we deduce that π*E is itself semi-stable for Lε.
References

[1] Claudio Arezzo and Frank Pacard. Blowing up and desingularizing constant scalar curvature Kähler manifolds. Acta Math., 196(2):179–228, 2006.
[2] Claudio Arezzo and Frank Pacard. Blowing up Kähler manifolds with constant scalar curvature. II. Ann. of Math. (2), 170(2):685–738, 2009.
[3] Claudio Arezzo, Frank Pacard, and Michael Singer. Extremal metrics on blowups. Duke Math. J., 157(1):1–51, 2011.
[4] M. F. Atiyah and R. Bott. The Yang-Mills equations over Riemann surfaces. Philos. Trans. Roy. Soc. London Ser. A, 308(1505):523–615, 1983.
[5] Nicholas Buchdahl and Georg Schumacher. Polystable bundles and representations of their automorphisms. Complex Manifolds, 9(1):78–113, 2022.
[6] Nicholas P. Buchdahl. Blowups and gauge fields. Pacific J. Math., 196(1):69–111, 2000.
[7] Ruadhaí Dervan. Stability conditions for polarised varieties. ArXiv preprint arXiv:2103.03177, 2021.
[8] Ruadhaí Dervan and Lars Martin Sektnan. Extremal Kähler metrics on blow-ups. ArXiv preprint arXiv:2110.13579, 2021.
[9] Ruadhaí Dervan and Lars Martin Sektnan. Hermitian Yang-Mills connections on blowups. J. Geom. Anal., 31(1):516–542, 2021.
[10] A.-K. Doan. Group actions on local moduli space of holomorphic vector bundles. ArXiv preprint arXiv:2201.10851, 2022.
[11] S. K. Donaldson. Anti self-dual Yang-Mills connections over complex algebraic surfaces and stable vector bundles. Proc. London Math. Soc. (3), 50(1):1–26, 1985.
[12] S. K. Donaldson. Infinite determinants, stable bundles and curvature. Duke Math. J., 54(1):231–247, 1987.
[13] Simon K. Donaldson. Kähler geometry on toric manifolds, and some other manifolds with large symmetry. In Handbook of geometric analysis. No. 1, volume 7 of Adv. Lect. Math. (ALM), pages 29–75. Int. Press, Somerville, MA, 2008.
[14] Shubham Dwivedi, Jonathan Herman, Lisa C. Jeffrey, and Theo van den Hurk. Hamiltonian group actions and equivariant cohomology. SpringerBriefs in Mathematics. Springer, Cham, 2019.
[15] William Fulton. Introduction to toric varieties, volume 131 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 1993. The William H. Roever Lectures in Geometry.
[16] Daniel Huybrechts and Manfred Lehn. The geometry of moduli spaces of sheaves. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2010.
[17] Shoshichi Kobayashi. Curvature and stability of vector bundles. Proc. Japan Acad. Ser. A Math. Sci., 58(4):158–162, 1982.
[18] Shoshichi Kobayashi. Differential geometry of complex vector bundles. Princeton Legacy Library. Princeton University Press, Princeton, NJ, 2014. Reprint of the 1987 edition.
[19] M. Kuranishi. New proof for the existence of locally complete families of complex structures. In Proc. Conf. Complex Analysis (Minneapolis, 1964), pages 142–154. Springer, Berlin, 1965.
[20] Martin Lübke. Stability of Einstein-Hermitian vector bundles. Manuscripta Math., 42(2-3):245–257, 1983.
[21] V. B. Mehta and A. Ramanathan. Restriction of stable sheaves and representations of the fundamental group. Invent. Math., 77(1):163–172, 1984.
[22] David Mumford. Projective invariants of projective structures and applications. In Proc. Internat. Congr. Mathematicians (Stockholm, 1962), pages 526–530. Inst. Mittag-Leffler, Djursholm, 1963.
[23] Achim Napame and Carl Tipler. Toric sheaves, stability and fibrations. ArXiv preprint arXiv:2210.04587, 2022.
[24] Lars Martin Sektnan and Carl Tipler. Analytic K-semi-stability and wall crossing. In preparation.
[25] Lars Martin Sektnan and Carl Tipler. Hermitian Yang–Mills connections on pullback bundles. ArXiv preprint arXiv:2006.06453, 2020.
[26] Reza Seyyedali and Gábor Székelyhidi. Extremal metrics on blowups along submanifolds. J. Differential Geom., 114(1):171–192, 2020.
[27] Gábor Székelyhidi. The Kähler-Ricci flow and K-polystability. Amer. J. Math., 132(4):1077–1090, 2010.
[28] Gábor Székelyhidi. Blowing up extremal Kähler manifolds II. Invent. Math., 200(3):925–977, 2015.
[29] Fumio Takemoto. Stable vector bundles on algebraic surfaces. Nagoya Math. J., 47:29–48, 1972.
[30] K. Uhlenbeck and S.-T. Yau. On the existence of Hermitian-Yang-Mills connections in stable vector bundles. Volume 39, pages S257–S293, 1986. Frontiers of the mathematical sciences: 1985 (New York, 1985).
[31] Claire Voisin. Hodge theory and complex algebraic geometry. I, volume 76 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, English edition, 2007. Translated from the French by Leila Schneps.

Andrew Clarke, Instituto de Matemática, Universidade Federal do Rio de Janeiro, Av. Athos da Silveira Ramos 149, Rio de Janeiro, RJ, 21941-909, Brazil
Email address: andrew@im.ufrj.br

Carl Tipler, Univ Brest, UMR CNRS 6205, Laboratoire de Mathématiques de Bretagne Atlantique, France
Email address: Carl.Tipler@univ-brest.fr
5tAyT4oBgHgl3EQfpfgu/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
6tAzT4oBgHgl3EQf-P5_/content/2301.01931v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a774ea994a92ea2dab8ff470f286beda48d6494faac8cf57b615432cf513aac3
size 1683339

6tAzT4oBgHgl3EQf-P5_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:abb1a144b4a3e973b2501afa165c9ae07ab635a718088962c860d3435b736b69
size 152983
6tAzT4oBgHgl3EQfvP3q/content/tmp_files/2301.01705v1.pdf.txt ADDED
@@ -0,0 +1,885 @@
A Survey on Deep Industrial Transfer Learning in Fault Prognostics

Benjamin Maschler1

ABSTRACT
Due to its probabilistic nature, fault prognostics is a prime example of a use case for deep learning utilizing big data. However, the low availability of such data sets, combined with the high effort of fitting, parameterizing and evaluating complex learning algorithms for the heterogeneous and dynamic settings typical of industrial applications, often prevents the practical application of this approach. Automatic adaptation to new or dynamically changing fault prognostics scenarios can be achieved using transfer learning or continual learning methods. In this paper, a first survey of such approaches is carried out, aiming at establishing best practices for future research in this field. It is shown that the field lacks common benchmarks to robustly compare results and facilitate scientific progress. Therefore, the data sets utilized in these publications are surveyed as well in order to identify suitable candidates for such benchmark scenarios.

Keywords: Artificial Intelligence, Continual Learning, Domain Adaptation, Fault Prognostics, Feature Extraction, Regression, Remaining Useful Lifetime, Survey, Transfer Learning

1. INTRODUCTION
Fault prognostics is the ability to predict the time at which an entity becomes dysfunctional, i.e. faulty [1]. Depending on the entity and its environment, causes for faults can be diverse and complex, rendering fault prognostics highly probabilistic. In recent years, the combination of big data and deep learning methods has demonstrated great potential [1–3]. However, many approaches delivering promising results in research environments fail to achieve widespread utilization in industry [4–7].

One major challenge towards wider adoption is the high effort of fitting, parameterizing and evaluating complex learning algorithms for the heterogeneous and dynamic settings typical of industrial applications [1, 3, 4]. This is especially severe for fault prognostics, as failures are usually to be avoided in productive systems, making the collection of labeled training data sets even harder than usual. A lack of automatic adaptability and of ready-to-use architectures or frameworks diminishes the benefits data-driven fault prognostics could bring and thereby deters potential users [4].

It is therefore necessary to lower the effort of adapting learning algorithms to changing problems, whether those changes are caused by internal problem dynamics or by the problem being new and dissimilar from others. One promising approach is the transfer of knowledge between different, non-identical instances of a problem, e.g. by deep transfer or deep continual learning [4]. However, as this is a recent research trend, little comparison or benchmarking of methods or results has been published, and no best-practice analysis has been performed.
Although there are first surveys on transfer learning in technical applications in general [8], there are none yet on fault prognostics in particular. Therefore, the objective of this article is

• to provide a brief introduction to industrial transfer learning methods as well as fault prognostics basics in order to facilitate mutual understanding between experts of the respective fields,
• to provide a comprehensive overview of published research activities in the field of deep industrial transfer learning for fault prognostics and to analyze it with regard to best practices and lessons learned in order to consolidate scientific progress and, thereby, offer guidance for future research projects,
• to provide a comprehensive overview of open-access fault prognostics data sets suitable for deep industrial transfer learning research and to analyze it in order to lower the threshold for new research in this field and allow for a benchmarking of results.

1 University of Stuttgart, Department of Computer Science, Electrical Engineering and Information Technology, Pfaffenwaldring 47, 70569 Stuttgart, Germany, +49 711 685 67295, benjamin.maschler@ias.uni-stuttgart.de, ORCID: 0000-0001-6539-3173
This article is organized as follows: Chapter 2 introduces the concepts and methods of deep industrial transfer learning as well as the basic principles of fault prognostics. Chapter 3 then briefly describes the research methodology. Chapter 4 is divided into two parts: the first part presents the surveyed publications, while the second part presents the corresponding data sets. Chapter 5 retains this structure, discussing first the publications and then the data sets. Chapter 6 concludes this article and points out new research directions.

2. RELATED WORK
In this chapter, first, the concepts and methods of deep industrial transfer learning are introduced. Then, an overview of the principles of fault prognostics is presented. Both serve to set the terminology for the remainder of this article and facilitate understanding between experts of the respective fields.
2.1 DEEP INDUSTRIAL TRANSFER LEARNING
A general approach to overcome the described challenges is the transfer of knowledge across multiple (sub-)problems. This makes it possible to create a (more) complete model of the problem to be solved across different scenarios and to adapt it dynamically again and again, without the need to completely retrain the algorithm representing said model every time.

In machine learning research, a distinction is made between two families of solutions: transfer learning, which aims solely at better solving a new target problem [9], and continual learning, which aims at solving a new target problem while maintaining the ability to solve previously encountered source problems [10]. In practice, however, this theoretical division often turns out to be unsuitable, since both the generalization capabilities of continual learning and the specialization capabilities of transfer learning might be helpful in extracting generalities from various known (sub-)problems and then adapting them to the problem at hand [8]. This is represented by the term industrial transfer learning [4, 8, 11].

Different methods of machine learning can be utilized in the context of (industrial) transfer resp. continual learning, e.g. artificial neural networks, support vector machines or Bayesian networks [9, 12]. When only deep learning methods are used, this is referred to as deep industrial transfer learning.

If the transfer of knowledge from known problems can influence the learning of solutions to new problems, then such an influence does not necessarily have to be positive. A harmful knowledge transfer is therefore called negative transfer [9, 12]. It is defined with regard to the difference in performance of a given learning algorithm with and without using data of the source problem, the so-called transfer performance. If the performance is better without using data of the source problem, negative transfer is present.

In practice, there are three main approach categories used in deep industrial transfer learning. The following sub-chapters will introduce them briefly.
2.1.1 Feature representation transfer
Feature representation transfer includes approaches which map the samples from the source and target problems into a common feature space to improve training on the target problem [8, 9, 12]. A central concept of feature representation transfer is domain adaptation [9, 12]. A distinction is made between unsupervised domain adaptation, which requires no target labels at all, and semi-supervised domain adaptation, which requires only a few target labels. Domain adaptation is usually based on minimizing the distance between the feature distributions of the different (sub-)problems. Common metrics for this distance are the maximum mean discrepancy (MMD) and the Kullback-Leibler (KL) divergence.
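As an illustration, the (biased) squared MMD between two feature samples can be estimated with a Gaussian kernel; the kernel choice and the bandwidth `gamma` are assumptions of this sketch, not prescribed by the surveyed works:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise Gaussian kernel values between the rows of a and b.
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    the empirical distributions of the samples x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

Two samples drawn from the same distribution yield an MMD near zero, while a shift between source and target features increases it, which is exactly the quantity a domain adaptation loss seeks to minimize.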
2.1.2 Parameter transfer

Parameter transfer includes approaches which pass parameters or initializations from the learning algorithm trained on the source problem to the target problem learning algorithm to improve its initialization before actual training on the target problem begins [8, 9, 12]. Two forms of parameter transfer can be distinguished in deep industrial transfer learning:
A Survey on Deep Industrial Transfer Learning in Fault Prognostics

Partial parameter transfer includes approaches that pass only the parameters of the feature extractor from the learning algorithm trained on the source problem to the target problem learning algorithm [13]. The feature extractor is then not subject to the training on the target problem but remains static throughout that phase. Such use of a shared feature extractor reduces the training effort on the target problem, because only a small part of the entire learning algorithm still needs to be trained [14].
Full parameter transfer includes approaches that pass all parameters and initializations of the learning algorithm trained on the source problem to the target problem learning algorithm [13]. The transferred learning algorithm is then further trained on the target problem. This process, also called finetuning, reduces the training effort on the target problem, because the learning algorithm is already pre-trained [14].
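The difference between the two forms can be sketched with a toy linear model trained by gradient descent; the model, loss and update rule here are illustrative assumptions, not taken from any surveyed prototype:

```python
import numpy as np

def train_step(params, x, y, lr=0.01, freeze_extractor=False):
    """One gradient step for a toy model y ~ (x @ W1) @ w2.

    freeze_extractor=True mimics partial parameter transfer: the
    transferred feature extractor W1 stays static and only the head w2
    is trained. freeze_extractor=False mimics full parameter transfer
    (finetuning): all transferred parameters keep being updated."""
    W1, w2 = params
    h = x @ W1                               # extracted features
    err = h @ w2 - y                         # prediction residual
    grad_w2 = h.T @ err / len(x)             # gradient w.r.t. the head
    if not freeze_extractor:
        grad_W1 = x.T @ (err[:, None] * w2[None, :]) / len(x)
        W1 = W1 - lr * grad_W1
    return W1, w2 - lr * grad_w2
```

In a deep learning framework the same distinction is typically expressed by marking the transferred extractor's parameters as non-trainable before finetuning begins.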
2.1.3 Regularization

Regularization-based continual learning includes approaches which extend the loss function of an algorithm to penalize the changing of parameters that were important for solving previously learned tasks [8, 10]. It is therefore related to finetuning, because it involves passing all parameters and initializations of the learning algorithm trained on the source problem to the target problem learning algorithm. However, most of the prominent examples of regularization methods are only suitable for classification problems [15].
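Schematically, such an extended loss adds a quadratic penalty on deviations from the parameters learned on earlier tasks, weighted by a per-parameter importance estimate (as in EWC-style methods); the importance weights and the factor `lam` are assumptions of this sketch:

```python
import numpy as np

def continual_loss(task_loss, params, old_params, importance, lam=1.0):
    """Loss extended for regularization-based continual learning.

    Changing parameters that were important for previously learned
    tasks (high `importance`) is penalized quadratically."""
    penalty = sum(float(np.sum(w * (p - p0) ** 2))
                  for p, p0, w in zip(params, old_params, importance))
    return task_loss + 0.5 * lam * penalty
```

When the current parameters equal the old ones, the penalty vanishes and only the new task's loss remains; large deviations on important parameters dominate the total loss.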
2.2 FAULT PROGNOSTICS

A fault is understood to be the arrival at a state of dysfunctionality. In relation to industrial components, it can be accompanied by the failure of other components or higher-level systems. Due to the high costs associated with unplanned failures, fault prognostics is an important field of research. Its subject is the prediction of the timing of a failure, usually without further consideration of its cause. Its goal is, among other things, to enable proactive maintenance, to increase operational safety and to reduce fault costs [2, 16–18].

Usually, three different approaches to fault prognostics are distinguished:

Model-based approaches, further subdivided into physics-based and expert-based approaches [17], describe deterioration processes using mathematical models, starting from the causes of faults and the factors influencing them. Such approaches are very efficient and accurate but require a complete understanding of the system. Adaptation to new or changed scenarios usually has to be done manually and is therefore only worthwhile for static and expensive scenarios [2, 16]. Data-based approaches, further subdivided into numerical or statistical and machine-learning-based approaches [17], describe deterioration processes on the basis of historical data, which can be obtained from real plants, test setups or simulations. Such approaches are usually inexpensive to develop, easy to adapt, and require little or no understanding of the system. On the other hand, they are very demanding in terms of variety, quantity and quality of data, and sometimes require a lot of training [2, 16]. Hybrid approaches combine model- and data-based methods with the goal of increasing the quality of data-based approaches without generating the high effort of exclusively model-based approaches [2, 16].
There is a large canon of research on data-based fault prognostics using deep learning methods. The approaches can be categorized into two classes based on their objective:

The prediction of the Remaining Useful Lifetime (RUL) as a continuous percentage of the total lifetime is the usual approach for fault prognostics [1, 3, 19, 20]. It is a regression problem. CNN, RNN, autoencoders and deep belief networks are mainly used [1–3]. Occasionally, time series prediction using LSTM is also used for RUL prediction [21]. Because many characteristics do not change significantly during the first part of the total lifetime [22], sometimes a piecewise linear RUL (p-RUL) is used instead of the fully linear RUL.
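A p-RUL target can be derived from the linear RUL by clipping it at a constant early-life value; the cap of 60 % below is an arbitrary illustrative choice, not a value from the surveyed studies:

```python
def piecewise_rul(t, total_life, cap=0.6):
    """Piecewise linear RUL as a fraction of the total lifetime.

    The linear RUL (1 - t/total_life) is held constant at `cap` during
    the early phase, where degradation indicators barely change."""
    return min(1.0 - t / total_life, cap)
```

The target thus stays flat while the component is still healthy and only decreases linearly once the linear RUL drops below the cap.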
An alternative to RUL prediction is State of Health (SoH) estimation in the form of a multi-class classification. While the term SoH originated in the field of battery research, where it denotes a continuous percentage SoH% relating different actual performance parameters to their respective nominal values [19, 20, 23], it is now also used for a discrete classification of the remaining lifetime, SoHclass, both in the battery context [24, 25] and beyond [26–29]. SoHclass estimation is primarily used when RUL prediction is not possible,

FIGURE 1. RUL, p-RUL, SoH% and SoHclass as a function of useful lifetime already passed (axes: passed useful lifetime from 0 % to 100 %; SoHclass levels: ok, at risk, declining)
for example, for methodological reasons or due to insufficient training data. In some cases, the more accurate RUL prediction is deliberately omitted because the SoHclass estimation is less complex and sufficient for the application at hand. Occasionally, the SoHclass is also used as a preliminary stage of a subsequent RUL prediction [30, 31]. Figure 1 shows an example of RUL, p-RUL, SoH% and SoHclass as a function of useful lifetime already passed. It can be seen that SoH% is advantageous over linear RUL in the context of strongly non-linear behavior of, e.g., battery capacity [19, 20]; however, in other areas with more linear behavior or a focus on the remaining lifetime as opposed to the remaining functionality, it does not provide any advantage. Subsequently, SoH will therefore be used in the sense of SoHclass throughout this article.
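Discretizing the lifetime into SoH classes then amounts to a simple threshold mapping; the class names follow Figure 1, while the thresholds (50 % and 80 %) and their ordering are illustrative assumptions:

```python
def soh_class(passed_fraction):
    """Map the passed fraction of useful lifetime to a discrete SoH class.

    Class names follow Figure 1; the 50 % and 80 % thresholds are
    illustrative assumptions, not taken from the surveyed studies."""
    if passed_fraction < 0.5:
        return "ok"
    if passed_fraction < 0.8:
        return "declining"
    return "at risk"
```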
3. RESEARCH METHODOLOGY

In order to present a comprehensive overview of the current state of research in the field of deep industrial transfer learning for fault prognostics, a two-step systematic literature review is conducted in this study.

In the first step, publications matching a combination of search terms are selected on Google Scholar. The search terms are listed in Table 1. One term from the category "term 1" and one term from the category "term 2" were selected for each search query.

In the second step, a manual selection process further filtered those publications. Only full-text, English-language, original research publications that utilized deep learning methods and some kind of transfer or continual learning on fault prognostics use cases and were published by the end of 2021 were included in the detailed analysis for this study.
TABLE 1. Search terms

Term 1: Transfer Learning; Continual Learning
Term 2: Fault Prognostics; Fault Prognosis; Remaining Useful Lifetime; State of Health
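The full query set is the Cartesian product of the two term categories; a minimal sketch of its enumeration, where the quoting style of the query strings is an assumption:

```python
from itertools import product

TERM_1 = ("Transfer Learning", "Continual Learning")
TERM_2 = ("Fault Prognostics", "Fault Prognosis",
          "Remaining Useful Lifetime", "State of Health")

# One query per (term 1, term 2) combination: 2 x 4 = 8 queries.
QUERIES = [f'"{t1}" "{t2}"' for t1, t2 in product(TERM_1, TERM_2)]
```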
4. RESULTS

In this chapter, the results of the systematic literature review are presented. First, the original research publications and the methods and scenarios utilized therein are described. Then, open-access fault prognostics data sets suitable for deep industrial transfer learning are introduced.

4.1 APPROACHES

The publications matching the criteria described in section 3 are listed in TABLE 2. All of them are studies involving deep-learning-based transfer or continual learning on fault prognostics use cases. In the following, they are analyzed grouped by their transfer approach category.
[32] uses domain adaptation for semi-supervised RUL prediction. The described prototype combines long short-term memory (LSTM) as feature extractor, an unspecified neural network as discriminator and fully connected neural networks (FCNN) as regressor. An evaluation on the NASA Turbofan data set demonstrates the positive transfer between source and target problems. The approach without transfer functionality was trained on the source problem only, because no labels should be necessary for training on the target problem. Comparisons with other domain adaptation algorithms from [33] also revealed better performance of the presented algorithm. The prototype described in [34] uses a combination of convolutional neural networks (CNN) as feature extractor, FCNN as discriminator and FCNN as regressor. An evaluation on the NASA Milling data set demonstrates the positive transfer between source and target problem, especially when only a small number of target samples is used. However, because only a single transfer scenario is considered, the validity of the study is limited.
[35] uses domain adaptation for unsupervised RUL prediction. The described prototype combines stacked denoising autoencoders as domain adaptor and FCNN as regressor, where only the feature extractor is adapted to the target problem and the regressor remains fixed. Thus, an unsupervised adaptation of the supervised pre-trained algorithm to the target problem is possible. An evaluation on a proprietary, univariate milling data set consisting of vibration data demonstrates the positive transfer between source and target problems. The prototype described in [33] uses a combination of LSTM as feature extractor and FCNN as regressor and discriminator. An evaluation on the NASA Turbofan data set demonstrates the positive transfer between source and target problems, utilizing a separate hyperparameter optimization for each transfer scenario. A comparison with other unsupervised domain adaptation algorithms (including [36]) shows that the presented algorithm achieves higher accuracy and, additionally, a comparison with the supervised approach of [37] is also in its favor. Furthermore, the alternative use of FCNN, CNN or recurrent neural networks (RNN) as feature extractors is investigated, with the best overall results obtained on an LSTM basis. In contrast, [30] uses a combination of FCNN and bidirectional gated recurrent units (GRU) as regressors. These are preceded by a feature extraction which generates previously defined features from the time series signals; these features are then checked for their domain invariance before further use. An evaluation on the FEMTO-ST Bearing data set demonstrates the positive transfer between source and target problems. A comparison with other algorithms (among others [36, 38]) shows that the presented algorithm achieves a higher accuracy and that the special feature extraction heavily contributes to this.

[38] describes a combination of CNN and FCNN as feature extractors, multi-kernel MMD as objective function for the adaptation process and FCNN as regressor. An extensive evaluation is performed on the FEMTO-ST Bearing data set, which, however, is only used in a univariate fashion after a fast Fourier transformation (FFT): At first, considering only the described prototype, the optimal parametrization of the objective function is investigated. It is shown that the MMD should be determined on features extracted as late as possible, i.e. on the results of the feature extraction. By means of a comparison with learning algorithms of the same architecture but without transfer functionality, trained on labeled source, target or source and target samples, it is shown that the presented algorithm produces a positive transfer and also generalizes better. A final comparison with other algorithms, which however only sporadically use deep transfer learning, demonstrates that the presented algorithm achieves higher accuracy. In [39], the authors of [38] present two other approaches to unsupervised RUL prediction using domain adaptation: an adversarial approach and a non-adversarial one. Both prototypes use a combination of CNN as feature extractor and FCNN as regressor. The non-adversarial approach is based on an extended loss function that considers the marginal probability distribution using multi-kernel MMD and the conditional probability distribution based on a fuzzy class division (analogous to SoH classes) in combination with MMD. The adversarial approach uses a modular discriminator for the marginal and conditional probability distributions, although their internal structures are not described in detail. Via a reverse-validation-based source sample selection, suitable source samples are identified before starting the actual adaptation process. A comprehensive evaluation is performed on the XJTU-SY and FEMTO-ST Bearing data sets, again on univariate time series of the FFT'ed raw data: By means of comparisons with learning algorithms of different architectures with and without transfer functionality and different training strategies, it is demonstrated that both algorithms presented produce a positive transfer as well as perform (in many cases significantly) better. On one data set, the non-adversarial approach comes out ahead, on the other the adversarial approach.

The prototype described in [40] uses a combination of FCNN as feature extractor, multi-kernel MMD as objective function for the adaptation process and kernel regression as regressor. An evaluation on the FEMTO-ST data set demonstrates a positive transfer between source and target problem, although the overall predictive accuracy is rather low. Instead of raw data, the power spectral density (PSD) is used as input. A comparison with other algorithms (including [33] and [38]) shows that the presented algorithm achieves higher accuracy. However, it remains unclear on which numbers this comparison is based, and the average values given for the presented approach are not taken from the publication itself. [41] describes a combination of CNN as feature extractor, FCNN as discriminator and FCNN as regressor. An evaluation on the FEMTO-ST data set demonstrates the better prediction quality of the algorithm compared to several other algorithms (among others [38]). Even though these other algorithms include one without transfer functionality, a direct evaluation of the transfer performance is not possible due to its different architecture. Instead, an investigation of the effect of different learning rates and kernel sizes is performed. Finally, [42] uses a combination of CNN and FCNN as feature extractors and FCNN as regressors. Via extensions of the loss functions for "healthy" and failure-prone samples (similar to SoH classes), a separate domain adaptation is performed for these two categories respectively. An evaluation on the NASA Turbofan and XJTU-SY Bearing data sets demonstrates a positive transfer between source and target problems. The influence of the transfer functionality is investigated for a conventional domain adaptation as well as for the proposed domain adaptation with separate treatment of different SoH classes. The proposed approach achieves the best results, but also requires a longer training time.
[37] uses finetuning for supervised RUL prediction. The described prototype combines bidirectional LSTM as feature extractor and FCNN as regressor. An evaluation on the NASA Turbofan data set demonstrates the positive transfer between the source and target problems. It is observed that even in the case of larger differences between source and target problems, e.g. in terms of the number of operating or fault conditions, the transfer was mostly positive. [43] combines a pre-trained CNN as feature extractor and its own bidirectional LSTM and FCNN as regressor. An evaluation on the XJTU-SY Bearing data set and a proprietary gearbox data set, whose univariate time series are both converted to images, demonstrates a positive transfer between the generic ImageNet data set as source problem and the aforementioned target problems. Moreover, the training time with transfer functionality is smaller than without. A comparison with other algorithms shows that the presented algorithm achieves higher accuracy. However, it remains unclear why the feature extractor is not adapted more to the present use case, for example regarding the same image being processed three times because the pre-trained algorithm used has three input channels. Moreover, [44] compares different approaches of parameter transfer for supervised RUL prediction. Based on an investigation of different deep learning methods without transfer functionality, a prototype combining LSTM and FCNN as regressors is described. An evaluation on a proprietary air compressor data set and the NASA Turbofan data set demonstrates a positive transfer between the source and target problems. Here, different approaches to parameter transfer were investigated, e.g. sub-algorithms that were finetuned as well as sub-algorithms whose parameters were kept static.
[45] uses a shared feature extractor with finetuning for supervised RUL prediction. The described prototype combines LSTM as feature extractor and FCNN as regressor, preserving the feature extractor without any changes and adapting only the regressor to the target problem, if necessary. This adaptation depends on the result of the Gray Relational Analysis [46] of handcrafted features of specific univariate time series of the source and target data sets. An evaluation on the NASA and CALCE Battery data sets proves the positive transfer between source and target problems, especially in the period just before failure occurrence. Furthermore, the training time with transfer functionality is smaller than without. A comparison with other algorithms shows that the presented algorithm achieves a higher accuracy than other deep-learning-based algorithms but a lower accuracy than some non-deep-learning-based algorithms. [47] also uses a shared feature extractor with finetuning, here in the form of a reduction of the MMD between the probability distributions of the source and target problems via an extended loss function, for supervised, indirect RUL prediction. Specifically, the wear of the cutting edge of cutting tools is determined directly, which is supposed to allow the indirect inference of the RUL. The described prototype combines a pre-trained CNN as feature extractor and a custom FCNN as regressor. Only the regressor is adapted to the target problem and the feature extractor remains unchanged. An evaluation on an industrial image data set returns a high accuracy value but does not provide any comparative results. Thus, neither an assessment of the relative performance nor of the transfer performance is possible.
[27] uses regularization-based continual learning for supervised SoH estimation. The described prototype combines LSTM and FCNN as classifier. An evaluation on the NASA Turbofan data set demonstrates a positive, even multiple transfer between source and target problems. A strong dependence of the transfer performance on the similarity of source and target problems is described. [25] expands on those findings, using a similar architecture on the NASA Battery data set. Again, a positive, multiple transfer between source and target problems can be shown. However, the strong dependence of transfer performance on similarity, direction, sequence, and number of source and target problems, which has not yet been investigated in detail, is described as problematic, as it naturally greatly influences the approach's applicability.
TABLE 2. Overview of publications utilizing deep industrial transfer learning for fault prognostics

Source | Learning Category | Problem Category | Data Type(s) | Data Set(s) | Transfer Category
Zhang et al. (2018) [37] | Supervised | RUL | Multivar. Time Series | NASA Turbofan | Finetuning
Sun et al. (2019) [35] | Unsupervised | RUL | Univar. Time Series | Proprietary Milling | Domain Adaptation
Da Costa et al. (2020) [33] | Unsupervised | RUL | Multivar. Time Series | NASA Turbofan | Domain Adaptation
Maschler et al. (2020) [27] | Supervised | SOHclass | Multivar. Time Series | NASA Turbofan | Regularization
Ragab et al. (2020) [32] | Unsupervised | RUL | Multivar. Time Series | NASA Turbofan | Domain Adaptation
Russell et al. (2020) [34] | Semi-supervised | RUL | Univar. Time Series | NASA Milling | Domain Adaptation
Tan et al. (2020) [45] | Supervised | SOH% | (Hand-crafted) Features | NASA Battery; CALCE Battery | Shared Feature Extractor plus Finetuning
Zhang et al. (2020) [43] | Supervised | RUL | (Self-generated) Images | ImageNet; XJTU-SY Bearing; Proprietary Gearbox | Finetuning
Cao et al. (2021) [30] | Unsupervised | RUL | Univar. Time Series | FEMTO-ST Bearing | Domain Adaptation
Cheng et al. (2021) [38] | Unsupervised | RUL | Univar. Time Series | FEMTO-ST Bearing (FFT) | Domain Adaptation
Cheng et al. (2021) [39] | Unsupervised | RUL | Univar. Time Series | XJTU-SY Bearing (FFT); FEMTO-ST Bearing (FFT) | Domain Adaptation
Ding et al. (2021) [40] | Unsupervised | RUL | Univar. Time Series | FEMTO-ST Bearing (PSD) | Domain Adaptation
Gribbestad et al. (2021) [44] | Supervised | RUL | Multivar. Time Series | Proprietary Air Compressor; NASA Turbofan | Parameter Transfer
Marei et al. (2021) [47] | Supervised | RUL | Images | Proprietary Milling 2 | Shared Feature Extractor plus Finetuning
Maschler et al. (2021) [25] | Supervised | SOHclass | (Hand-crafted) Features | NASA Battery | Regularization
Zeng et al. (2021) [41] | Unsupervised | RUL | Univar. Time Series | FEMTO-ST Bearing | Domain Adaptation
Zhang et al. (2021) [42] | Supervised | RUL | Multivar. Time Series | NASA Turbofan; XJTU-SY Bearing | Domain Adaptation
4.2 DATA SETS

In order to demonstrate the applicability of deep industrial transfer learning algorithms to real-world problems, the data sets used need to reflect the complexity and dynamics of such problems. Therefore, the data sets used in the surveyed publications are listed in TABLE 3, with the exception of proprietary data sets that are only accessible to some researchers and therefore not relevant to a wider audience. Two recently published data sets not yet used in publications were added in order to include them in the discussion in chapter 4. In the following, all listed data sets are described:
For the NASA Milling data set [48], sixteen milling tools of unspecified type were run to failure on a MC-510V milling center under eight different operating conditions characterized by different depths of cut, feed rates and materials. Acoustic emissions, vibrations and current are recorded at 250 Hz, resulting in approximately 1.5 million multivariate samples.

For the NASA Bearing data set [49], twelve Rexnord ZA-2115 bearings were run to failure, with four at a time mounted on the same shaft but subject to different radial forces. Horizontal and vertical (for some bearings only one) acceleration signals are recorded at 20 kHz for approximately 1 second of every tenth minute, resulting in approximately 15.5 million multivariate (bi- respectively univariate if only one bearing is used) samples. However, because the experiments were stopped once one bearing became faulty, there are exact RUL labels for only four bearings (one experiment encountered a double fault).

For the NASA Battery data set [50], 34 type 18650 lithium-ion battery cells were run to failure under 34 different operating conditions characterized by different ambient temperatures, discharge modes and stopping conditions. For each (dis-)charging cycle, static sensor values as well as different time series at 1 Hz, e.g. battery voltage, current and temperature, are recorded. Therefore, there are two levels of multivariate time series contained in this data set. The total number of samples is approximately 7.3 million multivariate samples. However, failures of measurement equipment for some of the batteries as well as other anomalies lead to a lower number of labeled and usable samples [25].

For the NASA Turbofan data set [51], 1,416 virtual turbofan engines were run to failure under six different operating conditions characterized by altitude, throttle resolver angle and speed using the so-called Commercial Modular Aero-Propulsion System Simulation (C-MAPSS). 26 different sensor values are recorded as snapshots once per simulated flight, resulting in 265,256 multivariate samples.

The CALCE Battery data set [52] is an extensive collection of run-to-failure experiments on different types of batteries. The only study included in this survey utilizing some of this data is [45]; we will therefore focus on the "CS2" data set used there. It consists of data from thirteen lithium-ion batteries run to failure under six different operating conditions characterized by different discharge currents and cut-off voltages. Six different electric sensor values are recorded approximately every 30 seconds, resulting in 8.4 million multivariate samples. Two more lithium-ion batteries were also measured; however, their measured characteristics differ from the others and they are therefore excluded from this overview.

For the FEMTO-ST Bearing data set [53], seventeen bearings were run to failure under three different operating conditions characterized by different rotating speeds and radial forces. Horizontal and vertical acceleration signals are recorded at 25.6 kHz for 0.1 seconds every 10 seconds and (for only thirteen specimens) temperature signals at 10 Hz, resulting in approximately 63.7 million multivariate samples.
After a multi-year period without the release of major new data sets used in transfer learning studies, recent years finally brought a new wave of larger, more complex data sets:

For the XJTU-SY Bearing data set [22], five heavy-duty LDK UER204 bearings each were run to failure under three different operating conditions characterized by different rotating speeds and radial forces. Horizontal and vertical acceleration signals are recorded at 25.6 kHz for 1.28 seconds of every minute, resulting in approximately 302 million bivariate samples. Usually, only the horizontal acceleration is used for fault prediction.
To the best of our knowledge, the following two newest data sets have not been used in published transfer learning studies yet. However, due to their complexity, they appear highly suitable for transfer learning evaluation scenarios and should therefore be brought to the community's attention:

The NASA Turbofan 2 data set [54] features a high number of non-trivial data dimensions and thereby provides a highly complex problem scenario. For this data set, nine virtual turbofan engines were run to failure using C-MAPSS parametrized by real flight data. Seven engines had similar operating conditions, whereas two had individual operating conditions. 45 different sensor values are recorded at 1 Hz, resulting in 6.5 million multivariate samples.

The US Relays data set [55] features a high number of operating conditions, specimens and samples and thereby provides a highly complex problem scenario. For this data set, 100 electromechanical relays of 5 different types were run to failure under 22 different operating conditions characterized by different supply voltages and load resistances. For each switching cycle, a number of static sensor values as well as voltage time series were recorded. Therefore, there are two levels of multivariate time series contained in this data set. The total number of samples is approximately 162.5 million multivariate samples.
TABLE 3. Overview of fault prognostics data sets used in deep industrial transfer learning publications

Source | Data Set Name | Data Type(s) | No. of Operating Conditions | No. of Specimens | No. of Samples*
Agogino et al. (2007) [48] | NASA Milling | Multivariate Time Series | 8 | 16 | 1,503,000
Lee et al. (2007) [49] | NASA Bearing | Multivariate** Time Series | 3 | 12 | 15,540,224
Saxena et al. (2007) [50] | NASA Battery | Multivariate Time Series | 34 | 34 | 7,282,946
Saxena et al. (2008) [51] | NASA Turbofan | Multivariate Time Series | 6 | 1,416 | 265,256
CALCE (2011) [52] | CALCE Battery | Multivariate Time Series | 6 | 13 | 8,438,937
Nectoux et al. (2012) [53] | FEMTO-ST Bearing | Multivariate Time Series | 3 | 17 | 63,718,828
Wang et al. (2020) [22] | XJTU-SY Bearing | Bivariate Time Series | 3 | 15 | 302,000,000
Arias Chao et al. (2021) [54] | NASA Turbofan 2 | Multivariate Time Series | 3 | 9 | 6,500,000
Maschler et al. (2022) [55] | US Relays | Multivariate Time Series | 22 | 100 | 162,500,000

* Samples are defined as individual data tuples each representing a different measurement time or a different measurement object than any other data tuple. If the data set differentiates between test, validation or training data, this differentiation is ignored here and the maximum number of unique samples is considered.
** The time series are multivariate for sets of four bearings and bi- respectively univariate for individual bearings.
5. DISCUSSION
In this chapter, the results of the systematic literature review are discussed and further analyzed. First, learnings and best practices are derived from the original research publications in order to consolidate the current state of research in this field. Then, open-access fault prognostics data sets are examined and suitable ones for benchmarking are identified.

5.1 APPROACHES
The approaches presented all belong to the transfer learning solution categories of feature representation and parameter transfer and to the regularization strategies of continual learning, which, however, methodically represent a form of parameter transfer (see chapter 2.1). Various deep learning methods are utilized, from simple FCNNs and RNNs of various forms to autoencoders or complex, pre-trained CNNs such as AlexNet or ResNet.

As input data type, primarily time series data is used, which is typical for industrial applications. Only occasionally are features, images or meta data used directly [25, 43, 45, 47]. Only some of the presented approaches process multivariate time series [27, 32, 33, 37, 42, 44], and no approach uses different types of data in parallel. The complexity of the scenarios used for evaluation is therefore still low, lacking evidence for the applicability of the presented approaches to more diverse and dynamic real-life scenarios.

Most of the publications presented use only a single data set for evaluation. Only [42–45] evaluate their algorithms on different, similar data sets, and no study uses evaluation data sets with different data types. Thus, statements about the transferability of the approaches presented are based upon only thin evidence; ready-to-use architectures or frameworks are not available yet.
Only two of the publications presented address SoH class estimation [25, 27] for fault prognostics, which makes it a fringe approach. Direct RUL prediction is much more prominently represented, marking it as the default problem category in the field of fault prognostics.

The results of some publications are not documented well enough to allow a clear evaluation of the presented approaches' performance. This may be due to a lack of comparative values [34, 41, 47] or their insufficient documentation [40]. [43] appears unfinished due to its generic input data treatment, which is not adapted to the use case. In general, although six studies make use of the NASA Turbofan data set and five utilize the FEMTO-ST Bearing data set and thereby allow benchmarking to some degree, the field would benefit from ubiquitous comparability based upon a common set of benchmarking algorithms and well-documented methodologies as well as evaluation results. This would greatly ease the identification of generalizable best practices and thereby speed up scientific progress [1, 4].

Future research should build upon the following findings: The appropriate selection of source data to be used in a transfer increases said transfer's performance [30]. Furthermore, [42] shows that a separate treatment of different sample clusters, e.g. of previously known sub-scenarios, increases a transfer's performance as well. Regularization approaches, however, do not appear suitable due to their complex dependencies [25, 27]. Regarding the use of vibration data, utilizing preprocessed data (e.g. FFT) improves results compared to utilizing raw data.
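As an illustration of such preprocessing, the following minimal sketch converts a raw vibration window into log-magnitude spectrum features; the window length and random test signal are illustrative assumptions, not taken from any specific surveyed study:

```python
import numpy as np

def fft_features(window: np.ndarray) -> np.ndarray:
    """Return the log-scaled one-sided magnitude spectrum of a 1-D vibration window."""
    spectrum = np.abs(np.fft.rfft(window))  # one-sided magnitude spectrum
    return np.log1p(spectrum)               # log scaling compresses the dynamic range

rng = np.random.default_rng(0)
raw = rng.standard_normal(2560)             # e.g. one 0.1 s snapshot at 25.6 kHz
features = fft_features(raw)
print(features.shape)                       # (1281,) = 2560 // 2 + 1 frequency bins
```

Such spectral features are then fed to the network in place of the raw waveform; the design choice trades temporal resolution for a representation in which bearing fault frequencies are directly visible.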
5.2 DATA SETS
The data sets used in the presented publications are predominantly open-access, with only four exceptions in [4, 43, 44, 47]. Due to their low accessibility, proprietary data sets are obviously unsuitable for benchmarking purposes and should be accompanied by the use of open-access data sets in any high-impact publication.

So far, the NASA Ames Research Center has provided most of the open-access data sets used for research in the field of industrial transfer learning for fault prognostics – notably, the FEMTO-ST data set is made available through their repository as well. The most widely used data sets were promoted by the Prognostics and Health Management (PHM) challenges of 2008 [51] and 2012 [53]. It is plausible that [54] will experience a similar effect through the PHM challenge of 2021. Contests like the PHM challenges facilitate comparability of approaches and results by forcing the participants to use the same data sets for evaluation. Furthermore, institutional repositories such as NASA Ames' data repository increase the chance of the contained data sets' long-term availability compared to private or individual university chairs' websites. Both aspects, widespread usage and long-term availability, are qualities required of a benchmark data set.

Apart from those meta-criteria, a benchmark data set should reflect the challenges typical for the respective application or usage scenario. In this case, it should therefore be heterogeneous and dynamic, ideally consisting of (many) different data dimensions, operating conditions and specimens while providing a high number of samples to train and evaluate algorithms on. Even if the sub-problem to be solved by an algorithm does not in itself require the full complexity, using a very complex data set allows comparability with a wider array of other publications and should therefore be preferred over a data set of lesser complexity. These criteria are best met by the two newest data sets presented in [54, 55].
6. CONCLUSION
Fault prognostics is of great importance, e.g. in reducing downtime in manufacturing, harm caused by failures, or wastage caused by premature maintenance, and deep-learning-based approaches show promising results in research environments. However, in order to be applicable to the heterogeneous and dynamic nature of real-world industrial scenarios, deep industrial transfer learning capabilities are required to lower the effort of data collection and algorithm adaptation to new (sub-)problems.

This article introduces transfer learning's main approaches of feature representation and parameter transfer as well as continual learning's regularization strategy. It then describes the basics of fault prognostics regarding different approaches and different objectives. The systematic literature review presents a comprehensive overview of the approaches published on the topic of fault prognostics by deep industrial transfer learning. Despite the diverse array of approaches, results and applications, there is a lack of comparability. Still, some best practices, e.g. regarding source data selection and handling, can be identified. An ensuing review of open-access data sets for fault prognostics by deep industrial transfer learning underlines the availability of a variety of such data sets. However, only some of them provide the complexity necessary to allow the full range of transfer scenarios – and only those are fully suitable for benchmarking purposes. Fortunately, after a few years without new data sets, there have recently been notable publications.

Thereby, this article provides a range of state-of-the-art examples and analyses for anyone considering entering the field of fault prognostics by deep industrial transfer learning.
REFERENCES
[1] S. Khan and T. Yairi, "A review on the application of deep learning in system health management," Mechanical Systems and Signal Processing, vol. 107, pp. 241–265, 2018, doi: 10.1016/j.ymssp.2017.11.024.
[2] Y. Wang, Y. Zhao, and S. Addepalli, "Remaining Useful Life Prediction using Deep Learning Approaches: A Review," Procedia Manufacturing, vol. 49, pp. 81–88, 2020, doi: 10.1016/j.promfg.2020.06.015.
[3] G. Xu et al., "Data-Driven Fault Diagnostics and Prognostics for Predictive Maintenance: A Brief Overview," in 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), Vancouver, Canada, 2019, pp. 103–108.
[4] B. Maschler, H. Vietz, H. Tercan, C. Bitter, T. Meisen, and M. Weyrich, "Insights and Example Use Cases on Industrial Transfer Learning," Procedia CIRP, vol. 107, pp. 511–516, 2022, doi: 10.1016/j.procir.2022.05.017.
[5] J. Krauß, M. Frye, G. T. D. Beck, and R. H. Schmitt, "Selection and Application of Machine Learning Algorithms in Production Quality," in Technologien für die intelligente Automation, Machine Learning for Cyber Physical Systems, J. Beyerer, C. Kühnert, and O. Niggemann, Eds., Berlin, Heidelberg, Deutschland: Springer Berlin Heidelberg, 2019, pp. 46–57.
[6] T. Bernard, C. Kühnert, and E. Campbell, "Web-based Machine Learning Platform for Condition-Monitoring," in Technologien für die intelligente Automation, Machine Learning for Cyber Physical Systems, J. Beyerer, C. Kühnert, and O. Niggemann, Eds., Berlin, Heidelberg, Deutschland: Springer Berlin Heidelberg, 2019, pp. 36–45.
[7] J. Wang, Y. Ma, L. Zhang, R. X. Gao, and D. Wu, "Deep learning for smart manufacturing: Methods and applications," Journal of Manufacturing Systems, vol. 48, pp. 144–156, 2018, doi: 10.1016/j.jmsy.2018.01.003.
[8] B. Maschler and M. Weyrich, "Deep Transfer Learning for Industrial Automation," IEEE Industrial Electronics Magazine, vol. 15, no. 2, pp. 65–75, 2021, doi: 10.1109/MIE.2020.3034884.
[9] S. J. Pan and Q. Yang, "A Survey on Transfer Learning," IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2010, doi: 10.1109/TKDE.2009.191.
[10] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, "Continual lifelong learning with neural networks: A review," Neural Networks, vol. 113, pp. 54–71, 2019, doi: 10.1016/j.neunet.2019.01.012.
[11] H. Tercan, A. Guajardo, and T. Meisen, "Industrial Transfer Learning: Boosting Machine Learning in Production," in 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), Helsinki, Finland, 2019, pp. 274–279.
[12] K. Weiss, T. M. Khoshgoftaar, and D. Wang, "A survey of transfer learning," J Big Data, vol. 3, no. 1, 2016, doi: 10.1186/s40537-016-0043-6.
[13] B. Lindemann, B. Maschler, N. Sahlab, and M. Weyrich, "A Survey on Anomaly Detection for Technical Systems using LSTM Networks," Computers in Industry, vol. 131, p. 103498, 2021, doi: 10.1016/j.compind.2021.103498.
[14] M. A. Morid, A. Borjali, and G. Del Fiol, "A scoping review of transfer learning research on medical image analysis using ImageNet," Computers in Biology and Medicine, vol. 128, p. 104115, 2021, doi: 10.1016/j.compbiomed.2020.104115.
[15] H. Tercan, P. Deibert, and T. Meisen, "Continual learning of neural networks for quality prediction in production using memory aware synapses and weight transfer," J Intell Manuf, 2021, doi: 10.1007/s10845-021-01793-0.
[16] D. An, N. H. Kim, and J.-H. Choi, "Practical options for selecting data-driven or physics-based prognostics algorithms with reviews," Reliability Engineering & System Safety, vol. 133, pp. 223–236, 2015, doi: 10.1016/j.ress.2014.09.014.
[17] H. M. Elattar, H. K. Elminir, and A. M. Riad, "Prognostics: a literature review," Complex Intell. Syst., vol. 2, no. 2, pp. 125–154, 2016, doi: 10.1007/s40747-016-0019-3.
[18] C. Okoh, R. Roy, J. Mehnen, and L. Redding, "Overview of Remaining Useful Life Prediction Techniques in Through-life Engineering Services," Procedia CIRP, vol. 16, pp. 158–163, 2014, doi: 10.1016/j.procir.2014.02.006.
[19] M. G. Pecht and M. Kang, Eds., Prognostics and Health Management of Electronics. Chichester, UK: John Wiley and Sons Ltd, 2018. [Online]. Available: https://onlinelibrary.wiley.com/doi/book/10.1002/9781119515326
[20] H. Tian, P. Qin, K. Li, and Z. Zhao, "A review of the state of health for lithium-ion batteries: Research status and suggestions," Journal of Cleaner Production, vol. 261, p. 120813, 2020, doi: 10.1016/j.jclepro.2020.120813.
[21] B. Lindemann, T. Müller, H. Vietz, N. Jazdi, and M. Weyrich, "A survey on long short-term memory networks for time series prediction," Procedia CIRP, vol. 99, pp. 650–655, 2021, doi: 10.1016/j.procir.2021.03.088.
[22] B. Wang, Y. Lei, N. Li, and N. Li, "A Hybrid Prognostics Approach for Estimating Remaining Useful Life of Rolling Element Bearings," IEEE Trans. Rel., vol. 69, no. 1, pp. 401–412, 2020, doi: 10.1109/TR.2018.2882682.
[23] F. von Bülow, J. Mentz, and T. Meisen, "State of health forecasting of Lithium-ion batteries applicable in real-world operational conditions," Journal of Energy Storage, vol. 44, p. 103439, 2021, doi: 10.1016/j.est.2021.103439.
[24] A. Bonfitto, "A Method for the Combined Estimation of Battery State of Charge and State of Health Based on Artificial Neural Networks," Energies, vol. 13, no. 10, p. 2548, 2020, doi: 10.3390/en13102548.
[25] B. Maschler, S. Tatiyosyan, and M. Weyrich, "Regularization-based Continual Learning for Fault Prediction in Lithium-Ion Batteries," in 2021 15th CIRP Conference on Intelligent Computation in Manufacturing Engineering (ICME), 2021, pp. 1–6.
[26] L. Kirschbaum, D. Roman, V. Robu, and D. Flynn, "Deep Learning Pipeline for State-of-Health Classification of Electromagnetic Relays," in 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE), Kyoto, Japan, 2021, pp. 1–7.
[27] B. Maschler, H. Vietz, N. Jazdi, and M. Weyrich, "Continual Learning of Fault Prediction for Turbofan Engines using Deep Learning with Elastic Weight Consolidation," in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 2020, pp. 959–966.
[28] B. Lindemann, N. Jazdi, and M. Weyrich, "Anomaly detection and prediction in discrete manufacturing based on cooperative LSTM networks," in 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 2020, pp. 1003–1010.
[29] J. Li, J. Lu, C. Chen, J. Ma, and X. Liao, "Tool wear state prediction based on feature-based transfer learning," Int J Adv Manuf Technol, vol. 113, no. 11–12, pp. 3283–3301, 2021, doi: 10.1007/s00170-021-06780-6.
[30] Y. Cao, M. Jia, P. Ding, and Y. Ding, "Transfer learning for remaining useful life prediction of multi-conditions bearings based on bidirectional-GRU network," Measurement, vol. 178, p. 109287, 2021, doi: 10.1016/j.measurement.2021.109287.
[31] N. Beganovic and D. Söffker, "Remaining lifetime modeling using State-of-Health estimation," Mechanical Systems and Signal Processing, vol. 92, pp. 107–123, 2017, doi: 10.1016/j.ymssp.2017.01.031.
[32] M. Ragab, Z. Chen, M. Wu, C. K. Kwoh, and X. Li, "Adversarial Transfer Learning for Machine Remaining Useful Life Prediction," in 2020 IEEE International Conference on Prognostics and Health Management (ICPHM), Detroit, USA, 2020, pp. 1–7.
[33] P. R. d. O. Da Costa, A. Akçay, Y. Zhang, and U. Kaymak, "Remaining useful lifetime prediction via deep domain adaptation," Reliability Engineering & System Safety, vol. 195, p. 106682, 2020, doi: 10.1016/j.ress.2019.106682.
[34] M. Russell and P. Wang, "Domain Adversarial Transfer Learning for Generalized Tool Wear Prediction," in Proceedings of the 12th Annual Conference of the PHM Society, 2020, p. 8.
[35] C. Sun, M. Ma, Z. Zhao, S. Tian, R. Yan, and X. Chen, "Deep Transfer Learning Based on Sparse Autoencoder for Remaining Useful Life Prediction of Tool in Manufacturing," IEEE Trans. Ind. Inf., vol. 15, no. 4, pp. 2416–2425, 2019, doi: 10.1109/TII.2018.2881543.
[36] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, "Domain adaptation via transfer component analysis," IEEE Transactions on Neural Networks, vol. 22, no. 2, pp. 199–210, 2011, doi: 10.1109/TNN.2010.2091281.
[37] A. Zhang et al., "Transfer Learning with Deep Recurrent Neural Networks for Remaining Useful Life Estimation," Applied Sciences, vol. 8, no. 12, p. 2416, 2018, doi: 10.3390/app8122416.
[38] H. Cheng, X. Kong, G. Chen, Q. Wang, and R. Wang, "Transferable convolutional neural network based remaining useful life prediction of bearing under multiple failure behaviors," Measurement, vol. 168, p. 108286, 2021, doi: 10.1016/j.measurement.2020.108286.
[39] H. Cheng, X. Kong, Q. Wang, H. Ma, S. Yang, and G. Chen, "Deep transfer learning based on dynamic domain adaptation for remaining useful life prediction under different working conditions," J Intell Manuf, 2021, doi: 10.1007/s10845-021-01814-y.
[40] Y. Ding, M. Jia, Q. Miao, and P. Huang, "Remaining useful life estimation using deep metric transfer learning for kernel regression," Reliability Engineering & System Safety, vol. 212, p. 107583, 2021, doi: 10.1016/j.ress.2021.107583.
[41] F. Zeng, Y. Li, Y. Jiang, and G. Song, "An online transfer learning-based remaining useful life prediction method of ball bearings," Measurement, vol. 176, p. 109201, 2021, doi: 10.1016/j.measurement.2021.109201.
[42] W. Zhang, X. Li, H. Ma, Z. Luo, and X. Li, "Transfer learning using deep representation regularization in remaining useful life prediction across operating conditions," Reliability Engineering & System Safety, vol. 211, p. 107556, 2021, doi: 10.1016/j.ress.2021.107556.
[43] H. Zhang, Q. Zhang, S. Shao, T. Niu, X. Yang, and H. Ding, "Sequential Network with Residual Neural Network for Rotatory Machine Remaining Useful Life Prediction Using Deep Transfer Learning," Shock and Vibration, vol. 2020, pp. 1–16, 2020, doi: 10.1155/2020/8888627.
[44] M. Gribbestad, M. U. Hassan, and I. A. Hameed, "Transfer Learning for Prognostics and Health Management (PHM) of Marine Air Compressors," JMSE, vol. 9, no. 1, p. 47, 2021, doi: 10.3390/jmse9010047.
[45] Y. Tan and G. Zhao, "Transfer Learning With Long Short-Term Memory Network for State-of-Health Prediction of Lithium-Ion Batteries," IEEE Trans. Ind. Electron., vol. 67, no. 10, pp. 8723–8731, 2020, doi: 10.1109/TIE.2019.2946551.
[46] N. Tosun, "Determination of optimum parameters for multi-performance characteristics in drilling by using grey relational analysis," Int J Adv Manuf Technol, vol. 28, no. 5–6, pp. 450–455, 2006, doi: 10.1007/s00170-004-2386-y.
[47] M. Marei, S. E. Zaatari, and W. Li, "Transfer learning enabled convolutional neural networks for estimating health state of cutting tools," Robotics and Computer-Integrated Manufacturing, vol. 71, p. 102145, 2021, doi: 10.1016/j.rcim.2021.102145.
[48] A. Agogino and K. Goebel, Milling Data Set. [Online]. Available: http://ti.arc.nasa.gov/project/prognostic-data-repository (accessed: Aug. 3 2021).
[49] J. Lee, H. Qiu, G. Yu, J. Lin, and Rexnord Technical Services, Bearing Data Set. [Online]. Available: http://ti.arc.nasa.gov/project/prognostic-data-repository (accessed: Feb. 21 2022).
[50] A. Saxena and K. Goebel, Battery Data Set. [Online]. Available: http://ti.arc.nasa.gov/project/prognostic-data-repository (accessed: Feb. 21 2022).
[51] A. Saxena and K. Goebel, Turbofan Engine Degradation Simulation Data Set. [Online]. Available: http://ti.arc.nasa.gov/project/prognostic-data-repository (accessed: Feb. 21 2022).
[52] Center of Advanced Life Cycle Engineering (CALCE), CX2 Battery Data Set. [Online]. Available: https://web.calce.umd.edu/batteries/data.htm (accessed: Feb. 21 2022).
[53] P. Nectoux et al., "PRONOSTIA: An experimental platform for bearings accelerated degradation tests," in IEEE International Conference on Prognostics and Health Management (PHM '12), Denver, USA, 2012, pp. 1–8.
[54] M. Arias Chao, C. Kulkarni, K. Goebel, and O. Fink, "Aircraft Engine Run-to-Failure Dataset under Real Flight Conditions for Prognostics and Diagnostics," Data, vol. 6, no. 1, p. 5, 2021, doi: 10.3390/data6010005.
[55] B. Maschler, A. Iliev, T. T. H. Pham, and M. Weyrich, "Stuttgart Open Relay Degradation Dataset (SOReDD)," University of Stuttgart, 2022. Accessed: Apr. 24 2022. [Online]. Available: http://doi.org/10.18419/darus-2785
A real neural network state for quantum chemistry

Yangjun Wu,1 Xiansong Xu,2,3 Dario Poletti,2,4 Yi Fan,5 Chu Guo,6,7,* and Honghui Shang1,†

1 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
2 Science, Mathematics and Technology Cluster, Singapore University of Technology and Design, 8 Somapah Road, 487372 Singapore
3 College of Physics and Electronic Engineering, and Center for Computational Sciences, Sichuan Normal University, Chengdu 610068, China
4 EPD Pillar, Singapore University of Technology and Design, 8 Somapah Road, 487372 Singapore
5 University of Science and Technology of China, Hefei, China
6 Henan Key Laboratory of Quantum Information and Cryptography, Zhengzhou, Henan 450000, China
7 Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Department of Physics and Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China
The restricted Boltzmann machine (RBM) has been successfully applied to solve the many-electron Schrödinger equation. In this work we propose a single-layer fully connected neural network adapted from the RBM and apply it to study ab initio quantum chemistry problems. Our contribution is two-fold: 1) our neural network uses only real numbers to represent the real electronic wave function, while obtaining precision comparable to that of RBM for various prototypical molecules; 2) we show that knowledge of the Hartree-Fock reference state can be used to systematically accelerate the convergence of the variational Monte Carlo algorithm as well as to increase the precision of the final energy.
I. INTRODUCTION
Ab initio electronic structure calculations based on quantum-chemical approaches (Hartree-Fock theory and post-Hartree-Fock methods) have been successfully applied to molecular systems [1]. For strongly correlated many-electron systems, the exponentially growing Hilbert space size limits the application scale of most numerical algorithms. For example, full configuration interaction (FCI), which takes the whole Hilbert space into account, is currently limited to around 24 orbitals and 24 electrons [2]. The density matrix renormalization group (DMRG) algorithm [3, 4] has been used to solve larger chemical systems of several tens of electrons [5, 6]; however, it is essentially limited by the expressive power of its underlying variational ansatz, the matrix product state (MPS), which is a special instance of the one-dimensional tensor network state [7], and it can therefore be extremely difficult for DMRG to reach even larger systems. The coupled cluster (CC) method [8, 9] expresses the exact wave function in terms of an exponential form of a variational wave function ansatz, and higher levels of accuracy can be obtained by considering electronic excitations up to doubles in CCSD or triples in CCSD(T). In practice it is often accurate at an affordable computational cost, and is thus considered the "gold standard" of electronic structure calculations. However, the CC method is accurate only for weakly correlated systems [10]. The multi-configuration self-consistent field (MCSCF) method [11-13] is crucial for describing molecular systems containing nearly degenerate orbitals. It introduces a small number of (active) orbitals, and then the configuration interaction coefficients and the orbital coefficients are optimized to minimize the total energy of the MCSCF state. It has been applied to systems with around 50 active orbitals [14], but such methods are still limited by the exponential complexity that grows with the system size.

* guochu604b@gmail.com
† shanghonghui@ict.ac.cn
In recent years the variational Monte Carlo (VMC) method, in combination with a neural network ansatz for the underlying quantum state (wave function) [15], referred to as neural network quantum states (NNQS), has been demonstrated to be a scalable and accurate tool for many-spin systems [16-18] and many-fermion systems [19]. NNQS allows very flexible choices of the neural network ansatz and, with an appropriate variational ansatz, can often achieve accuracy comparable to or higher than that of existing methods. NNQS has also been applied to solve ab initio quantum chemistry systems in real space with up to 30 electrons [20-22], as well as in a discrete basis after second quantization [23-25]. Up to now various neural networks have been used, such as the restricted Boltzmann machine (RBM) [15], convolutional neural networks [16], recurrent neural networks [26] and variational auto-encoders [25]. Among all those neural networks, the RBM is a very special instance in that: 1) it has a very simple structure which contains only a fully connected dense layer plus a nonlinear activation; 2) with such a simple structure, the RBM can be more expressive than MPS [27]; in fact it is equivalent to certain two-dimensional tensor network states [28], and can even represent certain quantum states with volume-law entanglement [29]. In practice the RBM achieves accuracy comparable to other more sophisticated neural networks for complicated applications such as frustrated many-spin systems [30, 31].

For the ground state of molecular systems, the wave function is real. However, if one uses a real RBM as the variational ansatz for the wave function, then all the amplitudes of the wave function will be positive, which means that it may be good for ferromagnetic states but will be completely wrong for anti-ferromagnetic states. Therefore, even for real wave functions one would in general have to use complex RBMs or two RBMs [32]. In this work we propose a neural network with real numbers which is slightly modified from the RBM, such that its output can be both positive and negative, and we use it as the neural network ansatz to solve quantum chemistry problems. To accelerate convergence of the VMC iterations, we explicitly use the Hartree-Fock reference state as the starting point for the Monte Carlo sampling after a number of VMC iterations, once the wave function ansatz has become sufficiently close to the ground state. We show that this technique can generally improve the convergence and the precision of the final result, even when using other neural networks. Our paper is organized as follows. In Sec. II we present our neural network ansatz. In Sec. III we present numerical results demonstrating the effectiveness of our neural network ansatz and of the technique of initializing the Monte Carlo sampling with the Hartree-Fock reference state. We conclude in Sec. IV.

arXiv:2301.03755v1 [quant-ph] 10 Jan 2023
II. METHODS

A. Real neural network ansatz
Before we introduce our model we first briefly review the RBM used in NNQS. For a classical many-spin system, one can embed the system into a larger one consisting of visible spins (corresponding to the system) and hidden spins, with the total (classical) Hamiltonian

H = \sum_{j=1}^{N_v} a_j x_j + \sum_{i=1}^{N_h} b_i h_i + \sum_{i,j} W_{ij} h_i x_j,   (1)

where x_j represents the visible spins and h_i the hidden spins. N_v and N_h are the numbers of visible and hidden spins respectively. The coefficients \theta = \{a, b, W\} are variational parameters of the Hamiltonian. Since there is no coupling between the hidden spins, one can explicitly integrate them out and obtain the partition function of the system Z as

Z = \sum_x p(x),   (2)

with x = \{x_1, x_2, \ldots, x_{N_v}\} a particular configuration and p(x) the unnormalized probability (in the case of real coefficients) of x, which can be written explicitly as

p(x) = \sum_h e^{H} = e^{\sum_{j=1}^{N_v} a_j x_j} \times \prod_{i=1}^{N_h} 2\cosh\Big(b_i + \sum_{j=1}^{N_v} W_{ij} x_j\Big).   (3)
When using the RBM as a variational ansatz for the wave function of a quantum many-spin system, p(x) is interpreted as the amplitude (instead of the probability) of the configuration x. Eq. (3) can be seen as a single-layer fully connected neural network which accepts a configuration (a vector of integers) as input and outputs a scalar. For real coefficients, the output will always be positive by definition, so one generally has to use complex coefficients even for real wave functions. In this work, we slightly change Eq. (3) as follows, so as to be able to output any real number with a real neural network:

p(x) = \tanh\Big(\sum_{j=1}^{N_v} a_j x_j\Big) \times \prod_{i=1}^{N_h} 2\cosh\Big(b_i + \sum_{j=1}^{N_v} W_{ij} x_j\Big).   (4)

FIG. 1. The architectures for (a) our tanh-FCN and (b) RBM. The major difference is that we use the hyperbolic tangent as the activation function, such that tanh-FCN can output both positive and negative numbers even though it uses only real numbers.

In the following we write p(x) as \Psi_\theta(x) to stress its dependence on the variational parameters and the fact that it is interpreted as a wave function instead of a probability distribution; we also refer to our neural network in Eq. (4) as tanh-FCN, since it contains a fully connected layer followed by the hyperbolic tangent as the activation function. The difference between RBM and tanh-FCN is illustrated in Fig. 1.
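The contrast between Eq. (3) and Eq. (4) can be made concrete in a few lines. The following is a minimal NumPy sketch (our own illustration, not the authors' implementation); the array shapes and parameter values are illustrative assumptions. It shows the key point of the text: with real parameters the RBM amplitude is strictly positive, while the tanh prefactor lets tanh-FCN change sign.

```python
import numpy as np

def rbm_amplitude(x, a, b, W):
    """Eq. (3): with real parameters this is strictly positive.
    x: (Nv,) spin configuration, a: (Nv,), b: (Nh,), W: (Nh, Nv)."""
    return np.exp(a @ x) * np.prod(2.0 * np.cosh(b + W @ x))

def tanh_fcn_amplitude(x, a, b, W):
    """Eq. (4): the tanh prefactor lets a purely real network change sign."""
    return np.tanh(a @ x) * np.prod(2.0 * np.cosh(b + W @ x))
```

For example, flipping all visible spins flips the sign of the tanh argument, so a real tanh-FCN can assign opposite-sign amplitudes to a configuration and its mirror, which a real RBM cannot.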
B. Variational Monte Carlo

The electronic Hamiltonian \hat{H}_e of a chemical system can be written in second-quantized form:

\hat{H}_e = \sum_{p,q} h^p_q a^\dagger_p a_q + \frac{1}{2} \sum_{p,q,r,s} g^{pq}_{rs} a^\dagger_p a^\dagger_q a_r a_s,   (5)

where h^p_q and g^{pq}_{rs} are the one- and two-electron integrals in the molecular orbital basis, and a^\dagger_p and a_q in the Hamiltonian are the creation and annihilation operators. To treat the fermionic system, we first use the Jordan-Wigner transformation to map the electronic Hamiltonian to a sum of Pauli operators, following Ref. [23], and then use our tanh-FCN in Eq. (4) as the ansatz for the resulting many-spin system. The resulting spin Hamiltonian \hat{H} can generally be written in the form

\hat{H} = \sum_i c_i \prod_{j=1}^{N} \sigma^{v_{i,j}}_j,   (6)

where N = N_v is the number of spins, c_i is a real coefficient and \sigma^{v_{i,j}}_j is a single-spin Pauli operator acting on the j-th spin (v_{i,j} \in \{0, 1, 2, 3\} and \sigma^0 = I, \sigma^1 = \sigma^x, \sigma^2 = \sigma^y, \sigma^3 = \sigma^z).
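Since each Pauli string in Eq. (6) connects a z-basis configuration x to exactly one configuration x', its matrix elements are cheap to enumerate. Below is a hedged sketch of this bookkeeping (our own illustration, not from the paper; spins are encoded as ±1 and Pauli codes follow Eq. (6): 0 = I, 1 = X, 2 = Y, 3 = Z).

```python
import numpy as np

def apply_pauli_string(x, v):
    """Return (x_prime, element) with element = <x'| prod_j sigma^{v_j}_j |x>.
    x: array of +1/-1 spins; v: list of Pauli codes 0..3 per site."""
    xp = x.copy()
    elem = 1.0 + 0.0j
    for j, p in enumerate(v):
        if p == 1:                 # X flips the spin, element 1
            xp[j] = -xp[j]
        elif p == 2:               # Y flips the spin and picks up +/- i
            elem *= 1j * x[j]
            xp[j] = -xp[j]
        elif p == 3:               # Z is diagonal, element x_j
            elem *= x[j]
    return xp, elem
```

Because the ground-state wave function and the coefficients c_i are real, the imaginary parts contributed by Y operators cancel when all strings are summed.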
Given the wave function ansatz \Psi_\theta(x), the corresponding energy can be computed as

E(\theta) = \frac{\langle\Psi_\theta|\hat{H}|\Psi_\theta\rangle}{\langle\Psi_\theta|\Psi_\theta\rangle} = \frac{\sum_x E_{loc}(x)\,|\Psi_\theta(x)|^2}{\sum_y |\Psi_\theta(y)|^2},   (7)

where the "local energy" E_{loc}(x) for a configuration x is defined as

E_{loc}(x) = \sum_{x'} \frac{\Psi_\theta(x')}{\Psi_\theta(x)} H_{x'x},   (8)

with H_{x'x} = \langle x'|\hat{H}|x\rangle. The VMC algorithm evaluates Eq. (7) approximately using Monte Carlo sampling, namely

\tilde{E}(\theta) = \langle E_{loc}\rangle,   (9)
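Eqs. (7)-(9) can then be estimated as follows. This is an illustrative sketch, not the authors' code: it assumes the spin Hamiltonian of Eq. (6) is given as a list of (coefficient, Pauli-code list) pairs and that `psi` is any callable real amplitude (spins encoded as ±1 is our own convention).

```python
import numpy as np

def _pauli(x, v):
    # <x'| prod_j sigma^{v_j}_j |x>: one connected configuration per string
    xp, e = x.copy(), 1.0 + 0.0j
    for j, p in enumerate(v):
        if p == 1:
            xp[j] = -xp[j]
        elif p == 2:
            e *= 1j * x[j]
            xp[j] = -xp[j]
        elif p == 3:
            e *= x[j]
    return xp, e

def local_energy(x, hamiltonian, psi):
    """Eq. (8): E_loc(x) = sum_{x'} psi(x')/psi(x) * <x'|H|x>."""
    e = 0.0 + 0.0j
    for c, v in hamiltonian:          # H = sum_i c_i * Pauli string v_i, Eq. (6)
        xp, elem = _pauli(x, v)
        e += c * elem * psi(xp) / psi(x)
    return e

def energy_estimate(samples, hamiltonian, psi):
    """Eq. (9): average E_loc over configurations drawn from |psi|^2."""
    return np.mean([local_energy(x, hamiltonian, psi) for x in samples]).real
```

Only the strings whose off-diagonal part connects to x contribute a wave-function ratio, so the cost per sample scales with the number of terms in Eq. (6), not with the Hilbert space size.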
+
where the average is over a set of samples {x1, x2, . . . , xNs}
|
| 269 |
+
(Ns is the total number of samples), generated from the proba-
|
| 270 |
+
bility distribution |Ψθ(x)|2. ˜E(θ) will converge to E(θ) if Ns
|
| 271 |
+
is large enough. In this work we use the Metropolis-Hastings
|
| 272 |
+
sampling algorithm to generate samples [33]. A configura-
|
| 273 |
+
tion x is updated using the SWAP operation between nearest-
|
| 274 |
+
neighbour pairs of spins to preserve the electron-number con-
|
| 275 |
+
servation. We also use the natural gradient of Eq.(9) for the
|
| 276 |
+
stochastic gradient descent algorithm in VMC, namely the pa-
|
| 277 |
+
rameters are updated as
|
| 278 |
+
θk+1 = θk − αS−1F,
|
| 279 |
+
(10)
|
| 280 |
+
where k is the number of iterations, α is the learning rate (α is
|
| 281 |
+
dependent on k in general), S is the stochastic reconfiguration
|
| 282 |
+
matrix [34, 35] and F is the gradient of Eq.(9). Concretely, S
|
| 283 |
+
and F are computed by
|
| 284 |
+
Sij(k) = ⟨O∗
|
| 285 |
+
i Oj⟩ − ⟨O∗
|
| 286 |
+
i ⟩⟨Oj⟩,
|
| 287 |
+
(11)
|
| 288 |
+
and
|
| 289 |
+
Fi(k) = ⟨ElocO∗
|
| 290 |
+
i ⟩ − ⟨Eloc⟩⟨O∗
|
| 291 |
+
i ⟩
|
| 292 |
+
(12)
|
| 293 |
+
respectively, with Oi(x) defined as
|
| 294 |
+
Oi(x) =
|
| 295 |
+
1
|
| 296 |
+
Ψθ(x)
|
| 297 |
+
∂Ψθ(x)
|
| 298 |
+
∂θi
|
| 299 |
+
.
|
| 300 |
+
(13)
|
| 301 |
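The particle-number-conserving Metropolis-Hastings updates described above can be sketched as follows. This is a simplified single-chain version with our own interface (function names and defaults are assumptions); it assumes a real, nonzero amplitude `psi`, and acceptance uses the standard ratio |psi(x')/psi(x)|^2.

```python
import numpy as np

def metropolis_swap_chain(psi, x0, n_samples, n_thermal=100, thin=1, rng=None):
    """Metropolis-Hastings with nearest-neighbour SWAP moves; a SWAP never
    changes the number of occupied orbitals, so electron number is conserved."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0.copy()
    samples = []
    n_steps = n_thermal + n_samples * thin
    for step in range(n_steps):
        j = rng.integers(len(x) - 1)             # pick a nearest-neighbour pair
        xp = x.copy()
        xp[j], xp[j + 1] = xp[j + 1], xp[j]      # SWAP keeps the occupation count
        ratio = psi(xp) / psi(x)
        if rng.random() < min(1.0, ratio * ratio):   # accept with |ratio|^2
            x = xp
        if step >= n_thermal and (step - n_thermal) % thin == 0:
            samples.append(x.copy())
    return samples
```

The `n_thermal` and `thin` arguments play the roles of the thermalization and autocorrelation-thinning steps discussed in the Results section.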
+
In general S can be non-invertible, and a simple regulariza-
|
| 302 |
+
tion is to add a small shift to the diagonals of S, namely using
|
| 303 |
+
Sreg = S + ϵI instead of S in Eq.(10), with ϵ a small num-
|
| 304 |
+
ber. The calculation of S can become the bottleneck in case
|
| 305 |
+
the number of parameters is too large. This issue could be
|
| 306 |
+
leveraged by representing S as a matrix function instead of
|
| 307 |
+
building it explicitly [36], or by freezing a large portion of
|
| 308 |
+
S during each iteration similar to DMRG [37]. Here this is
|
| 309 |
+
not a significant concern, because we use at most about 1000
|
| 310 |
+
parameters to specify the network. To further enhance the sta-
|
| 311 |
+
bility of the algorithm, we add the contribution of an L2 reg-
|
| 312 |
+
ularization term when evaluating the gradient in Eq.(10), that
|
| 313 |
+
is, instead of directly choosing F as the gradient of ˜E(θ), F is
|
| 314 |
+
chosen as the gradient of the function ˜E(θ) + λ||θ||2 instead
|
| 315 |
+
where || · ||2 means the square of the Euclidean norm. In this
|
| 316 |
+
work we choose ϵ = 0.02 and λ = 10−3 for our numerical
|
| 317 |
+
simulations if not particularly specified.
|
| 318 |
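Putting Eqs. (10)-(13) together with the diagonal shift \epsilon and the L2 term \lambda||\theta||^2, one stochastic-reconfiguration step might look like the following sketch. This is our own vectorized illustration, not the authors' code; the shapes and default values are assumptions.

```python
import numpy as np

def sr_update(theta, O, Eloc, alpha=0.01, eps=0.02, lam=1e-3):
    """One stochastic-reconfiguration step, Eqs. (10)-(12).
    O: (Ns, Np) log-derivatives O_i(x) over the samples, Eq. (13);
    Eloc: (Ns,) local energies; theta: (Np,) parameters."""
    Ns = O.shape[0]
    dO = O - O.mean(axis=0)
    S = dO.conj().T @ dO / Ns                          # Eq. (11), centered covariance
    F = dO.conj().T @ (Eloc - Eloc.mean()) / Ns        # Eq. (12)
    F = F + lam * 2.0 * theta                          # gradient of lam * ||theta||^2
    S_reg = S + eps * np.eye(S.shape[0])               # diagonal-shift regularization
    return theta - alpha * np.linalg.solve(S_reg, F)   # Eq. (10)
```

Solving the linear system instead of inverting S explicitly is the usual numerically safer choice when N_p is small, as it is here (about 1000 parameters).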
+
III.
|
| 319 |
+
RESULTS
|
| 320 |
+
A.
|
| 321 |
+
Training Details
|
| 322 |
+
In this work we use the Adam optimizer [38] for the VMC
|
| 323 |
+
iterations, with an initial learning rate of α = 0.001, and the
|
| 324 |
+
decay rates for the first- and second-moment to be β1 = 0.9,
|
| 325 |
+
20
|
| 326 |
+
30
|
| 327 |
+
40
|
| 328 |
+
50
|
| 329 |
+
60
|
| 330 |
+
70
|
| 331 |
+
80
|
| 332 |
+
Hidden Size
|
| 333 |
+
−107.675
|
| 334 |
+
−107.650
|
| 335 |
+
−107.625
|
| 336 |
+
−107.600
|
| 337 |
+
−107.575
|
| 338 |
+
−107.550
|
| 339 |
+
−107.525
|
| 340 |
+
−107.500
|
| 341 |
+
Energy (Ha)
|
| 342 |
+
tanh-FCN
|
| 343 |
+
Hartree Fock
|
| 344 |
+
CCSD
|
| 345 |
+
FCI
|
| 346 |
+
FIG. 2. Influence of the number of hidden spins in our tanh-FCN on
|
| 347 |
+
the accuracy of the final energy. The N2 molecule in the STO-3G
|
| 348 |
+
basis is used.
|
| 349 |
+
β2 = 0.99 respectively. For the Metropolis-Hastings sam-
|
| 350 |
+
pling, we will use a fixed Ns = 4×104 for our numerical sim-
|
| 351 |
+
ulations if not particularly specified (in principle one should
|
| 352 |
+
use a larger Ns for larger systems, however in this work we
|
| 353 |
+
focus on molecular systems with at most 30 qubits). We will
|
| 354 |
+
also use a thermalization step of Nth = 2 × 104 (namely
|
| 355 |
+
throwing away Nth samples starting from the initial state).
|
| 356 |
+
To avoid auto-correlation between successive samples we will
|
| 357 |
+
only pick one out of every 10Nv samples. In addition, for
|
| 358 |
+
each simulation we run 8 Markov chains, and the energy is
|
| 359 |
+
chosen to be the lowest of them. Since the energy will always
|
| 360 |
+
contain some small fluctuations when Ns is not large enough,
|
| 361 |
+
the final energy is evaluated by averaging over the energies of
|
| 362 |
+
the last 20 VMC iterations.
|
| 363 |
+
B.
|
| 364 |
+
Effect of hidden size
|
| 365 |
+
We first study the effect of Nh which essentially determines
|
| 366 |
+
the number of parameters, thus the expressivity of our tanh-
|
| 367 |
+
FCN (analogously to RBM). The result is shown in Fig. 2
|
| 368 |
+
where we have taken the N2 molecule as an example. We
|
| 369 |
+
can see that by enlarging Nh, the precision of tanh-FCN can
|
| 370 |
+
be systematically improved. With Nh = 4Nv = 80, we can
|
| 371 |
+
already obtain a final energy that is lower than the CCSD re-
|
| 372 |
+
sults.
|
| 373 |
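The measurement protocol of the Training Details (8 Markov chains, energies averaged over the last 20 VMC iterations, lowest chain taken) can be condensed into a small helper. The order of averaging each chain first and then taking the minimum is our reading of the text, so treat this sketch as an assumption rather than the authors' exact procedure.

```python
import numpy as np

def final_energy(chain_energies, n_last=20):
    """chain_energies: (n_chains, n_iters) per-chain energies over VMC iterations.
    Average each chain over its last n_last iterations, then take the lowest."""
    tails = np.asarray(chain_energies)[:, -n_last:]
    return tails.mean(axis=1).min()
```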
+
C.
|
| 374 |
+
Potential Energy Surfaces
|
| 375 |
+
Now we demonstrate the accuracy of our tanh-FCN by
|
| 376 |
+
studying the potential energy surfaces of the two molecules
|
| 377 |
+
H2 and LiH in the STO-3G basis, as shown in Fig. 3. We
|
| 378 |
+
can see that for both molecules under different bond lengths,
|
| 379 |
+
our simulation can reach lower or very close to the chemical
|
| 380 |
+
precision, namely error within 1.6 × 10−3 Hatree (Ha) or 1
|
| 381 |
+
|
| 382 |
+
4
|
| 383 |
+
0.6
|
| 384 |
+
0.8
|
| 385 |
+
1.0
|
| 386 |
+
−1.14
|
| 387 |
+
−1.12
|
| 388 |
+
−1.10
|
| 389 |
+
−1.08
|
| 390 |
+
−1.06
|
| 391 |
+
Energy (Ha)
|
| 392 |
+
(a1)
|
| 393 |
+
H2
|
| 394 |
+
Hartree-Fock
|
| 395 |
+
CCSD
|
| 396 |
+
FCI
|
| 397 |
+
tanh-FCN
|
| 398 |
+
1.25
|
| 399 |
+
1.50
|
| 400 |
+
1.75
|
| 401 |
+
2.00
|
| 402 |
+
2.25
|
| 403 |
+
−7.88
|
| 404 |
+
−7.86
|
| 405 |
+
−7.84
|
| 406 |
+
−7.82
|
| 407 |
+
(b1)
|
| 408 |
+
LiH
|
| 409 |
+
0.6
|
| 410 |
+
0.8
|
| 411 |
+
1.0
|
| 412 |
+
Nuclear separation (˚A)
|
| 413 |
+
10−10
|
| 414 |
+
10−8
|
| 415 |
+
10−6
|
| 416 |
+
10−4
|
| 417 |
+
10−2
|
| 418 |
+
Absolute error (Ha)
|
| 419 |
+
(a2)
|
| 420 |
+
Hartree-Fock
|
| 421 |
+
CCSD
|
| 422 |
+
tanh-FCN
|
| 423 |
+
1.25
|
| 424 |
+
1.50
|
| 425 |
+
1.75
|
| 426 |
+
2.00
|
| 427 |
+
2.25
|
| 428 |
+
Nuclear separation (˚A)
|
| 429 |
+
10−5
|
| 430 |
+
10−4
|
| 431 |
+
10−3
|
| 432 |
+
10−2
|
| 433 |
+
(b2)
|
| 434 |
+
FIG. 3. Potential energy surfaces of (a1) H2 and (b1) LiH. We have
|
| 435 |
+
used Nh/Nv = 2 for H2 and Nh/Nv = 4 for LiH, which are suf-
|
| 436 |
+
ficient for our tanh-FCN to reach chemical precision. We have also
|
| 437 |
+
used Ns = 2 × 104 for both molecules during the training. (a2) and
|
| 438 |
+
(b2) show the absolute error with respect to the FCI energy for H2
|
| 439 |
+
and LiH respectively.
|
| 440 |
+
kcal/mol (CCSD results are extremely accurate for these two
|
| 441 |
+
molecules).
|
| 442 |
+
D.
|
| 443 |
+
Final energies for several molecular systems
|
| 444 |
+
TABLE I. List of molecules and the ground state energies computed
|
| 445 |
+
using RBM, tanh-FCN, CCSD. The FCI energy is also shown as a
|
| 446 |
+
reference. The column Nv shows the number of qubits. We have
|
| 447 |
+
used Nh/Nv = 2 for all the molecules studied.
|
| 448 |
+
Molecule Nv RBM [23]
|
| 449 |
+
tanh-FCN
|
| 450 |
+
CCSD
|
| 451 |
+
FCI
|
| 452 |
+
H2
|
| 453 |
+
4
|
| 454 |
+
−1.1373
|
| 455 |
+
−1.1373
|
| 456 |
+
−1.1373
|
| 457 |
+
−1.1373
|
| 458 |
+
Be
|
| 459 |
+
10
|
| 460 |
+
-
|
| 461 |
+
−14.4033
|
| 462 |
+
−14.4036
|
| 463 |
+
−14.4036
|
| 464 |
+
C
|
| 465 |
+
10
|
| 466 |
+
-
|
| 467 |
+
−37.2184
|
| 468 |
+
−37.1412
|
| 469 |
+
−37.2187
|
| 470 |
+
Li2
|
| 471 |
+
20
|
| 472 |
+
-
|
| 473 |
+
−14.6641
|
| 474 |
+
−14.6665
|
| 475 |
+
−14.6666
|
| 476 |
+
LiH
|
| 477 |
+
12
|
| 478 |
+
−7.8826
|
| 479 |
+
−7.8816
|
| 480 |
+
−7.8828
|
| 481 |
+
−7.8828
|
| 482 |
+
NH3
|
| 483 |
+
16
|
| 484 |
+
−55.5277
|
| 485 |
+
−55.5101
|
| 486 |
+
−55.5279
|
| 487 |
+
−55.5282
|
| 488 |
+
H2O
|
| 489 |
+
14
|
| 490 |
+
−75.0232
|
| 491 |
+
−75.0021
|
| 492 |
+
−75.0231
|
| 493 |
+
−75.0233
|
| 494 |
+
C2
|
| 495 |
+
20
|
| 496 |
+
−74.6892
|
| 497 |
+
−74.6134
|
| 498 |
+
−74.6744
|
| 499 |
+
−74.6908
|
| 500 |
+
N2
|
| 501 |
+
20 −107.6767 −107.622 −107.6716 −107.6774
|
| 502 |
+
CO2
|
| 503 |
+
30
|
| 504 |
+
-
|
| 505 |
+
−185.1247 −184.8927 −185.2761
|
| 506 |
+
We further compare the precision of tanh-FCN with RBM
|
| 507 |
+
and CCSD for several small-scale molecules in STO-3G ba-
|
| 508 |
+
sis, which are shown in Table. I. For these simulations we
|
| 509 |
+
have used Nh/Nv = 2, while the RBM results are taken from
|
| 510 |
+
Ref. [23]. These results show that even with a relatively small
|
| 511 |
+
number of parameters and a real neural network, we can still
|
| 512 |
+
obtain the ground state energies of a wide variety of molecules
|
| 513 |
+
0
|
| 514 |
+
500
|
| 515 |
+
1000
|
| 516 |
+
1500
|
| 517 |
+
2000
|
| 518 |
+
Epoch
|
| 519 |
+
10−2
|
| 520 |
+
10−1
|
| 521 |
+
100
|
| 522 |
+
101
|
| 523 |
+
Absolute error
|
| 524 |
+
(a)
|
| 525 |
+
tanh-FCN
|
| 526 |
+
HF re-initialization
|
| 527 |
+
Random initialization
|
| 528 |
+
0
|
| 529 |
+
500
|
| 530 |
+
1000
|
| 531 |
+
1500
|
| 532 |
+
2000
|
| 533 |
+
Epoch
|
| 534 |
+
10−4
|
| 535 |
+
10−3
|
| 536 |
+
10−2
|
| 537 |
+
10−1
|
| 538 |
+
100
|
| 539 |
+
101
|
| 540 |
+
Absolute error
|
| 541 |
+
(b)
|
| 542 |
+
RBM
|
| 543 |
+
HF re-initialization
|
| 544 |
+
Random initialization
|
| 545 |
+
FIG. 4. Effect of the Hartree-Fock (HF) re-initialization compared
|
| 546 |
+
to random initialization for (a) tanh-FCN and (b) RBM. The H2O
|
| 547 |
+
(STO-3G basis, 14 qubits) molecule is used here. The y-axis is the
|
| 548 |
+
absolute error between the VMC energies and the FCI energy. For
|
| 549 |
+
both methods we start to use the HF re-initialization starting from
|
| 550 |
+
600-th VMC iteration marked by the vertical dashed lines. The other
|
| 551 |
+
parameters used are Ns = 2 × 104, Nh/Nv = 1 and λ = 10−4.
|
| 552 |
+
to very high precision (close to or lower than the CCSD ener-
|
| 553 |
+
gies). In the meantime, we note that the energies obtained us-
|
| 554 |
+
ing tanh-FCN is not as accurate as those obtained using RBM,
|
| 555 |
+
however the computational cost of tanh-FCN is at least two
|
| 556 |
+
times lower than RBM under with the same Nh and we could
|
| 557 |
+
relatively easily study larger systems such as CO2 with 30
|
| 558 |
+
qubits.
|
| 559 |
+
E.
|
| 560 |
+
Effect of Hartree-Fock re-initialization
|
| 561 |
+
There are generally two ingredients which would affect
|
| 562 |
+
the effectiveness of the NNQS algorithm: 1) the expressiv-
|
| 563 |
+
ity of the underlying neural network ansatz and 2) the abil-
|
| 564 |
+
ity to quickly approach the desired parameter regime during
|
| 565 |
+
the VMC iterations. The former is dependent on an intelli-
|
| 566 |
+
gent choice of the neural network ansatz. The effect of the
|
| 567 |
+
latter is more significant for larger systems, and one gener-
|
| 568 |
+
ally needs to use a knowledged starting point such as transfer
|
| 569 |
+
learning [39, 40] for the VMC algorithm to guarantee success.
|
| 570 |
+
For molecular systems it is difficult to explore transfer learn-
|
| 571 |
+
ing since the knowledge for different molecules can hardly
|
| 572 |
+
be shared. However, for molecular systems the Hartree-Fock
|
| 573 |
+
reference state may have a large overlap with the exact ground
|
| 574 |
+
state, and is often used as a first approximation of the ground
|
| 575 |
+
state. Here we show that for quantum chemistry problems
|
| 576 |
+
the ability to reach faster the ground state can be improved
|
| 577 |
+
by using the knowledge of the Hartree-Fock reference state.
|
| 578 |
+
Concretely, during the VMC iterations, after the energies
|
| 579 |
+
have become sufficiently close to the ground state energy, we
|
| 580 |
+
stop using random initialization for our Metropolis-Hastings
|
| 581 |
+
sampling, but use the Hartree-Fock reference state instead
|
| 582 |
+
(Hartree-Fock re-initialization).
|
| 583 |
+
The effect of the Hartree-
|
| 584 |
+
Fock re-initialization is demonstrated in Fig. 4, where we have
|
| 585 |
+
taken the H2O molecule as our example. To show the versa-
|
| 586 |
+
tility of the Hartree-Fock re-initialization, we demonstrate its
|
| 587 |
+
effect for RBM as well. We can see that for both tanh-FCN
|
| 588 |
+
and RBM, using Hartree-Fock re-initialization after a num-
|
| 589 |
+
|
| 590 |
+
5
|
| 591 |
+
ber of VMC iterations can greatly accelerate the convergence
|
| 592 |
+
and reach a lower ground state energy than using random ini-
|
| 593 |
+
tialization throughout the VMC optimization. We can also
|
| 594 |
+
see that for the H2O molecule tanh-FCN is less accurate than
|
| 595 |
+
RBM using the same Nh, which is probably due to the fact
|
| 596 |
+
that under the same Nh tanh-FCN has a different expressive
|
| 597 |
+
power as RBM for H2O.
|
| 598 |
+
IV.
IV. CONCLUSION

We propose a fully connected neural network inspired by the restricted Boltzmann machine to solve quantum chemistry problems. Compared to RBM, our tanh-FCN is able to output both positive and negative numbers even though the parameters of the network are purely real. As a result we can directly study quantum chemistry problems using tanh-FCN with real numbers. In our numerical simulations, we demonstrate that tanh-FCN can be used to compute the ground states with high accuracy for a wide range of molecular systems with up to 30 qubits. In addition, we propose to explicitly use the Hartree-Fock reference state as the initial state for the Markov chain sampling used during the VMC algorithm, and we demonstrate that this technique can significantly accelerate the convergence and improve the accuracy of the final energy for both tanh-FCN and RBM. Our method can be used in combination with existing high-performance computing devices, which are well optimized for real numbers, so as to provide a scalable solution for large-scale quantum chemistry problems.

ACKNOWLEDGMENTS

We thank Xiao Liang and Mingfan Li for helpful discussions of the algorithm. C. G. acknowledges support from the National Natural Science Foundation of China under Grant No. 11805279. H. S. acknowledges support from the National Natural Science Foundation of China (22003073, T2222026). D. P. acknowledges support from the National Research Foundation, Singapore under its QEP2.0 programme (NRF2021-QEP2-02-P03).
[1] "Front matter," in Molecular Electronic-Structure Theory (John Wiley & Sons, Ltd, 2000).
[2] K. D. Vogiatzis, D. Ma, J. Olsen, L. Gagliardi, and W. A. de Jong, The Journal of Chemical Physics 147, 184111 (2017).
[3] S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
[4] S. R. White, Phys. Rev. B 48, 10345 (1993).
[5] J. Brabec, J. Brandejs, K. Kowalski, S. Xantheas, Ö. Legeza, and L. Veis, Journal of Computational Chemistry 42, 534 (2021).
[6] H. R. Larsson, H. Zhai, C. J. Umrigar, and G. K.-L. Chan, Journal of the American Chemical Society 144, 15932 (2022).
[7] D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Quantum Info. Comput. 7, 401-430 (2007).
[8] G. D. Purvis and R. J. Bartlett, Journal of Chemical Physics 76, 1910 (1982).
[9] J. Čížek, The Journal of Chemical Physics 45, 4256 (1966).
[10] F. Coester and H. Kümmel, Nuclear Physics 17, 477 (1960).
[11] R. Shepard, "The multiconfiguration self-consistent field method," in Advances in Chemical Physics (John Wiley & Sons, Ltd, 1987) pp. 63-200.
[12] P. J. Knowles and H.-J. Werner, Chemical Physics Letters 115, 259 (1985).
[13] H. J. A. Jensen, "Electron correlation in molecules using direct second order MCSCF," in Relativistic and Electron Correlation Effects in Molecules and Solids, edited by G. L. Malli (Springer US, Boston, MA, 1994) pp. 179-206.
[14] Q. Sun, X. Zhang, S. Banerjee, P. Bao, M. Barbry, N. S. Blunt, N. A. Bogdanov, G. H. Booth, J. Chen, Z.-H. Cui, J. J. Eriksen, Y. Gao, S. Guo, J. Hermann, M. R. Hermes, K. Koh, P. Koval, S. Lehtola, Z. Li, J. Liu, N. Mardirossian, J. D. McClain, M. Motta, B. Mussard, H. Q. Pham, A. Pulkin, W. Purwanto, P. J. Robinson, E. Ronca, E. R. Sayfutyarova, M. Scheurer, H. F. Schurkus, J. E. T. Smith, C. Sun, S.-N. Sun, S. Upadhyay, L. K. Wagner, X. Wang, A. White, J. D. Whitfield, M. J. Williamson, S. Wouters, J. Yang, J. M. Yu, T. Zhu, T. C. Berkelbach, S. Sharma, A. Y. Sokolov, and G. K.-L. Chan, The Journal of Chemical Physics 153, 024109 (2020).
[15] G. Carleo and M. Troyer, Science 355, 602 (2017).
[16] K. Choo, T. Neupert, and G. Carleo, Phys. Rev. B 100, 125124 (2019).
[17] M. Schmitt and M. Heyl, Phys. Rev. Lett. 125, 100503 (2020).
[18] D. Yuan, H.-R. Wang, Z. Wang, and D.-L. Deng, Phys. Rev. Lett. 126, 160401 (2021).
[19] J. R. Moreno, G. Carleo, A. Georges, and J. Stokes, Proceedings of the National Academy of Sciences 119, e2122059119 (2022).
[20] J. Hermann, Z. Schätzle, and F. Noé, Nature Chemistry 12, 891 (2020).
[21] D. Pfau, J. S. Spencer, A. G. D. G. Matthews, and W. M. C. Foulkes, Phys. Rev. Res. 2, 033429 (2020).
[22] S. Humeniuk, Y. Wan, and L. Wang, arXiv:2210.05871 (2022).
[23] K. Choo, A. Mezzacapo, and G. Carleo, Nature Communications 11, 2368 (2020).
[24] T. D. Barrett, A. Malyshev, and A. Lvovsky, Nature Machine Intelligence 4, 351 (2022).
[25] T. Zhao, J. Stokes, and S. Veerapaneni, arXiv:2208.05637 (2022).
[26] D. Wu, R. Rossi, F. Vicentini, and G. Carleo, arXiv:2206.12363 (2022).
[27] O. Sharir, A. Shashua, and G. Carleo, Phys. Rev. B 106, 205136 (2022).
[28] I. Glasser, N. Pancotti, M. August, I. D. Rodriguez, and J. I. Cirac, Phys. Rev. X 8, 011006 (2018).
[29] D.-L. Deng, X. Li, and S. Das Sarma, Phys. Rev. X 7, 021021 (2017).
[30] Y. Nomura and M. Imada, Phys. Rev. X 11, 031034 (2021).
[31] X. Liang, M. Li, Q. Xiao, H. An, L. He, X. Zhao, J. Chen, C. Yang, F. Wang, H. Qian, et al., arXiv:2204.07816 (2022).
[32] G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, Nature Physics 14, 447 (2018).
[33] W. K. Hastings, Biometrika 57, 97 (1970).
[34] S. Sorella and L. Capriotti, Phys. Rev. B 61, 2599 (2000).
[35] S. Sorella, M. Casula, and D. Rocca, The Journal of Chemical Physics 127, 014105 (2007).
[36] F. Vicentini, D. Hofmann, A. Szabó, D. Wu, C. Roth, C. Giuliani, G. Pescia, J. Nys, V. Vargas-Calderón, N. Astrakhantsev, and G. Carleo, SciPost Phys. Codebases, 7 (2022).
[37] W. Zhang, X. Xu, Z. Wu, V. Balachandran, and D. Poletti, arXiv:2207.10882 (2022).
[38] D. P. Kingma and J. Ba, in 3rd International Conference
|
| 778 |
+
on Learning Representations, ICLR 2015, San Diego, CA,
|
| 779 |
+
USA, May 7-9, 2015, Conference Track Proceedings, edited by
|
| 780 |
+
Y. Bengio and Y. LeCun (2015).
|
| 781 |
+
[39] R. Zen, L. My, R. Tan, F. H´ebert, M. Gattobigio, C. Miniatura,
|
| 782 |
+
D. Poletti, and S. Bressan, Phys. Rev. E 101, 053301 (2020).
|
| 783 |
+
[40] F. H´ebert, R. Zen, L. My, R. Tan, M. Gattobigio, C. Miniatura,
|
| 784 |
+
D. Poletti, and S. Bressan, in Proceedings of the 24th European
|
| 785 |
+
Conference on Artificial Intelligence (ECAI 2020) (2020).
|
| 786 |
+
|
6tE2T4oBgHgl3EQfPAa1/content/tmp_files/load_file.txt ADDED
    The diff for this file is too large to render. See raw diff

7dE1T4oBgHgl3EQfnQQ6/content/2301.03306v1.pdf ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:04290dd42e3c5357aad7cef448fa2b1f83dd0bc84279fd22ac916319f2984f38
    +size 182639

7dE1T4oBgHgl3EQfnQQ6/vector_store/index.faiss ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:abc753f75cfce4f0bb7c4c7a5ebeadd368540ca77e2cf085548e0819d63bfada
    +size 2752557

7dE1T4oBgHgl3EQfnQQ6/vector_store/index.pkl ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:fb05c3d158458a99563a99d61cf4f8717d237230fdc117453a63519745b65d61
    +size 95185

8dFST4oBgHgl3EQfaDjo/vector_store/index.faiss ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:36dd571ca5b86c0b20b3030a074ff941610fba383bc901a5f92388762e94bbb0
    +size 3080237

8tE5T4oBgHgl3EQfQw5Q/content/2301.05515v1.pdf ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:5340c1567a13cefef35482d45a3b1843d762c316b751d91e508a58a271e834ee
    +size 1656182

8tE5T4oBgHgl3EQfQw5Q/vector_store/index.faiss ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:4e598244dd488d7de1b3c59096d2e645cdc0aabc145a9e1d52737b2e09065208
    +size 2097197

8tE5T4oBgHgl3EQfQw5Q/vector_store/index.pkl ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:74c5fa3f44ae3bb41594151bc1a4d8a3568af879eb87c6b0c9518bd409491caf
    +size 69887

9NE1T4oBgHgl3EQfUQM_/vector_store/index.faiss ADDED
    @@ -0,0 +1,3 @@
    +version https://git-lfs.github.com/spec/v1
    +oid sha256:eda0ec4a45feddc7f0f743a3f7f6a7976586038e83e0138243328e5e8ccb431c
    +size 2818093
B9AzT4oBgHgl3EQfTfz1/content/tmp_files/2301.01252v1.pdf.txt ADDED
    @@ -0,0 +1,2038 @@
Comparison of machine learning algorithms for merging gridded satellite and earth-observed precipitation data

Georgia Papacharalampous1, Hristos Tyralis2, Anastasios Doulamis3, Nikolaos Doulamis4

1 Department of Topography, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, Iroon Polytechniou 5, 157 80 Zografou, Greece (papacharalampous.georgia@gmail.com, https://orcid.org/0000-0001-5446-954X)
2 Department of Topography, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, Iroon Polytechniou 5, 157 80 Zografou, Greece (montchrister@gmail.com, hristos@itia.ntua.gr, https://orcid.org/0000-0002-8932-4997)
3 Department of Topography, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, Iroon Polytechniou 5, 157 80 Zografou, Greece (adoulam@cs.ntua.gr, https://orcid.org/0000-0002-0612-5889)
4 Department of Topography, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, Iroon Polytechniou 5, 157 80 Zografou, Greece (ndoulam@cs.ntua.gr, https://orcid.org/0000-0002-4064-8990)

Abstract: Gridded satellite precipitation datasets are useful in hydrological applications as they cover large regions with high density. However, they are not accurate in the sense that they do not agree with ground-based measurements. An established means for improving their accuracy is to correct them by adopting machine learning algorithms. The problem is defined as a regression setting, in which the ground-based measurements have the role of the dependent variable and the satellite data are the predictor variables, together with topography factors (e.g., elevation). Most studies of this kind involve a limited number of machine learning algorithms, and are conducted at a small region and for a limited time period. Thus, the results obtained through them are of local importance and do not provide more general guidance and best practices. To provide results that are generalizable and to contribute to the delivery of best practices, we here compare eight state-of-the-art machine learning algorithms in correcting satellite precipitation data for the entire contiguous United States and for a 15-year period. We use monthly data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) gridded dataset, together with monthly earth-observed precipitation data from the Global Historical Climatology Network monthly database, version 2 (GHCNm). The results suggest that extreme gradient boosting (XGBoost) and random forests are the most accurate in terms of the squared error scoring function. The remaining algorithms can be ordered as follows from the best to the worst ones: Bayesian regularized feed-forward neural networks, multivariate adaptive polynomial splines (poly-MARS), gradient boosting machines (gbm), multivariate adaptive regression splines (MARS), feed-forward neural networks and linear regression.

Keywords: contiguous US; gradient boosting machines; large-scale benchmarking; PERSIANN; poly-MARS; random forests; remote sensing; satellite precipitation correction; spatial interpolation; XGBoost
1. Introduction

Knowing the quantity of precipitation at a dense spatial grid and for an extensive time period is important in solving a variety of hydrological engineering and science problems, including many of the major unsolved problems listed in Blöschl et al. (2019). The main sources of precipitation data are ground-based gauge networks and satellites (Sun et al. 2018). Data from ground-based gauge networks are precise; however, maintaining such a network with high spatial density and for a long time period is costly. On the other hand, satellite precipitation data are cheap to obtain but not accurate (Mega et al. 2019, Salmani-Dehaghi and Samani 2021, Li et al. 2022, Tang et al. 2022).

By merging gridded satellite precipitation products and ground-based measurements, we can obtain data that are more accurate than the raw satellite data and, simultaneously, cover space with much higher density compared to the ground-based measurements. This merging is practically a regression problem in a spatial setting, with the satellite data being the predictor variables and the ground-based data being the dependent variables. Such kinds of problems are also commonly referred to under the term "downscaling" and are special types of spatial interpolation. The latter problem is met in a variety of fields (see, e.g., the reviews by Bivand et al. 2013, Li and Heap 2014, Heuvelink and Webster 2022, Kopczewska 2022). Reviews of the relevant methods for the case of precipitation can be found in Hu et al. (2019) and Abdollahipour et al. (2022).

Spatial interpolation of precipitation by merging satellite precipitation products and ground-based measurements has been done at multiple temporal and spatial scales by using a variety of regression algorithms, including several machine learning ones. A non-exhaustive list of previous works on the topic and a summary of their methodological information can be found in Table 1. Notably, this table is indicative of the large diversity in the temporal and spatial scales examined and in the algorithms utilized.
Table 1. Summary of previous works on merging gridded satellite precipitation products and ground-based measurements.

Reference | Time scale | Spatial scale | Algorithms
He et al. (2016) | Hourly | South-western, central, north-eastern and southeast United States | Random forests
Meyer et al. (2016) | Daily | Germany | Random forests, artificial neural networks, support vector regression
Tao et al. (2016) | Daily | Central United States | Deep learning
Yang et al. (2016) | Daily | Chile | Quantile mapping
Baez-Villanueva et al. (2020) | Daily | Chile | Random forests
Chen et al. (2020a) | Daily | Dallas–Fort Worth in the United States | Deep learning
Chen et al. (2020b) | Daily | Xijiang basin in China | Geographically weighted ridge regression
Rata et al. (2020) | Annual | Chéliff watershed in Algeria | Kriging
Chen et al. (2021) | Monthly | Sichuan Province in China | Artificial neural networks, geographical weighted regression, kriging, random forests
Nguyen et al. (2021) | Daily | South Korea | Random forests
Shen and Yong (2021) | Annual | China | Gradient boosting decision trees, random forests, support vector regression
Zhang et al. (2021) | Daily | China | Artificial neural networks, extreme learning machine, random forests, support vector regression
Chen et al. (2022a) | Daily | Coastal mountain region in the western United States | Deep learning
Fernandez-Palomino et al. (2022) | Daily | Ecuador and Peru | Random forests
Lin et al. (2022) | Daily | Three Gorges Reservoir area in China | Adaptive boosting decision trees, decision trees, random forests
Yang et al. (2022) | Daily | Kelantan river basin in Malaysia | Deep learning
Zandi et al. (2022) | Monthly | Alborz and Zagros mountain ranges in Iran | Artificial neural networks, locally weighted linear regression, random forests, stacked generalization, support vector regression
Militino et al. (2023) | Daily | Navarre in Spain | K-nearest neighbors, random forests, artificial neural networks
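The regression formulation behind all of the studies above — satellite estimates and topography as predictor variables, gauge measurements as the dependent variable — can be sketched in a few lines. The snippet below is a minimal Python illustration with synthetic data; the variable names and values are illustrative assumptions only, and scikit-learn stands in for the R packages discussed later in the paper.

```python
# Minimal sketch of satellite-gauge merging as a regression problem.
# Synthetic values stand in for satellite precipitation, elevation and
# gauge measurements; a random forest is used as one example regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Predictor variables: satellite precipitation estimate and topography.
satellite_precip = rng.gamma(shape=2.0, scale=30.0, size=n)   # mm/month
elevation = rng.uniform(0.0, 3000.0, size=n)                  # metres

# Dependent variable: gauge precipitation, here a biased satellite
# signal plus an elevation effect and noise (purely synthetic).
gauge_precip = 0.8 * satellite_precip + 0.01 * elevation + rng.normal(0, 5, n)

X = np.column_stack([satellite_precip, elevation])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, gauge_precip)

# "Corrected" satellite estimates at the gauge locations.
corrected = model.predict(X)
print(corrected.shape)  # (500,)
```

In a real merging study, the fitted model would then be applied at every grid cell of the satellite product to produce a corrected precipitation field.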
Machine learning for spatial interpolation has gained prominence in various fields of environmental science (Li et al. 2011). These fields include but are not limited to the agricultural sciences (Baratto et al. 2022), climate science (Sekulić et al. 2020b, Sekulić et al. 2021), hydrology (Tyralis et al. 2019c, Papacharalampous and Tyralis 2022b) and soil science (Wadoux et al. 2020, Chen et al. 2022b). Among the various machine learning algorithms, random forests seem to be the most frequently used ones (see the examples in Hengl et al. 2018). Notably, as machine learning algorithms do not model spatial dependence explicitly in their original form, efforts have been made to remedy this shortcoming, either directly (Saha et al. 2021) or indirectly (Behrens et al. 2018, Sekulić et al. 2020a, Georganos et al. 2021, Georganos and Kalogirou 2022). By exploiting spatial dependence information, the algorithms become more accurate.

As noted earlier, machine learning algorithms constitute a major means for merging satellite products and ground-based measurements to obtain precipitation data. However, their empirical properties are not well known. This holds because most of the existing studies investigate a few algorithms, and because their investigations may be somewhat limited in terms of the length of the time periods and the size of the geographical areas examined. Large-scale benchmark tests and comparisons could be useful in providing directions on which algorithm to implement in specific settings of practical interest; thus, they have started to appear in other hydrological sub-disciplines. Relevant examples are available in Papacharalampous et al. (2019) and Tyralis et al. (2021).

In this study, we work towards filling the above-identified gap. More precisely, we compare several machine learning algorithms with respect to how accurate they are in providing estimates of total monthly precipitation in spatial interpolation settings by merging gridded satellite products and gauge-based measurements. The comparison is made for a long time period and for a large geographical area, and thus leads to trustworthy results for the monthly time scale. Moreover, proper evaluations are made according to theory and best practices from the field of statistics, with the methodological aspects developed in this endeavour contributing to the transfer of knowledge in the overall topic of spatial interpolation using machine and statistical learning algorithms.

The remainder of the paper is structured as follows: Section 2 describes the algorithms selected and the methodology followed for exploring the relevant regression setting. Section 3 presents the data and the validation procedure. Section 4 presents the results. Section 5 discusses the most important findings and provides recommendations for future research. Section 6 concludes the work.
2. Methods

2.1 Machine learning algorithms for spatial interpolation

Eight machine learning algorithms were implemented in this work for conducting spatial interpolation and were extensively compared with each other in the context of merging gridded satellite products and gauge-based measurements. In this section, we list and briefly describe these algorithms, while their detailed description can be found in Hastie et al. (2009), James et al. (2013) and Efron and Hastie (2016). Such a description is out of the scope of this work, as the implementations and documentation of the algorithms are already available in the R programming language. The R packages utilized are listed in Appendix A.
2.1.1 Linear regression

A linear regression algorithm models the dependent variable as a linear weighted sum of the predictor variables (Hastie et al. 2009, pp 43–55). The algorithm is optimized with a squared error scoring function.
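Optimizing a linear model with the squared error reduces to ordinary least squares. A minimal self-contained sketch with synthetic data (NumPy here, rather than the R routines the paper uses):

```python
# Ordinary least squares: find the weights minimizing the squared error.
# Synthetic data with known true coefficients (intercept 0, slopes 3 and -1.5).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two predictor variables
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # intercept close to 0, slopes close to 3 and -1.5
```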
2.1.2 Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS; Friedman 1991, 1993) model the dependent variable with a weighted sum of basis functions. The total number of basis functions (product degree) and associated parameters (knot locations) are automatically determined from the data. Herein, we implemented an additive model with hinge basis functions. The implementation was made with the default parameters.
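The hinge basis functions mentioned above take the form max(0, x − t) for a knot t. The following toy sketch shows how a weighted sum of hinges yields a piecewise linear model; it illustrates only the shape of the basis, not the actual MARS fitting procedure, which selects the knots and weights automatically from the data.

```python
# Toy illustration of MARS-style hinge basis functions: a weighted sum
# of max(0, x - t) terms gives a piecewise linear curve whose slope
# changes at each knot. (Not the actual MARS fitting algorithm.)
import numpy as np

def hinge(x, knot):
    """Right hinge: zero below the knot, linear above it."""
    return np.maximum(0.0, x - knot)

x = np.linspace(0.0, 10.0, 101)
# Weighted sum of two hinges: slope 0.5 after x=3, reduced by 1.2 after x=7.
y = 1.0 + 0.5 * hinge(x, 3.0) - 1.2 * hinge(x, 7.0)

print(y[0])  # 1.0: both hinges are zero at x = 0
```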
2.1.3 Multivariate adaptive polynomial splines

Multivariate adaptive polynomial splines (poly-MARS; Kooperberg et al. 1997, Stone et al. 1997) use piecewise linear splines to model the dependent variable in an adaptive regression procedure. Their main differences compared to MARS are that they require "linear terms of a predictor to be in the model before nonlinear terms using the same predictor can be added", along with "a univariate basis function to be in the model before a tensor-product basis function involving the univariate basis function can be in the model" (Kooperberg 2022). In the present work, multivariate adaptive polynomial splines were implemented with the default parameters.
2.1.4 Random forests
Random forests (Breiman 2001a) are an ensemble of regression trees based on bagging (acronym for “bootstrap aggregation”). The benefits accompanying the application of this algorithm were summarized by Tyralis et al. (2019b), who also documented its recent popularity in hydrology with a systematic literature review. In random forests, a fixed number of predictor variables are randomly selected as candidates when determining the nodes of the regression tree. Herein, random forests were implemented with the default parameters. The number of trees was equal to 500.
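The paper's implementation relies on R packages; as an illustrative equivalent only, the setting above (a 500-tree bagged ensemble with a random subset of predictors considered at each split) can be sketched with scikit-learn on synthetic data. All variable names and data here are hypothetical, not from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in for the regression setting: 4 predictor columns,
# one continuous dependent variable.
X = rng.normal(size=(200, 4))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

# 500 trees, matching the ensemble size used in the paper; max_features
# restricts the candidate predictors evaluated at each tree node.
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt", random_state=0)
rf.fit(X, y)
pred = rf.predict(X)
print(pred.shape)  # (200,)
```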
2.1.5 Gradient boosting machines
Gradient boosting machines are an ensemble learning algorithm. In brief, they iteratively train new base learners using the errors of previously trained base learners (Friedman 2001, Mayr et al. 2014, Natekin and Knoll 2013, Tyralis and Papacharalampous 2021). The final algorithm is essentially a sum of the trained base learners. Optimization is performed by using a gradient descent algorithm and by adapting the loss function. In the implementation of this work, the optimization's scoring function was the squared error and the base learners were regression trees. Also, the number of trees was set equal to 500 for keeping consistency with the implementation of the random forest algorithm. The defaults were used for the remaining parameters.
2.1.6 Extreme gradient boosting
Extreme gradient boosting (XGBoost; Chen and Guestrin 2016) is another boosting algorithm. It is considerably faster and performs better in comparison to traditional implementations of boosting algorithms. It is also further regularized compared to such implementations for controlling overfitting. In the implementation of this work, the maximum number of boosting iterations was set equal to 500. The remaining parameters were kept as default. For instance, the maximum depth of each tree was kept equal to 6.
2.1.7 Feed-forward neural networks
Artificial neural networks (or simply “neural networks”) extract linear combinations of the predictor variables as derived features and, subsequently, model the dependent variable as a nonlinear function of these features (Hastie et al. 2009, p 389). Herein, we used feed-forward neural networks (Ripley 1996, pp 143–180). The number of units in the hidden layer and the maximum number of iterations were set equal to 20 and 1 000, respectively, while the remaining parameters were kept as default.
2.1.8 Feed-forward neural networks with Bayesian regularization
Feed-forward neural networks with Bayesian regularization (MacKay 1992) for avoiding overfitting were also employed herein. In the respective implementation, the number of neurons was set equal to 20 and the remaining parameters were kept as default. For instance, the maximum number of iterations was kept equal to 1 000.
2.2 Variable importance metric
We computed the random forests' permutation importance of the predictor variables. This metric measures the mean increase of the prediction mean squared error on the out-of-bag portion of the data after permuting each predictor variable in the regression trees of the trained model, and it provides relative rankings of the importance of the predictor variables (Breiman 2001a). More generally, variable importance metrics can support explanations of the performance of machine learning algorithms (Breiman 2001b, Shmueli 2010), thereby expanding the overall scope of machine learning. This scope is often perceived as limited to the provision of accurate predictions. Random forests were fitted with 5 000 trees for computing variable importance.
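As an illustrative approximation of this metric (the R implementation permutes on the out-of-bag samples; scikit-learn's permutation_importance permutes on a supplied held-out set instead), the computation can be sketched as follows on synthetic data, with a smaller ensemble than the 5 000 trees used in the paper for speed:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = 2 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Mean increase of the MSE after permuting each predictor; larger increase
# means a more important predictor.
imp = permutation_importance(rf, X_te, y_te,
                             scoring="neg_mean_squared_error", random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print(ranking)  # predictor 0 should rank first, given its dominant coefficient
```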
3. Data and application
3.1 Data
Our experiments relied totally on open databases that offer earth-observed precipitation data at the monthly temporal resolution, gridded satellite precipitation data and elevation data for all the gauged locations and grid points shown in Figure 1.

Figure 1. Maps of the geographical locations of the: (a) earth-located stations offering data for the present work; and (b) points composing the Persiann grid defined herein.
[Figure: two maps of the contiguous US, with longitude (°) on the horizontal axis and latitude (°) on the vertical axis.]
3.1.1 Earth-observed precipitation data
Total monthly precipitation data of the Global Historical Climatology Network monthly database, version 2 (GHCNm; Peterson and Vose 1997) were used for the verification of the implemented algorithms. From the entire database, 1 421 stations that are located in the contiguous US were extracted, and data that span the time period 2001–2015 were selected. These data were sourced from the website of the National Oceanic and Atmospheric Administration (NOAA) (https://www.ncei.noaa.gov/pub/data/ghcn/v2; accessed on 2022-09-24).
3.1.2 Satellite precipitation data
For the application, we additionally used precipitation data from the current operational PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) system. The latter was developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI). The PERSIANN satellite data are created using artificial neural networks to establish a relationship between remotely sensed cloud-top temperature, measured by long-wave infrared (IR) sensors on geostationary orbiting satellites, and rainfall rates. Bias correction from passive microwave (PMW) records measured by low Earth-orbiting (LEO) satellites (Hsu et al. 1997, Nguyen et al. 2018, Nguyen et al. 2019) is also applied. These data were sourced in their daily format from the CHRS website (https://chrsdata.eng.uci.edu; accessed on 2022-03-07).
The final product spans a grid with a spatial resolution of 0.25° × 0.25°. We extracted a grid that spans the contiguous United States for the time period 2001–2015. We also transformed daily precipitation to total monthly precipitation for supporting the investigations of this work.
3.1.3 Elevation data
For all the gauged geographical locations and the grid points shown in Figure 1, elevation data were computed by using the get_elev_point function of the elevatr R package (Hollister 2022). This function extracts point elevation data from the Amazon Web Services (AWS) Terrain Tiles (https://registry.opendata.aws/terrain-tiles; accessed on 2022-09-25). Elevation is a key variable in predicting hydrological processes (Xiong et al. 2022).
3.2 Validation setting and predictor variables
We define the earth-observed total monthly precipitation at the point of interest as the dependent variable. Notably, the ground-based stations are located irregularly in the region (see Figure 1); therefore, the problem of defining predictor variables is not the usual one met in problems with tabular data. To formulate the regression settings, we found, separately for each station, the closest four grid points and computed the distances di, i = 1, 2, 3, 4 from those points. We also indexed the points Si, i = 1, 2, 3, 4 according to their distance from the station, where d1 < d2 < d3 < d4 (see Figure 2).

Figure 2. Setting of the regression problem. Note that the term “grid point” is used to describe the geographical locations with satellite data, while the term “station” is used to describe the geographical locations with ground-based measurements. Note also that, throughout the present work, the distances di, i = 1, 2, 3, 4 are also referred to as “distances #1−4”, respectively, and the total monthly precipitation values at the grid points #1−4 are referred to as “Persiann values #1−4”, respectively.
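The search for the four closest grid points per station can be sketched as follows; this is a minimal illustration with hypothetical coordinates, using Euclidean distance in degrees (great-circle distance would be a refinement), not the paper's actual procedure or data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Hypothetical coordinates: a regular 0.25-degree satellite grid and
# irregularly located stations over a lon/lat window.
lons, lats = np.meshgrid(np.arange(-120, -80, 0.25), np.arange(30, 50, 0.25))
grid = np.column_stack([lons.ravel(), lats.ravel()])
stations = np.column_stack([rng.uniform(-120, -80, 10), rng.uniform(30, 50, 10)])

# For each station, the four closest grid points S1..S4 and distances d1..d4,
# returned already sorted so that d1 < d2 < d3 < d4.
d, idx = cKDTree(grid).query(stations, k=4)
assert np.all(np.diff(d, axis=1) >= 0)  # ascending distance order
print(d.shape, idx.shape)  # (10, 4) (10, 4)
```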
Possible predictor variables for the technical problem of the present work are the total monthly precipitation values at the four closest grid points (which are referred to as “Persiann values #1−4”), the respective distances from the station (which are referred to as “distances #1−4”), the station's elevation, and the station's longitude and latitude. We defined and examined three different regression settings. Each of these corresponds to a different set of predictor variables (see Table 2).
Table 2. Inclusion of predictor variables in the predictor sets examined in this work.

Predictor variable    Predictor set #1    Predictor set #2    Predictor set #3
Persiann value #1     ✔                   ✔                   ✔
Persiann value #2     ✔                   ✔                   ✔
Persiann value #3     ✔                   ✔                   ✔
Persiann value #4     ✔                   ✔                   ✔
Distance #1           ×                   ✔                   ✔
Distance #2           ×                   ✔                   ✔
Distance #3           ×                   ✔                   ✔
Distance #4           ×                   ✔                   ✔
Station elevation     ✔                   ✔                   ✔
Station longitude     ×                   ×                   ✔
Station latitude      ×                   ×                   ✔
Predictor sets #1 and #2 do not account directly for possible spatial dependencies, as the station's longitude and latitude are not part of them. Still, by using these predictor sets, spatial dependence is modelled indirectly, through covariance information (satellite precipitation at close points and station elevation). Predictor set #2 includes more information with respect to predictor set #1 and, more precisely, the distances between the station location and the closest grid points. Predictor set #3 allows spatial dependence modelling, as it comprises the station's longitude and latitude.
The dataset is composed of 91 623 samples. Each sample includes the total monthly precipitation observation at a specific earth-located station for a specified month and a specified year, as well as the respective values of the predictor variables, with the latter being dependent on the regression setting (see Table 2). The results of the performance comparison are obtained within a five-fold cross-validation setting.
Overall, the validation setting proposed in this work benefits from the following:
− Stations with missing monthly precipitation values do not need to be excluded from the dataset, and missing values do not need to be filled. These hold as a varying number of stations are included in the procedure for each time point in the period investigated. In brief, we keep a dataset with the maximum possible size and we do not add uncertainties to the procedure by filling the missing values.
− The cross-validation is totally random with respect to both space and time. That is a standard procedure in the validation of precipitation products that combine satellite and gauge-based data.
− In the setting proposed, it is possible to create a corrected gridded precipitation dataset, because after fitting the regression algorithm it is possible to directly interpolate in space conditional upon the predictor variables that are known.
− There is no need to first interpolate the station data to grid points and then verify the algorithms based on the previously interpolated earth-observed data. That procedure is common in the field but it induces additional uncertainties.
A few limitations of the validation setting proposed in this work also exist. Indeed, there might be some degree of bias due to the fact that this setting does not incorporate, in a direct way, information on spatial dependencies. Such an incorporation would require a different partitioning of the dataset (Meyer and Pebesma 2021, 2022), as machine learning models that may explicitly model spatial dependencies (see, e.g., Liu et al. 2022, Talebi et al. 2022) may not be applicable in settings with a varying number of spatial observations at different times.
To deliver exploratory insight into the technical problem investigated in this work, we additionally estimated the Spearman correlation (Spearman 1904) for all the possible pairs of the variables appearing in the regression settings. We also ranked all the predictor variables with respect to their importance in predicting the dependent variable, after estimating the importance according to Section 2.2.
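Spearman correlation is the Pearson correlation of the ranks, so it captures monotone (including nonlinear) relationships. A minimal sketch with synthetic data (the variables below are hypothetical, not the paper's):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = x ** 3 + rng.normal(scale=0.1, size=500)  # monotone but nonlinear link

# A strongly monotone relationship yields a Spearman correlation near 1,
# even though the link is far from linear.
rho, pval = spearmanr(x, y)
print(rho > 0.9)
```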
3.3 Performance metrics and assessment
To compare the algorithms outlined in Section 2.1 in performing the spatial interpolation, we used the squared error scoring function. This function is defined by

S(x, y) := (x − y)²                                                  (1)

In the above equation, y is the realization (observation) of the spatial process and x is the prediction. The squared error scoring function is consistent for the mean functional of the predictive distributions (Gneiting 2011). Predictions of models in hydrology should be provided in probabilistic terms (see, e.g., the relevant review by Papacharalampous and Tyralis 2022a); still, a specific functional of the predictive distribution may be of interest. A model trained with the squared error scoring function predicts the mean functional of the predictive distribution (Gneiting 2011).
The performance criterion for the machine learning algorithms takes the form of the median squared error (MedSE), computed as the median of the squared error function, separately for each set {machine learning algorithm, predictor set, test fold}, according to Equation (2). In this equation, the subscript to x and y, i.e., i ∈ {1, …, n}, indicates the sample.

MedSE := median{S(xi, yi), i ∈ {1, …, n}}                            (2)
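The per-fold MedSE computation within five-fold cross-validation can be sketched as follows; this is an illustrative Python sketch with synthetic data and linear regression as the fitted algorithm, not the paper's R pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, 0.5, -0.5, 0.2]) + rng.normal(scale=0.2, size=500)

def medse(y_true, y_pred):
    """Median of the squared errors S(x, y) = (x - y)**2, as in Equation (2)."""
    return float(np.median((y_pred - y_true) ** 2))

# One MedSE value per test fold, mirroring the five-fold setting.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    scores.append(medse(y[test_idx], model.predict(X[test_idx])))
print(len(scores))  # 5
```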
The five MedSE values computed for each set {machine learning algorithm, predictor set} were then used to compute five relative scores (also referred to as “relative improvements” herein), separately for each predictor set, by using the set {linear regression, predictor set} as the reference case. These relative scores were then averaged, separately for each set {machine learning algorithm, predictor set}, to provide mean relative scores (also referred to as “mean relative improvements” herein). A skill score with linear regression as the reference technique for an arbitrary algorithm of interest k is defined by

Sskill := MedSE{k, predictor set} / MedSE{linear regression, predictor set}     (3)

The relative scores computed for the assessment are defined by

RS{k, predictor set} := 100 (1 − Sskill)                             (4)
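Equations (3) and (4) reduce to a ratio of MedSE values turned into a percentage improvement over the reference. A minimal sketch with hypothetical MedSE values:

```python
def relative_score(medse_algorithm: float, medse_reference: float) -> float:
    """Relative score of Equations (3)-(4): percentage improvement in MedSE
    of an algorithm over the linear regression reference."""
    s_skill = medse_algorithm / medse_reference  # Equation (3)
    return 100.0 * (1.0 - s_skill)               # Equation (4)

# Hypothetical values: an algorithm with MedSE 60 against a reference of 100
# improves on the reference by 40%.
print(relative_score(60.0, 100.0))  # 40.0
```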
To extend the comparison to also include the assessment of differences in performance across predictor sets, the procedures for computing the relative and mean relative scores were repeated by considering the set {linear regression, predictor set #1} as the reference case for all the sets {machine learning algorithm, predictor set}. In addition to the two types of relative improvements, we present information on the rankings of the machine learning algorithms. For getting the respective results, we first ranked the eight machine learning algorithms, separately for each set {case, predictor set, test fold}. Then, we grouped these rankings per set {predictor set, test fold} and computed their mean. Lastly, we averaged the five mean ranking values corresponding to each predictor set and provided the results of this procedure, which are referred to in what follows as “mean rankings”. Moreover, we repeated the mean ranking computation after computing the rankings collectively for all the predictor sets.
Notably, we did not compare the algorithms using alternative scoring functions (e.g., the absolute error scoring function) because such functions may not be consistent for the mean functional (excluding functions of the Bregman family; Gneiting 2011). It is also possible to use other skill scores (e.g., the Nash–Sutcliffe efficiency, which is used widely in hydrology). Here, we preferred to use the simple linear regression algorithm as the reference technique. We believe that this choice is credible because of the simplicity and ease of use of the algorithm.
4. Results
4.1 Regression setting exploration
Figure 3 presents the Spearman correlation estimates for all the possible pairs of the variables appearing in the three regression settings examined in this work. The relationships between the predictand (i.e., the precipitation quantity observed at the earth-located stations) and the 11 predictor variables (see Section 3.2) can be assessed through the estimates displayed in the first column on the left side of the heatmap. Based on the Spearman correlation estimates, the strongest and, at the same time, equally strong among these relationships are those between the predictand and the four predictors referring to the precipitation quantities drawn from the Persiann grid. A possible explanation of this equality could be found in the Spearman correlation estimates made for the six pairs of Persiann values, which are equal to either 0.98 or 0.99, indicating extremely strong relationships. This strength can, in its turn, be attributed to strong spatial relationships on the Persiann grid.
Figure 3. Heatmap of the Spearman correlation estimates for all the possible pairs of the variables appearing in the three regression settings.
Other relationships that are notably strong and, thus, expected, at least at an initial stage, to be particularly beneficial for estimating precipitation in the herein adopted
spatial setting are those indicated by the values 0.45 and −0.40 (which again appear in the same column of the same heatmap; see Figure 3). The former (latter) of these values refers to the relationship between the precipitation quantity observed at an earth-located station and the longitude (elevation) at the location of this station. The remaining relationships between the predictand and the predictor variables are found to be less strong; nonetheless, they could also be worth exploiting in the regression setting. Examples of the above-discussed relationships can be further inspected through Figure 4.
Figure 4. Scatterplots between the predictand (i.e., the precipitation value observed at an earth-located station) and the following predictor variables: (a) elevation at the location of this station; (b) precipitation value at the closest point on the Persiann grid for this station; (c) distance of the fourth closest point on the Persiann grid for this station; and (d) longitude at the location of this station. The Spearman correlation estimates are repeated here from Figure 3 for convenience. The more reddish the colour on the graphs, the denser the points.
Moreover, Figure 5 presents the estimates of the importance of the 11 predictor variables, as these estimates were provided by the random forest algorithm when
considering all of these predictor variables in the regression setting. It also provides the ordering of the same estimates, which is also the ordering of the 11 predictor variables according to their importance. The longitude at the location of the earth-located station is the most important predictor variable, followed by the precipitation quantities drawn from the first, second and fourth closest points to the earth-located station on the Persiann grid. These latter three predictors are followed by the elevation at the location of the earth-located station. The next predictor in terms of importance is the precipitation quantity drawn from the third closest point to the earth-located station on the Persiann grid. The latitude at the location of the earth-located station follows, and the four variables referring to distances are the least important ones.
Figure 5. Barplot of the permutation importance scores of the predictor variables. The latter were ordered from the most to the least important ones (from top to bottom) based on the same scores.
4.2 Comparison of the algorithms
Figure 6 presents information that directly allows us to understand how the algorithms outlined in Section 2.1 performed with respect to each other in the various experiments, separately for each predictor set. Both the mean relative improvements (Figure 6a) and the mean rankings (Figure 6b) indicate that, overall, extreme gradient boosting (XGBoost) and random forests are the two best-performing algorithms. In terms of mean relative improvements, the former of these algorithms led to a much better performance than the latter when they were both run with the predictor sets #1 and #2, and to a somewhat better performance than the latter when they were both run with the predictor set #3. Feed-forward neural networks with Bayesian regularization follow and, in terms of mean rankings, were empirically proven to have an almost equally good performance with random forests. Multivariate adaptive polynomial splines (poly-MARS) and gradient boosting machines (gbm) are the fourth- and fifth-best algorithms, respectively. While the mean rankings corresponding to the latter two algorithms do not suggest large differences in their performance, the mean relative improvements favour poly-MARS to a notable extent. In terms of both mean relative improvements and mean rankings, feed-forward neural networks performed better than gbm and multivariate adaptive regression splines (MARS) when these three algorithms were run with the predictor set #1. The linear regression algorithm was the worst for all the predictor sets investigated in this work. For the predictor sets #2 and #3, feed-forward neural networks were the second-worst algorithm, with performance very close to that of linear regression, probably due to overfitting.
Figure 6. Heatmaps of the: (a) relative improvement (%) in terms of the median squared error metric, averaged across the five folds, as this improvement was provided by each machine and statistical learning algorithm with respect to the linear regression algorithm; and (b) mean ranking of each machine and statistical learning algorithm, averaged across the five folds. The computations were made separately for each predictor set. The darker the colour, the better the predictions on average.
Furthermore, Figure 7 facilitates comparisons, both across algorithms and across predictor sets, of the frequency with which each algorithm appeared in the various positions from the first to the eighth (i.e., the last) in the experiments. For the predictor set #1 (see Figure 7a), the linear regression algorithm was most commonly found in the last position, while its second most common position was the first, and the six remaining positions appeared with much smaller and largely comparable frequencies. For the same predictor set, the XGBoost algorithm followed a somewhat similar pattern, although for it
the first position was found to be the most common and the last position the second most common. The remaining positions appeared with smaller frequencies. Also, the remaining algorithms were found less frequently in the first and last positions than the linear regression and XGBoost algorithms, with random forests appearing more often in these same positions than the other five algorithms. The frequency with which random forests appeared in the first, second, seventh and eighth positions is almost the same and larger than the frequency with which it appeared in the middle four positions. On the other hand, poly-MARS, feed-forward neural networks and feed-forward neural networks with Bayesian regularization appeared more often in the four middle positions than in the first two and last two positions, and MARS appeared more often in the six middle positions than in the first and last positions.

Figure 7. Sinaplots of the rankings from 1 to 8 of the machine and statistical learning algorithms for the predictor sets (a−c) #1−3. These rankings were computed separately for each pair {fold, predictor set}.
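The per-pair rankings visualized in these sinaplots can be sketched as follows; this is an illustrative Python snippet with synthetic error values (the study was implemented in R), and the algorithm abbreviations are ours.

```python
# Illustrative sketch of ranking eight algorithms from 1 (best) to 8 (worst)
# by their median squared error within one {fold, predictor set} pair.
# Synthetic error values; abbreviations are ours for illustration.
import numpy as np

algorithms = ["LR", "MARS", "poly-MARS", "RF", "gbm", "XGBoost", "FFNN", "BRNN"]
rng = np.random.default_rng(3)
median_sq_errors = rng.uniform(10.0, 50.0, size=len(algorithms))

# argsort of argsort turns raw scores into 0-based ranks; +1 gives ranks 1..8,
# with the smallest error receiving rank 1.
ranks = median_sq_errors.argsort().argsort() + 1
ranking = dict(zip(algorithms, ranks.tolist()))
```

Repeating this for every fold and predictor set yields the point clouds shown in the sinaplots.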
For the predictor set #2 (see Figure 7b), most of the above-discussed patterns are differentiated. Notably, for this predictor set, the patterns observed for feed-forward neural networks and linear regression are quite similar. These algorithms appeared in one of the last two positions more often than any other algorithm. Moreover, the seventh position was somewhat more frequent for them, and their frequency of appearance in the
first, third, fourth, fifth and sixth positions was almost the same and a bit smaller than their frequency of appearance in the second position. The same algorithms appeared in the last position equally often with the XGBoost algorithm. The latter is by far the algorithm that appeared most often in the first position. Similarly to what was previously noted for the predictor set #1, this algorithm appeared more frequently in the first and last positions than in every other position for the predictor set #2, with the first position also being much more frequent than the last one. Random forests appeared more often in the first two positions than in any other position, and the remaining algorithms appeared more often in the third, fourth, fifth and sixth positions than in the remaining four positions.
For the predictor set #3 (see Figure 7c), the frequency with which each algorithm appeared in the various positions from the first to the last exhibits more similarities with what was found for the predictor set #2 than with what was found for the predictor set #1. Yet, there are a few notable differences with respect to this reference case. In fact, although the XGBoost algorithm appeared more often, here as well, in the first and last positions, the frequency of its appearance in the remaining positions was notably larger than the respective frequency for the case of the predictor set #2. Also, the random forest algorithm appeared more often in the third, fourth, fifth and sixth positions than it did for the same reference case.
Last but not least, Figures 8 and 9 allow us to understand how much the additional predictors in predictor sets #2 and #3 improved or deteriorated the performance of the eight algorithms with respect to using the predictor set #1. The computed improvements were found to be all positive and particularly large for the random forest and the two boosting algorithms, especially when moving to predictor set #3. Notably large and positive are also the performance improvements offered by the additional predictors in predictor set #3 with respect to predictor set #1 for linear regression, MARS, poly-MARS and feed-forward neural networks with Bayesian regularization, while the same does not apply to the case of using predictor set #2 instead of predictor set #1 for the same algorithms. Figure 8 further reveals the best-performing combinations of algorithms and predictors. These are {extreme gradient boosting, predictor set #3} and {random forests, predictor set #3}, with the former offering somewhat better performance.
Figure 8. Heatmaps of: (a) the relative improvement (%) in terms of the median squared error metric, averaged across the five folds, as provided by each machine and statistical learning algorithm with respect to the linear regression algorithm run with the predictor set #1; and (b) the mean ranking of each machine and statistical learning algorithm, averaged across the five folds. The computations were made collectively for all the predictor sets. The darker the colour, the better the predictions on average.
Figure 9. Sinaplots of the rankings from 1 to 24 of the machine and statistical learning algorithms for the predictor sets (a−c) #1−3. These rankings were computed separately for each fold and collectively for all the predictor sets.
5. Discussion
In summary, the large-scale comparison showed that the machine learning algorithms of this work can be ordered from the best to the worst, as regards their accuracy in correcting satellite precipitation products at the monthly temporal scale, as follows: extreme gradient boosting (XGBoost), random forests, Bayesian regularized feed-forward neural networks, multivariate adaptive polynomial splines (poly-MARS), gradient boosting machines (gbm), multivariate adaptive regression splines (MARS), feed-forward neural networks and linear regression. The differences in performance were found to be smaller between some pairs of algorithms when the application is made with specific predictors (e.g., random forests and XGBoost when run with predictor set #3) and larger (or medium) in other cases. In particular, the magnitude of the differences between each of the two best-performing algorithms and the remaining ones, for the
case in which the most information-rich predictor set is exploited, suggests that the consideration of the findings of this work can have a large positive impact on future applications. Notably, the fact that the random forest, XGBoost and gbm algorithms perform better or, in the worst case, similarly when predictors are added could be attributed to their known theoretical properties. Summaries of these properties are provided in the reviews by Tyralis et al. (2019b) and Tyralis and Papacharalampous (2021), where extensive lists of references to the related machine learning literature are also provided.
Aside from the selection of a machine learning algorithm and the selection of a set of predictor variables, which are well covered by this work for the monthly temporal scale, there are also other important themes whose investigation could substantially improve performance in the problem of correcting satellite precipitation products at the various temporal scales. Perhaps the most worthy of discussion here is the use of ensembles of machine learning algorithms in the context of ensemble learning. A few works are devoted to ensemble learning algorithms for spatial interpolation (e.g., Davies and Van Der Laan 2016, Egaña et al. 2021) and, together with the present work, could provide a starting point for building detailed big data comparisons of ensemble learning algorithms. Note here that ensemble learning algorithms include both simple combinations (see, e.g., those in Petropoulos and Svetunkov 2020, Papacharalampous and Tyralis 2020) and more advanced stacking and meta-learning approaches (see, e.g., those in Wolpert 1992; Tyralis et al. 2019a, Montero-Manso et al. 2020, Talagala et al. 2021), and are increasingly adopted in many fields, including hydrology.
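The two ensemble-learning flavours just mentioned can be sketched in a few lines. The snippet below is an illustrative Python example with synthetic data, not the setup of any of the cited works (which, like this study, typically rely on R): the base learners, feature meanings and sizes are all assumptions, and proper stacking would fit the meta-learner on out-of-fold rather than in-sample base predictions.

```python
# Sketch of simple combination vs stacking for a regression task.
# Synthetic data; illustration only. Proper stacking fits the meta-learner
# on out-of-fold predictions of the base learners, not in-sample ones.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))        # e.g., satellite value, elevation, longitude, latitude
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0.0, 0.3, size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [RandomForestRegressor(n_estimators=50, random_state=0),
        GradientBoostingRegressor(random_state=0)]
preds_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in base])
preds_te = np.column_stack([m.predict(X_te) for m in base])

simple = preds_te.mean(axis=1)                 # simple combination: equal weights
meta = LinearRegression().fit(preds_tr, y_tr)  # stacking: weights learned by a meta-learner
stacked = meta.predict(preds_te)
```

The simple average needs no extra fitting and is a strong baseline; stacking can do better when the base learners have complementary errors.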
Other possible themes for future research, in the important direction of improving both our understanding of the practical problem of correcting satellite precipitation products and the various algorithmic solutions to this problem, include the investigation of spatial and temporal patterns (as the precipitation product correction errors might follow such patterns) and the explanation of the predictive performance of the various algorithms by combining time series feature estimation (see multiple examples of time series features in Fulcher et al. 2013, Kang et al. 2017) and explainable machine learning (see, e.g., the relevant reviews in Linardatos et al. 2020, Belle and Papantonis 2021). Examples of such investigations are available for a different modelling context in Papacharalampous et al. (2022). Last but not least, the comparisons could be extended to include algorithms for predictive uncertainty quantification. A few works are devoted to such machine learning
algorithms for spatial interpolation (e.g., Fouedjio and Klump 2019). Still, comparison frameworks and large-scale results for multiple algorithms are currently missing from the literature of satellite precipitation data correction.
6. Conclusions
Hydrological applications often rely on gridded precipitation datasets from satellites, as these datasets cover large regions with a higher spatial density than ground-based measurements. Still, the former datasets are less accurate than the latter, with the various machine learning algorithms constituting an established means for improving their accuracy in regression settings. In these settings, the ground-based measurements play the role of the dependent variable, and the satellite data play the role of the predictor variables, together with data for topography factors (e.g., elevation). The studies devoted to this important endeavour are numerous; still, most of them involve a limited number of machine learning algorithms, and are further conducted over a small region and for a limited time period. Thus, their results are mostly of local importance and cannot support the derivation of more general guidance and best practices.
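The regression setting described above can be sketched compactly. The snippet below is an illustrative Python example with fully synthetic data (this study's own implementation used R packages such as ranger and xgboost); the gauge values, satellite estimates, elevations and model choice are all assumptions for illustration.

```python
# Sketch of the correction setting: gauge measurements are the dependent
# variable; the satellite estimate and topography are the predictors.
# All data are synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
satellite = rng.gamma(2.0, 40.0, size=n)      # stand-in monthly satellite estimate (mm)
elevation = rng.uniform(0.0, 3000.0, size=n)  # stand-in topography predictor (m)
# Synthetic "gauge" truth: a biased, elevation-dependent version of the satellite value.
gauge = 0.8 * satellite + 0.01 * elevation + rng.normal(0.0, 5.0, size=n)

X = np.column_stack([satellite, elevation])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, gauge)
corrected = model.predict(X)                  # corrected satellite precipitation
```

In practice the model is trained on grid cells collocated with gauges and evaluated on held-out folds, exactly so that the corrected estimates generalize beyond the training locations.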
In this work, we moved beyond the above-outlined standard approach by comparing eight machine learning algorithms in correcting satellite precipitation data for the entire contiguous United States and for a 15-year period. More precisely, we exploited monthly satellite precipitation data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) gridded dataset and monthly earth-observed precipitation data from the Global Historical Climatology Network monthly database, version 2 (GHCNm), and based the comparison on the squared error scoring function. Overall, extreme gradient boosting (XGBoost) and random forests were found to be the most accurate algorithms, with the former being somewhat more accurate than the latter. The remaining algorithms can be ordered from best- to worst-performing as follows: feed-forward neural networks with Bayesian regularization, multivariate adaptive polynomial splines (poly-MARS), gradient boosting machines (gbm), multivariate adaptive regression splines (MARS), feed-forward neural networks and linear regression.
Conflicts of interest: The authors declare no conflict of interest.
Author contributions: GP and HT conceptualized and designed the work with input from AD and ND. GP and HT performed the analyses and visualizations, and wrote the first draft, which was commented on and enriched with new text, interpretations and discussions by AD and ND.
Funding: This work was conducted in the context of the research project BETTER RAIN (BEnefiTTing from machine lEarning algoRithms and concepts for correcting satellite RAINfall products). This research project was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the “3rd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers” (Project Number: 7368).
Acknowledgements: The authors would like to acknowledge the contribution of the late Professor Yorgos Photis in the proposal of the research project BETTER RAIN.
Appendix A
Statistical software information
We used the R programming language (R Core Team 2022) to implement the algorithms, and to report and visualize the results.
For data processing and visualizations, we used the contributed R packages caret (Kuhn 2022), data.table (Dowle and Srinivasan 2022), elevatr (Hollister 2022), ggforce (Pedersen 2022), ncdf4 (Pierce 2021), rgdal (Bivand et al. 2022), sf (Pebesma 2018, 2022), spdep (Bivand 2022, Bivand and Wong 2018, Bivand et al. 2013) and tidyverse (Wickham et al. 2019, Wickham 2022).
The algorithms were implemented by using the contributed R packages brnn (Rodriguez and Gianola 2022), earth (Milborrow 2021), gbm (Greenwell et al. 2022), nnet (Ripley 2022, Venables and Ripley 2002), polspline (Kooperberg 2022), ranger (Wright 2022, Wright and Ziegler 2017) and xgboost (Chen et al. 2022c).
The performance metrics were computed by implementing the contributed R package scoringfunctions (Tyralis and Papacharalampous 2022a, 2022b).
Reports were produced by using the contributed R packages devtools (Wickham et al. 2022), knitr (Xie 2014, 2015, 2022) and rmarkdown (Allaire et al. 2022, Xie et al. 2018, 2022).
References
[1] Abdollahipour A, Ahmadi H, Aminnejad B (2022) A review of downscaling methods of satellite-based precipitation estimates. Earth Science Informatics 15(1). https://doi.org/10.1007/s12145-021-00669-4.
[2]
|
| 1340 |
+
Allaire JJ, Xie Y, McPherson J, Luraschi J, Ushey K, Atkins A, Wickham H, Cheng J,
|
| 1341 |
+
Chang W, Iannone R (2022) rmarkdown: Dynamic documents for R. R package
|
| 1342 |
+
version 2.17. https://CRAN.R-project.org/package=rmarkdown.
|
| 1343 |
+
[3]
|
| 1344 |
+
Baez-Villanueva OM, Zambrano-Bigiarini M, Beck HE, McNamara I, Ribbe L,
|
| 1345 |
+
Nauditt A, Birkel C, Verbist K, Giraldo-Osorio JD, Xuan Thinh N (2020) RF-MEP: A
|
| 1346 |
+
novel random forest method for merging gridded precipitation products and
|
| 1347 |
+
ground-based measurements. Remote Sensing of Environment 239:111606.
|
| 1348 |
+
https://doi.org/10.1016/j.rse.2019.111606.
|
| 1349 |
+
[4]
|
| 1350 |
+
Baratto PFB, Cecílio RA, de Sousa Teixeira DB, Zanetti SS, Xavier AC (2022)
|
| 1351 |
+
Random forest for spatialization of daily evapotranspiration (ET0) in watersheds
|
| 1352 |
+
in the Atlantic Forest. Environmental Monitoring and Assessment 194(6):449.
|
| 1353 |
+
https://doi.org/10.1007/s10661-022-10110-y.
|
| 1354 |
+
[5]
|
| 1355 |
+
Behrens T, Schmidt K, Viscarra Rossel RA, Gries P, Scholten T, MacMillan RA
|
| 1356 |
+
(2018) Spatial modelling with Euclidean distance fields and machine learning.
|
| 1357 |
+
European
|
| 1358 |
+
Journal
|
| 1359 |
+
of
|
| 1360 |
+
Soil
|
| 1361 |
+
Science
|
| 1362 |
+
69(5):757–770.
|
| 1363 |
+
https://doi.org/10.1111/ejss.12687.
|
| 1364 |
+
[6]
|
| 1365 |
+
Belle V, Papantonis I (2021) Principles and practice of explainable machine
|
| 1366 |
+
learning.
|
| 1367 |
+
Frontiers
|
| 1368 |
+
in
|
| 1369 |
+
Big
|
| 1370 |
+
Data
|
| 1371 |
+
4:688969.
|
| 1372 |
+
https://doi.org/10.3389/fdata.2021.688969.
|
| 1373 |
+
[7]
|
| 1374 |
+
Bivand RS (2022) spdep: Spatial dependence: Weighting schemes, statistics. R
|
| 1375 |
+
package version 1.2-7. https://CRAN.R-project.org/package=spdep.
|
| 1376 |
+
[8]
|
| 1377 |
+
Bivand RS, Wong DWS (2018) Comparing implementations of global and local
|
| 1378 |
+
indicators
|
| 1379 |
+
of
|
| 1380 |
+
spatial
|
| 1381 |
+
association.
|
| 1382 |
+
TEST
|
| 1383 |
+
27(3):716−748.
|
| 1384 |
+
https://doi.org/10.1007/s11749-018-0599-x.
|
| 1385 |
+
[9]
|
| 1386 |
+
Bivand RS, Pebesma E, Gómez-Rubio V (2013) Applied Spatial Data Analysis with
|
| 1387 |
+
R: Second Edition. Springer New York, NY. https://doi.org/10.1007/978-1-4614-
|
| 1388 |
+
7618-4.
|
| 1389 |
+
[10]
|
| 1390 |
+
Bivand RS, Keitt T, Rowlingson B (2022) rgdal: Bindings for the ‘Geospatial’
|
| 1391 |
+
data abstraction library. R package version 1.5-32. https://CRAN.R-
|
| 1392 |
+
project.org/package=rgdal.
|
| 1393 |
+
[11]
|
| 1394 |
+
Blöschl G, Bierkens MFP, Chambel A, Cudennec C, Destouni G, Fiori A, Kirchner
|
| 1395 |
+
JW, McDonnell JJ, Savenije HHG, Sivapalan M, et al. (2019) Twenty-three unsolved
|
| 1396 |
+
problems in hydrology (UPH)–A community perspective. Hydrological Sciences
|
| 1397 |
+
Journal 64(10):1141–1158. https://doi.org/10.1080/02626667.2019.1620507.
|
| 1398 |
+
[12]
|
| 1399 |
+
Breiman
|
| 1400 |
+
L
|
| 1401 |
+
(2001a)
|
| 1402 |
+
Random
|
| 1403 |
+
forests.
|
| 1404 |
+
Machine
|
| 1405 |
+
Learning
|
| 1406 |
+
45(1):5–32.
|
| 1407 |
+
https://doi.org/10.1023/A:1010933404324.
|
| 1408 |
+
[13]
|
| 1409 |
+
Breiman L (2001b) Statistical modeling: The two cultures. Statistical Science
|
| 1410 |
+
16(3):199–215. https://doi.org/10.1214/ss/1009213726.
|
| 1411 |
+
[14]
|
| 1412 |
+
Chen T, Guestrin C (2016) XGBoost: A scalable tree boosting system. In:
|
| 1413 |
+
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge
|
| 1414 |
+
Discovery
|
| 1415 |
+
and
|
| 1416 |
+
Data
|
| 1417 |
+
Mining.
|
| 1418 |
+
pp
|
| 1419 |
+
785–794.
|
| 1420 |
+
https://doi.org/10.1145/2939672.2939785.
|
| 1421 |
+
[15]
|
| 1422 |
+
Chen H, Chandrasekar V, Cifelli R, Xie P (2020a) A machine learning system for
|
| 1423 |
+
precipitation estimation using satellite and ground radar network observations.
|
| 1424 |
+
IEEE Transactions on Geoscience and Remote Sensing 58(2):982–994.
|
| 1425 |
+
https://doi.org/10.1109/TGRS.2019.2942280.
|
| 1426 |
+
[16] Chen S, Xiong L, Ma Q, Kim J-S, Chen J, Xu C-Y (2020b) Improving daily spatial precipitation estimates by merging gauge observation with multiple satellite-based precipitation products based on the geographically weighted ridge regression method. Journal of Hydrology 589:125156. https://doi.org/10.1016/j.jhydrol.2020.125156.
[17] Chen C, Hu B, Li Y (2021) Easy-to-use spatial random-forest-based downscaling-calibration method for producing precipitation data with high resolution and high accuracy. Hydrology and Earth System Sciences 25(11):5667–5682. https://doi.org/10.5194/hess-25-5667-2021.
[18] Chen H, Sun L, Cifelli R, Xie P (2022a) Deep learning for bias correction of satellite retrievals of orographic precipitation. IEEE Transactions on Geoscience and Remote Sensing 60:4104611. https://doi.org/10.1109/TGRS.2021.3105438.
[19] Chen S, Arrouays D, Leatitia Mulder V, Poggio L, Minasny B, Roudier P, Libohova Z, Lagacherie P, Shi Z, Hannam J, Meersmans J, Richer-de-Forges AC, Walter C (2022b) Digital mapping of GlobalSoilMap soil properties at a broad scale: A review. Geoderma 409:115567. https://doi.org/10.1016/j.geoderma.2021.115567.
[20] Chen T, He T, Benesty M, Khotilovich V, Tang Y, Cho H, Chen K, Mitchell R, Cano I, Zhou T, Li M, Xie J, Lin M, Geng Y, Li Y, Yuan J (2022c) xgboost: Extreme gradient boosting. R package version 1.6.0.1. https://CRAN.R-project.org/package=xgboost.
[21] Davies MM, Van Der Laan MJ (2016) Optimal spatial prediction using ensemble machine learning. International Journal of Biostatistics 12(1):179–201. https://doi.org/10.1515/ijb-2014-0060.
[22] Dowle M, Srinivasan A (2022) data.table: Extension of 'data.frame'. R package version 1.14.4. https://CRAN.R-project.org/package=data.table.
[23] Efron B, Hastie T (2016) Computer age statistical inference. Cambridge University Press, New York. https://doi.org/10.1017/CBO9781316576533.
[24] Egaña A, Navarro F, Maleki M, Grandón F, Carter F, Soto F (2021) Ensemble spatial interpolation: A new approach to natural or anthropogenic variable assessment. Natural Resources Research 30(5):3777–3793. https://doi.org/10.1007/s11053-021-09860-2.
[25] Fernandez-Palomino CA, Hattermann FF, Krysanova V, Lobanova A, Vega-Jácome F, Lavado W, Santini W, Aybar C, Bronstert A (2022) A novel high-resolution gridded precipitation dataset for Peruvian and Ecuadorian watersheds: Development and hydrological evaluation. Journal of Hydrometeorology 23(3):309–336. https://doi.org/10.1175/JHM-D-20-0285.1.
[26] Fouedjio F, Klump J (2019) Exploring prediction uncertainty of spatial data in geostatistical and machine learning approaches. Environmental Earth Sciences 78(1):38. https://doi.org/10.1007/s12665-018-8032-z.
[27] Friedman JH (1991) Multivariate adaptive regression splines. The Annals of Statistics 19(1):1–67. https://doi.org/10.1214/aos/1176347963.
[28] Friedman JH (1993) Fast MARS. Stanford University, Department of Statistics. Technical Report 110. https://statistics.stanford.edu/sites/g/files/sbiybj6031/f/LCS%20110.pdf.
[29] Friedman JH (2001) Greedy function approximation: A gradient boosting machine. The Annals of Statistics 29(5):1189–1232. https://doi.org/10.1214/aos/1013203451.
[30] Fulcher BD, Little MA, Jones NS (2013) Highly comparative time-series analysis: The empirical structure of time series and their methods. Journal of the Royal Society Interface 10(83):20130048. https://doi.org/10.1098/rsif.2013.0048.
[31] Georganos S, Kalogirou S (2022) A forest of forests: A spatially weighted and computationally efficient formulation of geographical random forests. ISPRS International Journal of Geo-Information 11(9):471. https://doi.org/10.3390/ijgi11090471.
[32] Georganos S, Grippa T, Niang Gadiaga A, Linard C, Lennert M, Vanhuysse S, Mboga N, Wolff E, Kalogirou S (2021) Geographical random forests: A spatial extension of the random forest algorithm to address spatial heterogeneity in remote sensing and population modelling. Geocarto International 36(2):121–136. https://doi.org/10.1080/10106049.2019.1595177.
[33] Gneiting T (2011) Making and evaluating point forecasts. Journal of the American Statistical Association 106(494):746–762. https://doi.org/10.1198/jasa.2011.r10138.
[34] Greenwell B, Boehmke B, Cunningham J, et al. (2022) gbm: Generalized boosted regression models. R package version 2.1.8.1. https://CRAN.R-project.org/package=gbm.
[35] Hastie T, Tibshirani R, Friedman J (2009) The Elements of Statistical Learning. Springer, New York. https://doi.org/10.1007/978-0-387-84858-7.
[36] He X, Chaney NW, Schleiss M, Sheffield J (2016) Spatial downscaling of precipitation using adaptable random forests. Water Resources Research 52(10):8217–8237. https://doi.org/10.1002/2016WR019034.
[37] Hengl T, Nussbaum M, Wright MN, Heuvelink GBM, Gräler B (2018) Random forest as a generic framework for predictive modeling of spatial and spatio-temporal variables. PeerJ 6(8):e5518. https://doi.org/10.7717/peerj.5518.
[38] Heuvelink GBM, Webster R (2022) Spatial statistics and soil mapping: A blossoming partnership under pressure. Spatial Statistics 50:100639. https://doi.org/10.1016/j.spasta.2022.100639.
[39] Hollister JW (2022) elevatr: Access elevation data from various APIs. R package version 0.4.2. https://CRAN.R-project.org/package=elevatr.
[40] Hsu K-L, Gao X, Sorooshian S, Gupta HV (1997) Precipitation estimation from remotely sensed information using artificial neural networks. Journal of Applied Meteorology 36(9):1176–1190. https://doi.org/10.1175/1520-0450(1997)036<1176:PEFRSI>2.0.CO;2.
[41] Hu Q, Li Z, Wang L, Huang Y, Wang Y, Li L (2019) Rainfall spatial estimations: A review from spatial interpolation to multi-source data merging. Water 11(3):579. https://doi.org/10.3390/w11030579.
[42] James G, Witten D, Hastie T, Tibshirani R (2013) An Introduction to Statistical Learning. Springer, New York. https://doi.org/10.1007/978-1-4614-7138-7.
|
| 1584 |
+
[43]
|
| 1585 |
+
Kang Y, Hyndman RJ, Smith-Miles K (2017) Visualising forecasting algorithm
|
| 1586 |
+
performance using time series instance spaces. International Journal of
|
| 1587 |
+
Forecasting 33(2):345−358. https://doi.org/10.1016/j.ijforecast.2016.09.004.
|
| 1588 |
+
[44]
|
| 1589 |
+
Kooperberg C (2022) polspline: Polynomial spline routines. R package
|
| 1590 |
+
version 1.1.20. https://CRAN.R-project.org/package=polspline.
|
| 1591 |
+
[45]
|
| 1592 |
+
Kooperberg C, Bose S, Stone CJ (1997) Polychotomous regression. Journal of the
|
| 1593 |
+
American
|
| 1594 |
+
Statistical
|
| 1595 |
+
Association
|
| 1596 |
+
92(437):117–127.
|
| 1597 |
+
https://doi.org/10.1080/01621459.1997.10473608.
|
| 1598 |
+
|
| 1599 |
+
28
|
| 1600 |
+
|
| 1601 |
+
[46]
|
| 1602 |
+
Kopczewska K (2022) Spatial machine learning: New opportunities for regional
|
| 1603 |
+
science.
|
| 1604 |
+
Annals
|
| 1605 |
+
of
|
| 1606 |
+
Regional
|
| 1607 |
+
Science
|
| 1608 |
+
68(3):713–755.
|
| 1609 |
+
https://doi.org/10.1007/s00168-021-01101-x.
|
| 1610 |
+
[47]
|
| 1611 |
+
Kuhn M (2022) caret: Classification and regression training. R package version
|
| 1612 |
+
6.0-93. https://CRAN.R-project.org/package=caret.
|
| 1613 |
+
[48]
|
| 1614 |
+
Li J, Heap AD (2014) Spatial interpolation methods applied in the environmental
|
| 1615 |
+
sciences: A review. Environmental Modelling and Software 53:173–189.
|
| 1616 |
+
https://doi.org/10.1016/j.envsoft.2013.12.008.
|
| 1617 |
+
[49]
|
| 1618 |
+
Li J, Heap AD, Potter A, Daniell JJ (2011) Application of machine learning methods
|
| 1619 |
+
to spatial interpolation of environmental variables. Environmental Modelling and
|
| 1620 |
+
Software 26(12):1647–1659. https://doi.org/10.1016/j.envsoft.2011.07.004.
|
| 1621 |
+
[50]
|
| 1622 |
+
Li W, Jiang Q, He X, Sun H, Sun W, Scaioni M, Chen S, Li X, Gao J, Hong Y (2022)
|
| 1623 |
+
Effective multi-satellite precipitation fusion procedure conditioned by gauge
|
| 1624 |
+
background fields over the Chinese mainland. Journal of Hydrology 610:127783.
|
| 1625 |
+
https://doi.org/10.1016/j.jhydrol.2022.127783.
|
| 1626 |
+
[51]
|
| 1627 |
+
Lin Q, Peng T, Wu Z, Guo J, Chang W, Xu Z (2022) Performance evaluation, error
|
| 1628 |
+
decomposition and tree-based machine learning error correction of GPM IMERG
|
| 1629 |
+
and TRMM 3B42 products in the Three Gorges reservoir area. Atmospheric
|
| 1630 |
+
Research 268:105988. https://doi.org/10.1016/j.atmosres.2021.105988.
|
| 1631 |
+
[52]
|
| 1632 |
+
Linardatos P, Papastefanopoulos V, Kotsiantis S (2020) Explainable AI: A review
|
| 1633 |
+
of
|
| 1634 |
+
machine
|
| 1635 |
+
learning
|
| 1636 |
+
interpretability
|
| 1637 |
+
methods.
|
| 1638 |
+
Entropy
|
| 1639 |
+
23(1):18.
|
| 1640 |
+
https://doi.org/10.3390/e23010018.
|
| 1641 |
+
[53]
|
| 1642 |
+
Liu X, Kounadi O, Zurita-Milla R (2022) Incorporating spatial autocorrelation in
|
| 1643 |
+
machine learning models using spatial lag and eigenvector spatial filtering
|
| 1644 |
+
features.
|
| 1645 |
+
ISPRS
|
| 1646 |
+
International
|
| 1647 |
+
Journal
|
| 1648 |
+
of
|
| 1649 |
+
Geo-Information
|
| 1650 |
+
11(4):242.
|
| 1651 |
+
https://doi.org/10.3390/ijgi11040242.
|
| 1652 |
+
[54]
|
| 1653 |
+
MacKay DJC (1992) Bayesian interpolation. Neural computation 4(3):415−447.
|
| 1654 |
+
https://doi.org/10.1162/neco.1992.4.3.415.
|
| 1655 |
+
[55]
|
| 1656 |
+
Mayr A, Binder H, Gefeller O, Schmid M (2014) The evolution of boosting
|
| 1657 |
+
algorithms: From machine learning to statistical modelling. Methods of
|
| 1658 |
+
Information in Medicine 53(6):419–427. https://doi.org/10.3414/ME13-01-
|
| 1659 |
+
0122.
|
| 1660 |
+
[56]
|
| 1661 |
+
Mega T, Ushio T, Matsuda T, Kubota T, Kachi M, Oki R (2019) Gauge-adjusted
|
| 1662 |
+
global satellite mapping of precipitation. IEEE Transactions on Geoscience and
|
| 1663 |
+
Remote
|
| 1664 |
+
Sensing
|
| 1665 |
+
57(4):1928–1935.
|
| 1666 |
+
https://doi.org/10.1109/TGRS.2018.2870199.
|
| 1667 |
+
[57]
|
| 1668 |
+
Meyer H, Pebesma E (2021) Predicting into unknown space? Estimating the area
|
| 1669 |
+
of applicability of spatial prediction models. Methods in Ecology and Evolution
|
| 1670 |
+
12(9):1620–1633. https://doi.org/10.1111/2041-210X.13650.
|
| 1671 |
+
[58]
|
| 1672 |
+
Meyer H, Pebesma E (2022) Machine learning-based global maps of ecological
|
| 1673 |
+
variables and the challenge of assessing them. Nature Communications
|
| 1674 |
+
13(1):2208. https://doi.org/10.1038/s41467-022-29838-9.
|
| 1675 |
+
[59]
|
| 1676 |
+
Meyer H, Kühnlein M, Appelhans T, Nauss T (2016) Comparison of four machine
|
| 1677 |
+
learning algorithms for their applicability in satellite-based optical rainfall
|
| 1678 |
+
retrievals.
|
| 1679 |
+
Atmospheric
|
| 1680 |
+
Research
|
| 1681 |
+
169:424–433.
|
| 1682 |
+
https://doi.org/10.1016/j.atmosres.2015.09.021.
|
| 1683 |
+
[60]
|
| 1684 |
+
Militino AF, Ugarte MD, Pérez-Goya U (2023) Machine learning procedures for
|
| 1685 |
+
daily interpolation of rainfall in Navarre (Spain). Studies in Systems, Decision and
|
| 1686 |
+
Control 445:399–413. https://doi.org/10.1007/978-3-031-04137-2_34.
|
| 1687 |
+
|
| 1688 |
+
29
|
| 1689 |
+
|
| 1690 |
+
[61]
|
| 1691 |
+
Milborrow S (2021) earth: Multivariate adaptive regression splines. R package
|
| 1692 |
+
version 5.3.1. https://CRAN.R-project.org/package=earth.
|
| 1693 |
+
[62]
|
| 1694 |
+
Montero-Manso P, Athanasopoulos G, Hyndman RJ, Talagala TS (2020) FFORMA:
|
| 1695 |
+
Feature-based forecast model averaging. International Journal of Forecasting
|
| 1696 |
+
36(1):86−92. https://doi.org/10.1016/j.ijforecast.2019.02.011.
|
| 1697 |
+
[63]
|
| 1698 |
+
Natekin A, Knoll A (2013) Gradient boosting machines, a tutorial. Frontiers in
|
| 1699 |
+
Neurorobotics 7:21. https://doi.org/10.3389/fnbot.2013.00021.
|
| 1700 |
+
[64]
|
| 1701 |
+
Nguyen P, Ombadi M, Sorooshian S, Hsu K, AghaKouchak A, Braithwaite D,
|
| 1702 |
+
Ashouri H, Rose Thorstensen A (2018) The PERSIANN family of global satellite
|
| 1703 |
+
precipitation data: A review and evaluation of products. Hydrology and Earth
|
| 1704 |
+
System Sciences 22(11):5801–5816. https://doi.org/10.5194/hess-22-5801-
|
| 1705 |
+
2018.
|
| 1706 |
+
[65]
|
| 1707 |
+
Nguyen P, Shearer EJ, Tran H, Ombadi M, Hayatbini N, Palacios T, Huynh P,
|
| 1708 |
+
Braithwaite D, Updegraff G, Hsu K, Kuligowski B, Logan WS, Sorooshian S (2019)
|
| 1709 |
+
The CHRS data portal, an easily accessible public repository for PERSIANN global
|
| 1710 |
+
satellite
|
| 1711 |
+
precipitation
|
| 1712 |
+
data.
|
| 1713 |
+
Scientific
|
| 1714 |
+
Data
|
| 1715 |
+
6:180296.
|
| 1716 |
+
https://doi.org/10.1038/sdata.2018.296.
|
| 1717 |
+
[66]
|
| 1718 |
+
Nguyen GV, Le X-H, Van LN, Jung S, Yeon M, Lee G (2021) Application of random
|
| 1719 |
+
forest algorithm for merging multiple satellite precipitation products across
|
| 1720 |
+
South
|
| 1721 |
+
Korea.
|
| 1722 |
+
Remote
|
| 1723 |
+
Sensing
|
| 1724 |
+
13(20):4033.
|
| 1725 |
+
https://doi.org/10.3390/rs13204033.
|
| 1726 |
+
[67]
|
| 1727 |
+
Papacharalampous G, Tyralis H (2020) Hydrological time series forecasting using
|
| 1728 |
+
simple combinations: Big data testing and investigations on one-year ahead river
|
| 1729 |
+
flow
|
| 1730 |
+
predictability.
|
| 1731 |
+
Journal
|
| 1732 |
+
of
|
| 1733 |
+
Hydrology
|
| 1734 |
+
590:125205.
|
| 1735 |
+
https://doi.org/10.1016/j.jhydrol.2020.125205.
|
| 1736 |
+
[68]
|
| 1737 |
+
Papacharalampous G, Tyralis H (2022a) A review of machine learning concepts
|
| 1738 |
+
and methods for addressing challenges in probabilistic hydrological post-
|
| 1739 |
+
processing
|
| 1740 |
+
and
|
| 1741 |
+
forecasting.
|
| 1742 |
+
Frontiers
|
| 1743 |
+
in
|
| 1744 |
+
Water
|
| 1745 |
+
4:961954.
|
| 1746 |
+
https://doi.org/10.3389/frwa.2022.961954.
|
| 1747 |
+
[69]
|
| 1748 |
+
Papacharalampous G, Tyralis H (2022b) Time series features for supporting
|
| 1749 |
+
hydrometeorological explorations and predictions in ungauged locations using
|
| 1750 |
+
large datasets. Water 14(10):1657. https://doi.org/10.3390/w14101657.
|
| 1751 |
+
[70]
|
| 1752 |
+
Papacharalampous G, Tyralis H, Langousis A, Jayawardena AW, Sivakumar B,
|
| 1753 |
+
Mamassis N, Montanari A, Koutsoyiannis D (2019) Probabilistic hydrological
|
| 1754 |
+
post-processing at scale: Why and how to apply machine-learning quantile
|
| 1755 |
+
regression
|
| 1756 |
+
algorithms.
|
| 1757 |
+
Water
|
| 1758 |
+
11(10):2126.
|
| 1759 |
+
https://doi.org/10.3390/w11102126.
|
| 1760 |
+
[71]
|
| 1761 |
+
Papacharalampous G, Tyralis H, Pechlivanidis IG, Grimaldi S, Volpi E (2022)
|
| 1762 |
+
Massive feature extraction for explaining and foretelling hydroclimatic time
|
| 1763 |
+
series forecastability at the global scale. Geoscience Frontiers 13(3):101349.
|
| 1764 |
+
https://doi.org/10.1016/j.gsf.2022.101349.
|
| 1765 |
+
[72]
|
| 1766 |
+
Pebesma E (2018) Simple features for R: Standardized support for spatial vector
|
| 1767 |
+
data. The R Journal 10 (1):439−446. https://doi.org/10.32614/RJ-2018-009.
|
| 1768 |
+
[73]
|
| 1769 |
+
Pebesma E (2022) sf: Simple features for R. R package version 1.0-8.
|
| 1770 |
+
https://CRAN.R-project.org/package=sf.
|
| 1771 |
+
[74]
|
| 1772 |
+
Pedersen TL (2022) ggforce: Accelerating 'ggplot2'. R package version 0.4.1.
|
| 1773 |
+
https://cran.r-project.org/package=ggforce.
|
| 1774 |
+
|
| 1775 |
+
30
|
| 1776 |
+
|
| 1777 |
+
[75]
|
| 1778 |
+
Peterson TC, Vose RS (1997) An overview of the Global Historical Climatology
|
| 1779 |
+
Network temperature database. Bulletin of the American Meteorological Society
|
| 1780 |
+
78(12):2837–2849.
|
| 1781 |
+
https://doi.org/10.1175/1520-
|
| 1782 |
+
0477(1997)078<2837:AOOTGH>2.0.CO;2.
|
| 1783 |
+
[76]
|
| 1784 |
+
Petropoulos F, Svetunkov I (2020) A simple combination of univariate models.
|
| 1785 |
+
International
|
| 1786 |
+
Journal
|
| 1787 |
+
of
|
| 1788 |
+
Forecasting
|
| 1789 |
+
36(1):110−115.
|
| 1790 |
+
https://doi.org/10.1016/j.ijforecast.2019.01.006.
|
| 1791 |
+
[77]
|
| 1792 |
+
Pierce D (2021) ncdf4: Interface to Unidata netCDF (version 4 or earlier) format
|
| 1793 |
+
data files. R package version 1.19. https://CRAN.R-project.org/package=ncdf4.
|
| 1794 |
+
[78]
|
| 1795 |
+
R Core Team (2022) R: A language and environment for statistical computing. R
|
| 1796 |
+
Foundation for Statistical Computing, Vienna, Austria. https://www.R-
|
| 1797 |
+
project.org/.
|
| 1798 |
+
[79]
|
| 1799 |
+
Rata M, Douaoui A, Larid M, Douaik A (2020) Comparison of geostatistical
|
| 1800 |
+
interpolation methods to map annual rainfall in the Chéliff watershed, Algeria.
|
| 1801 |
+
Theoretical
|
| 1802 |
+
and
|
| 1803 |
+
Applied
|
| 1804 |
+
Climatology
|
| 1805 |
+
141(3–4):1009–1024.
|
| 1806 |
+
https://doi.org/10.1007/s00704-020-03218-z.
|
| 1807 |
+
[80]
|
| 1808 |
+
Ripley BD (1996) Pattern recognition and neural networks. Cambridge
|
| 1809 |
+
University Press, Cambridge. https://doi.org/10.1017/cbo9780511812651.
|
| 1810 |
+
[81]
|
| 1811 |
+
Ripley BD (2022) nnet: Feed-forward neural networks and multinomial log-
|
| 1812 |
+
linear
|
| 1813 |
+
models.
|
| 1814 |
+
R
|
| 1815 |
+
package
|
| 1816 |
+
version
|
| 1817 |
+
7.3-18.
|
| 1818 |
+
https://CRAN.R-
|
| 1819 |
+
project.org/package=nnet.
|
| 1820 |
+
[82]
|
| 1821 |
+
Rodriguez PP, Gianola D (2022) brnn: Bayesian regularization for feed-forward
|
| 1822 |
+
neural
|
| 1823 |
+
networks.
|
| 1824 |
+
R
|
| 1825 |
+
package
|
| 1826 |
+
version
|
| 1827 |
+
0.9.2.
|
| 1828 |
+
https://CRAN.R-
|
| 1829 |
+
project.org/package=brnn.
|
| 1830 |
+
[83]
|
| 1831 |
+
Saha A, Basu S, Datta A (2021) Random forests for spatially dependent data.
|
| 1832 |
+
Journal
|
| 1833 |
+
of
|
| 1834 |
+
the
|
| 1835 |
+
American
|
| 1836 |
+
Statistical
|
| 1837 |
+
Association.
|
| 1838 |
+
https://doi.org/10.1080/01621459.2021.1950003.
|
| 1839 |
+
[84]
|
| 1840 |
+
Salmani-Dehaghi N, Samani N (2021) Development of bias-correction PERSIANN-
|
| 1841 |
+
CDR models for the simulation and completion of precipitation time series.
|
| 1842 |
+
Atmospheric
|
| 1843 |
+
Environment
|
| 1844 |
+
246:117981.
|
| 1845 |
+
https://doi.org/10.1016/j.atmosenv.2020.117981.
|
| 1846 |
+
[85]
|
| 1847 |
+
Sekulić A, Kilibarda M, Heuvelink GBM, Nikolić M, Bajat B (2020a) Random forest
|
| 1848 |
+
spatial
|
| 1849 |
+
interpolation.
|
| 1850 |
+
Remote
|
| 1851 |
+
Sensing
|
| 1852 |
+
12(10):1687.
|
| 1853 |
+
https://doi.org/10.3390/rs12101687.
|
| 1854 |
+
[86]
|
| 1855 |
+
Sekulić A, Kilibarda M, Protić D, Tadić MP, Bajat B (2020b) Spatio-temporal
|
| 1856 |
+
regression kriging model of mean daily temperature for Croatia. Theoretical and
|
| 1857 |
+
Applied Climatology 140(1-2):101-114. https://doi.org/10.1007/s00704-019-
|
| 1858 |
+
03077-3.
|
| 1859 |
+
[87]
|
| 1860 |
+
Sekulić A, Kilibarda M, Protić D, Bajat B (2021) A high-resolution daily gridded
|
| 1861 |
+
meteorological dataset for Serbia made by random forest spatial interpolation.
|
| 1862 |
+
Scientific Data 8(1):123. https://doi.org/10.1038/s41597-021-00901-2.
|
| 1863 |
+
[88]
|
| 1864 |
+
Shen Z, Yong B (2021) Downscaling the GPM-based satellite precipitation
|
| 1865 |
+
retrievals using gradient boosting decision tree approach over Mainland China.
|
| 1866 |
+
Journal
|
| 1867 |
+
of
|
| 1868 |
+
Hydrology
|
| 1869 |
+
602:126803.
|
| 1870 |
+
https://doi.org/10.1016/j.jhydrol.2021.126803.
|
| 1871 |
+
[89]
|
| 1872 |
+
Shmueli G (2010) To explain or to predict?. Statistical Science 25(3):289–310.
|
| 1873 |
+
https://doi.org/10.1214/10-STS330.
|
| 1874 |
+
|
| 1875 |
+
31
|
| 1876 |
+
|
| 1877 |
+
[90]
|
| 1878 |
+
Spearman C (1904) The proof and measurement of association between two
|
| 1879 |
+
things.
|
| 1880 |
+
The
|
| 1881 |
+
American
|
| 1882 |
+
Journal
|
| 1883 |
+
of
|
| 1884 |
+
Psychology
|
| 1885 |
+
15(1):72–101.
|
| 1886 |
+
https://doi.org/10.2307/1412159.
|
| 1887 |
+
[91]
|
| 1888 |
+
Stone CJ, Hansen MH, Kooperberg C, Truong YK (1997) Polynomial splines and
|
| 1889 |
+
their tensor products in extended linear modeling. Annals of Statistics
|
| 1890 |
+
25(4):1371–1470. https://doi.org/10.1214/aos/1031594728.
|
| 1891 |
+
[92]
|
| 1892 |
+
Sun Q, Miao C, Duan Q, Ashouri H, Sorooshian S, Hsu K-L (2018) A review of global
|
| 1893 |
+
precipitation data sets: Data sources, estimation, and intercomparisons. Reviews
|
| 1894 |
+
of Geophysics 56(1):79-107. https://doi.org/10.1002/2017RG000574.
|
| 1895 |
+
[93]
|
| 1896 |
+
Talagala TS, Li F, Kang Y (2021) FFORMPP: Feature-based forecast model
|
| 1897 |
+
performance prediction. International Journal of Forecasting 38(3):920−943.
|
| 1898 |
+
https://doi.org/10.1016/j.ijforecast.2021.07.002.
|
| 1899 |
+
[94]
|
| 1900 |
+
Talebi H, Peeters LJM, Otto A, Tolosana-Delgado R (2022) A truly spatial random
|
| 1901 |
+
forests algorithm for geoscience data analysis and modelling. Mathematical
|
| 1902 |
+
Geosciences 54(1):1-22. https://doi.org/10.1007/s11004-021-09946-w.
|
| 1903 |
+
[95]
|
| 1904 |
+
Tang T, Chen T, Gui G (2022) A comparative evaluation of gauge-satellite-based
|
| 1905 |
+
merging products over multiregional complex terrain basin. IEEE Journal of
|
| 1906 |
+
Selected Topics in Applied Earth Observations and Remote Sensing 15:5275-
|
| 1907 |
+
5287. https://doi.org/10.1109/JSTARS.2022.3187983.
|
| 1908 |
+
[96]
|
| 1909 |
+
Tao Y, Gao X, Hsu K, Sorooshian S, Ihler A (2016) A deep neural network modeling
|
| 1910 |
+
framework to reduce bias in satellite precipitation products. Journal of
|
| 1911 |
+
Hydrometeorology 17(3):931-945. https://doi.org/10.1175/JHM-D-15-0075.1.
|
| 1912 |
+
[97]
|
| 1913 |
+
Tyralis H, Papacharalampous G (2021) Boosting algorithms in energy research:
|
| 1914 |
+
A systematic review. Neural Computing and Applications 33(21):14101-14117.
|
| 1915 |
+
https://doi.org/10.1007/s00521-021-05995-8.
|
| 1916 |
+
[98]
|
| 1917 |
+
Tyralis H, Papacharalampous G (2022a) A review of probabilistic forecasting and
|
| 1918 |
+
prediction with machine learning. https://arxiv.org/abs/2209.08307.
|
| 1919 |
+
[99]
|
| 1920 |
+
Tyralis H, Papacharalampous G (2022b) scoringfunctions: A collection of
|
| 1921 |
+
scoring
|
| 1922 |
+
functions
|
| 1923 |
+
for
|
| 1924 |
+
assessing
|
| 1925 |
+
point
|
| 1926 |
+
forecasts.
|
| 1927 |
+
https://CRAN.R-
|
| 1928 |
+
project.org/package=scoringfunctions.
|
| 1929 |
+
[100]
|
| 1930 |
+
Tyralis H, Papacharalampous G, Burnetas A, Langousis A (2019a) Hydrological
|
| 1931 |
+
post-processing using stacked generalization of quantile regression algorithms:
|
| 1932 |
+
Large-scale application over CONUS. Journal of Hydrology 577:123957.
|
| 1933 |
+
https://doi.org/10.1016/j.jhydrol.2019.123957.
|
| 1934 |
+
[101]
|
| 1935 |
+
Tyralis H, Papacharalampous G, Langousis A (2019b) A brief review of random
|
| 1936 |
+
forests for water scientists and practitioners and their recent history in water
|
| 1937 |
+
resources. Water 11(5):910. https://doi.org/10.3390/w11050910.
|
| 1938 |
+
[102]
|
| 1939 |
+
Tyralis H, Papacharalampous G, Tantanee S (2019c) How to explain and predict
|
| 1940 |
+
the shape parameter of the generalized extreme value distribution of streamflow
|
| 1941 |
+
extremes
|
| 1942 |
+
using
|
| 1943 |
+
a
|
| 1944 |
+
big
|
| 1945 |
+
dataset.
|
| 1946 |
+
Journal
|
| 1947 |
+
of
|
| 1948 |
+
Hydrology
|
| 1949 |
+
574:628-645.
|
| 1950 |
+
https://doi.org/10.1016/j.jhydrol.2019.04.070.
|
| 1951 |
+
[103]
|
| 1952 |
+
Tyralis H, Papacharalampous G, Langousis A (2021) Super ensemble learning for
|
| 1953 |
+
daily streamflow forecasting: Large-scale demonstration and comparison with
|
| 1954 |
+
multiple machine learning algorithms. Neural Computing and Applications
|
| 1955 |
+
33(8):3053-3068. https://doi.org/10.1007/s00521-020-05172-3.
|
| 1956 |
+
[104]
|
| 1957 |
+
Venables WN, Ripley BD (2002) Modern Applied Statistics with S. Fourth Edition.
|
| 1958 |
+
Springer, New York. ISBN 0-387-95457-0.
|
| 1959 |
+
|
| 1960 |
+
32
|
| 1961 |
+
|
| 1962 |
+
[105]
|
| 1963 |
+
Wadoux AMJ-C, Minasny B, McBratney AB (2020) Machine learning for digital soil
|
| 1964 |
+
mapping: Applications, challenges and suggested solutions. Earth-Science
|
| 1965 |
+
Reviews 210:103359. https://doi.org/10.1016/j.earscirev.2020.103359.
|
| 1966 |
+
[106]
|
| 1967 |
+
Wickham H (2022) tidyverse: Easily install and load the 'Tidyverse'. R package
|
| 1968 |
+
version 1.3.2. https://CRAN.R-project.org/package=tidyverse.
|
| 1969 |
+
[107]
|
| 1970 |
+
Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, Grolemund
|
| 1971 |
+
G, Hayes A, Henry L,Hester J, Kuhn M, Pedersen TL, Miller E, Bache SM, Müller K,
|
| 1972 |
+
Ooms J, Robinson D, Paige Seidel DP, Spinu V, Takahashi K, Vaughan D, Wilke C,
|
| 1973 |
+
Woo K, Yutani H (2019) Welcome to the tidyverse. Journal of Open Source
|
| 1974 |
+
Software 4(43):1686. https://doi.org/10.21105/joss.01686.
|
| 1975 |
+
[108]
|
| 1976 |
+
Wickham H, Hester J, Chang W, Bryan J (2022) devtools: Tools to make
|
| 1977 |
+
developing R packages easier. R package version 2.4.5. https://CRAN.R-
|
| 1978 |
+
project.org/package=devtools.
|
| 1979 |
+
[109]
|
| 1980 |
+
Wolpert DH (1992) Stacked generalization. Neural Networks 5(2):241–259.
|
| 1981 |
+
https://doi.org/10.1016/S0893-6080(05)80023-1.
|
| 1982 |
+
[110]
|
| 1983 |
+
Wright MN (2022) ranger: A fast implementation of random forests. R package
|
| 1984 |
+
version 0.14.1. https://CRAN.R-project.org/package=ranger.
|
| 1985 |
+
[111]
|
| 1986 |
+
Wright MN, Ziegler A (2017) ranger: A fast implementation of random forests for
|
| 1987 |
+
high dimensional data in C++ and R. Journal of Statistical Software 77(1):1−17.
|
| 1988 |
+
https://doi.org/10.18637/jss.v077.i01.
|
| 1989 |
+
[112]
|
| 1990 |
+
Xie Y (2014) knitr: A Comprehensive Tool for Reproducible Research in R. In:
|
| 1991 |
+
Stodden V, Leisch F, Peng RD (Eds) Implementing Reproducible Computational
|
| 1992 |
+
Research. Chapman and Hall/CRC.
|
| 1993 |
+
[113]
|
| 1994 |
+
Xie Y (2015) Dynamic Documents with R and knitr, 2nd edition. Chapman and
|
| 1995 |
+
Hall/CRC.
|
| 1996 |
+
[114]
|
| 1997 |
+
Xie Y (2022) knitr: A general-purpose package for dynamic report generation
|
| 1998 |
+
in R. R package version 1.40. https://CRAN.R-project.org/package=knitr.
|
| 1999 |
+
[115]
|
| 2000 |
+
Xie Y, Allaire JJ, Grolemund G (2018) R Markdown: The Definitive Guide. Chapman
|
| 2001 |
+
and Hall/CRC. ISBN 9781138359338. https://bookdown.org/yihui/rmarkdown.
|
| 2002 |
+
[116]
|
| 2003 |
+
Xie Y, Dervieux C, Riederer E (2020) R Markdown Cookbook. Chapman and
|
| 2004 |
+
Hall/CRC. ISBN 9780367563837. https://bookdown.org/yihui/rmarkdown-
|
| 2005 |
+
cookbook.
|
| 2006 |
+
[117]
|
| 2007 |
+
Xiong L, Li S, Tang G, Strobl J (2022) Geomorphometry and terrain analysis: Data,
|
| 2008 |
+
methods, platforms and applications. Earth-Science Reviews 233:104191.
|
| 2009 |
+
https://doi.org/10.1016/j.earscirev.2022.104191.
|
| 2010 |
+
[118]
|
| 2011 |
+
Yang Z, Hsu K, Sorooshian S, Xu X, Braithwaite D, Verbist KMJ (2016) Bias
|
| 2012 |
+
adjustment of satellite-based precipitation estimation using gauge observations:
|
| 2013 |
+
A case study in Chile. Journal of Geophysical Research: Atmospheres
|
| 2014 |
+
121(8):3790-3806. https://doi.org/10.1002/2015JD024540.
|
| 2015 |
+
[119]
|
| 2016 |
+
Yang X, Yang S, Tan ML, Pan H, Zhang H, Wang G, He R, Wang Z (2022) Correcting
|
| 2017 |
+
the bias of daily satellite precipitation estimates in tropical regions using deep
|
| 2018 |
+
neural
|
| 2019 |
+
network.
|
| 2020 |
+
Journal
|
| 2021 |
+
of
|
| 2022 |
+
Hydrology
|
| 2023 |
+
608:127656.
|
| 2024 |
+
https://doi.org/10.1016/j.jhydrol.2022.127656.
|
| 2025 |
+
[120]
|
| 2026 |
+
Zandi O, Zahraie B, Nasseri M, Behrangi A (2022) Stacking machine learning
|
| 2027 |
+
models versus a locally weighted linear model to generate high-resolution
|
| 2028 |
+
monthly precipitation over a topographically complex area. Atmospheric
|
| 2029 |
+
Research 272:106159. https://doi.org/10.1016/j.atmosres.2022.106159.
|
| 2030 |
+
|
| 2031 |
+
33
|
| 2032 |
+
|
| 2033 |
+
[121]
|
| 2034 |
+
Zhang L, Li X, Zheng D, Zhang K, Ma Q, Zhao Y, Ge Y (2021) Merging multiple
|
| 2035 |
+
satellite-based precipitation products and gauge observations using a novel
|
| 2036 |
+
double machine learning approach. Journal of Hydrology 594:125969.
|
| 2037 |
+
https://doi.org/10.1016/j.jhydrol.2021.125969.
|
| 2038 |
+
|
B9AzT4oBgHgl3EQfTfz1/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.
BtE4T4oBgHgl3EQf5Q6j/content/2301.05322v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3fdf398e67a5e1115e9f51503d04c374fee4f4affdb2beab75cf9fdf8fb6dda
+size 1252952
BtE4T4oBgHgl3EQf5Q6j/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fc77552d9cdb7c6560c9f4cf5cd5fad2c798b310a47a2806b59cbbfed17af42
+size 3407917
BtE4T4oBgHgl3EQf5Q6j/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd3729bfa7358cd928d5e348b9940d7407fe0a125a67c6b2dcd3f91f2a8bb608
+size 127975
CtFQT4oBgHgl3EQfOzYB/content/tmp_files/2301.13276v1.pdf.txt ADDED
@@ -0,0 +1,330 @@
DISTRIBUTED SWARM INTELLIGENCE

Karthik Reddy Kanjula
School of Computing and Information
West Chester University of Pennsylvania
West Chester, PA 19383
karthikreddykanjula99@gmail.com

Sai Meghana Kolla
School of Mathematics and Computer Science
Pennsylvania State University
Harrisburg, PA 17057
szk6163@psu.edu

February 1, 2023

ABSTRACT

This paper presents the development of a distributed application that facilitates the understanding and application of swarm intelligence in solving optimization problems. The platform comprises a search space of customizable random particles, allowing users to tailor the solution to their specific needs. By leveraging the power of Ray distributed computing, the application can support multiple users simultaneously, offering a flexible and scalable solution. The primary objective of this project is to provide a user-friendly platform that enhances the understanding and practical use of swarm intelligence in problem-solving.

1 Introduction

The Particle Swarm Optimization (PSO) algorithm is an approximation algorithm that finds the best solution from all the explored feasible solutions for any problem that can be formulated as a mathematical equation. In the field of algorithms and theoretical computer science, such optimization problems are addressed by "approximation" algorithms. In this project, we built a web application that hosts a PSO algorithm with interactive features, so that anyone trying to solve a problem with PSO can leverage our distributed application with Ray to solve it.
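To make the algorithm concrete, here is a minimal, self-contained PSO sketch in plain Python. It is our own illustrative implementation (not the paper's code), and the objective, parameter values, and function names are all our choices; it minimizes the sphere function as a stand-in problem.

```python
# Minimal particle swarm optimization sketch (illustrative only).
# Minimizes f(x) = sum(x_i^2); all names and parameters are our own choices.
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    # Initialize positions uniformly in [-5, 5] and velocities at zero.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
print(best_val)  # converges close to 0 for the sphere function
```

Each particle is pulled toward its own best-known position and the swarm's best-known position, which is the "collective intelligence" the paper describes.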
2 Motivation

The wide availability of models based on neural networks and machine learning algorithms points to the future of AI development in today's technology-driven environment. Swarm Intelligence is a branch of AI adapted from nature to solve problems faced by humans.

Swarm Intelligence (S.I.) was first proposed in 1989 by Gerardo Beni and Jing Wang; as the name implies, S.I. is collective intelligence. To explain, consider a flock of birds that travel together: every individual bird can make a decision, yet all the birds in the flock communicate and arrive at a joint decision to migrate to a particular place in a particular pattern depending on the season. There are many such examples in our ecosystem that represent Swarm Intelligence, such as ant colonies, bee colonies, and schools of fish. The basic idea is to bring in a set of agents or particles which have an intelligence of their own; these intelligent systems communicate with each other and reach a common and near-optimal solution for a given problem [1].

As mentioned above, flocks of birds inspired the development of the Particle Swarm Optimization algorithm. In this algorithm, a certain number of particles work together, communicating continuously, to achieve a common goal. The applications of PSO in the real world are limitless [2].
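The cooperation between particles can be stated precisely. The standard PSO update rules from the general literature (this paper does not spell them out) are:

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1}
```

where $x_i$ and $v_i$ are the position and velocity of particle $i$, $p_i$ is its personal best position, $g$ is the swarm's global best, $w$ is the inertia weight, $c_1, c_2$ are acceleration coefficients, and $r_1, r_2$ are uniform random numbers in $[0, 1]$.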
arXiv:2301.13276v1 [cs.AI] 30 Jan 2023

A PREPRINT - FEBRUARY 1, 2023

In the next generation of AI applications, the algorithm's behaviour is understandable to the end user during interaction. These interactive applications create new and complex problems such as high processing demands and adaptability. With Ray, a distributed computing framework, new and complex system requirements such as performance and scalability can be addressed. Ray provides a unified interface for expressing task-parallel computation, which is powered by a single dynamic execution engine [3].
|
| 53 |
+
The framework we suggested for this project helps in solving problems such as energy storage optimization,
|
| 54 |
+
NP-hard problems, and others. Any such optimization problem that forms a mathematical equation
|
| 55 |
+
is solvable by reducing to this algorithm, using our framework makes it a scalable, distributed Python
|
| 56 |
+
application. The main motivation of our project is to introduce people to what swarm intelligence is
|
| 57 |
+
and how it can be achieved through PSO by providing them with a visualization of how the algorithm works.
|
| 58 |
+
3 Literature survey
The particle swarm optimization algorithm was first studied by Kennedy and Eberhart (1995); their work on bird flocking and fish schooling behavior led to the development of this type of algorithm. The term boids is a contraction of birdoid objects and is widely used to denote flocking creatures. Using the social environment concept, they described the implementation of the particle swarm optimization (PSO) method [4].

The particle swarm optimization algorithm implemented in the Python programming language is wrapped with Bokeh for plotting and Panel for dashboarding. The Panel API offers a high level of flexibility and simplicity. Many of the most popular dashboard functions are provided directly on Panel objects and uniformly across them, making them easier to work with. Furthermore, altering a dashboard's individual components, as well as dynamically adding, removing, or replacing them, is as simple as manipulating a list or dictionary in Python. A number of basic requirements drove the decision to construct an API on top of Bokeh rather than merely extend it [5].
The authors of [6] discussed a significant issue faced by many domain scientists: how to design a Python-based application that takes advantage of parallelism with inherent distributedness and heterogeneous computing. Domain scientists' usual methodology is to experiment with novel methods on tiny datasets before moving on to larger ones. A tipping point is reached when the dataset grows too large to be processed on a single node; similarly, a tipping point can be reached when accelerators are over-utilized.

One solution to the above problem is to use Ray. A worker in Ray is a stateless process that performs activities (remote functions) triggered by a driver or another process. To distribute the application, the system layer in Ray launches workers and assigns them tasks. A computationally intensive task in any algorithm requires a distributed solution to optimize performance; such tasks are identified and automatically published among workers to be solved in practice. As explained in [7], a worker solves tasks sequentially, with no local state retained between them. Ray, a distributed framework, and the basic Ray Core API patterns, such as remote functions as tasks, are used in this project to achieve distribution.
4 Design
The system design can be broken into three components. First, the implementation of the algorithm. Second, using the Bokeh and Panel libraries to develop a dashboard for interaction with and visualisation of the particle swarm optimization algorithm in a client/server approach on an assigned public network for multiple clients. Lastly, the dashboard is integrated with the Ray framework to execute code asynchronously while Ray takes care of the distribution process. This project implements a distributed web application using Ray to achieve distributed computing by parallelizing the code between assigned worker nodes.
This project is distributed in three ways:
1. Inherently distributed particle swarm optimization algorithm: As mentioned in the introduction, the Particle Swarm Optimization algorithm is inherently distributed. Each individual particle communicates with the others, and together they arrive at an optimal decision. For instance, if the problem is to find the point where x^2 + y^2 is minimum, the particles search the entire search space, each particle records the best position it has found, and based on all the results, the particles together converge on the best possible solution.

2. Client/Server based dashboard: Using the Panel server, we host the application online so that it is available to any system on the same wireless network. Every user that opens the application is a client, and the computer on which the program is running acts as the server.

3. Distributed computing using Ray: Multiple users accessing the application can increase the load on the computer on which it runs; to overcome this, the Ray framework is used for distributed computing. Ray consists of a head node connected to worker nodes; it creates jobs with process IDs and assigns them to a set of worker nodes, including the head node itself.
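The objective in item 1 can be expressed as a fitness function that each particle evaluates independently at its own position, which is what makes the evaluations easy to distribute. The following is an illustrative sketch, not code from the project:

```python
# Hypothetical fitness function for the example objective x^2 + y^2.
# Each particle evaluates this at its own position, so the evaluations
# are independent and can run in parallel across workers.
def fitness(position):
    x, y = position
    return x ** 2 + y ** 2

# A particle at the origin is optimal; others report higher error.
print(fitness((0.0, 0.0)))  # 0.0
print(fitness((3.0, 4.0)))  # 25.0
```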
4.1 System Architecture

Figure 1: Architecture of Distributed Swarm Intelligence
1. Clients: Multiple users or clients can access the application simultaneously and work with the interactive GUI.

2. User Interface: In the interactive UI, a user can individually interact with the particles of a swarm, tweak the parameters, and observe the behaviour of the particles.

3. Server: Requests from clients are received by the server, and the tasks are distributed among worker nodes.
[Figure 1: laptop, mobile, and system client devices interact with the GUI, sending swarm parameters to a Ray cluster consisting of a head node (global control, scaler) and worker nodes Worker1 through Workern, which respond with results to display.]
5 Implementation
The project is implemented in Python, which is the best fit because of its access to third-party libraries and frameworks for dashboarding and distributed computing.

• Hardware heterogeneity: The application can be accessed from any machine, irrespective of the OS.
• Resource sharing: Using Ray, multiple computers can be connected together and share resources.
• Concurrency: Multiple users can connect to the network to access the application.
• Scalability: With Ray, any number of worker nodes can be added easily to distribute the computational load.
5.1 Algorithm
The Particle Swarm Optimization algorithm is implemented in the Python programming language. Algorithm 1 below gives the pseudocode for PSO. In the algorithm, we first declare the swarm using the Particle class, which has the following properties:

pBest: the best position found so far by the particle, i.e. where the particle is fittest.
particlePosition: the particle's present position.
particleError: the particle's present error, determined by the fitness function.

The fitness function computes the value of the mathematical function at the position of the particle; this value is also called the error. For each particle, fitness is calculated at every position the particle visits. Our goal is to find the position where the value returned by the fitness function is minimum. If the present fitness is better than the particle's best fitness so far, we update the particle's best position. The global best position is the best position among all the particles in the swarm. In every iteration, the global best and particle best are updated, and all the particles move closer to the particle that gives the global best position. From there, each particle moves randomly for a particular distance; this distance is calculated as the velocity v in every iteration and depends on the learning factors c1 and c2 [8] [9].
Algorithm 1: Particle Swarm Optimization Algorithm
Result: Optimal solution for a problem
swarm = [Particle() for each of numberOfParticles];
while error has not converged to the minimum possible value do
    for p in swarm do
        fp = fitness(p.particlePosition);
        if fp is better than fitness(p.pBest) then
            p.pBest = p.particlePosition;
            p.particleError = fp;
        end
    end
    gBest = best particlePosition in swarm;
    gError = best particleError in swarm;
    for p in swarm do
        p.v = p.v + c1*rand*(p.pBest - p.particlePosition) + c2*rand*(gBest - p.particlePosition);
        p.particlePosition = p.particlePosition + p.v;
    end
end
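The pseudocode above can be rendered as a runnable Python sketch. This is a minimal illustration under our own naming, not the authors' implementation; we assume the sphere function x^2 + y^2 as the objective and add an inertia weight w, a common stabilizing factor that the pseudocode omits:

```python
import random

def fitness(position):
    # Example objective (sphere function): minimum value 0 at the origin.
    x, y = position
    return x ** 2 + y ** 2

class Particle:
    def __init__(self, bounds):
        self.position = [random.uniform(lo, hi) for lo, hi in bounds]
        self.velocity = [0.0] * len(bounds)
        self.pbest = list(self.position)      # best position seen by this particle
        self.error = fitness(self.position)   # error at pbest

def pso(num_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5,
        bounds=((-500.0, 500.0), (-500.0, 500.0))):
    swarm = [Particle(bounds) for _ in range(num_particles)]
    best = min(swarm, key=lambda p: p.error)
    gbest_pos, gbest_err = list(best.pbest), best.error
    for _ in range(iterations):
        for p in swarm:
            fp = fitness(p.position)
            if fp < p.error:                  # lower error is fitter
                p.error, p.pbest = fp, list(p.position)
            if fp < gbest_err:                # track the swarm-wide best
                gbest_err, gbest_pos = fp, list(p.position)
        for p in swarm:
            for d in range(len(p.position)):
                r1, r2 = random.random(), random.random()
                p.velocity[d] = (w * p.velocity[d]
                                 + c1 * r1 * (p.pbest[d] - p.position[d])
                                 + c2 * r2 * (gbest_pos[d] - p.position[d]))
                p.position[d] += p.velocity[d]
    return gbest_pos, gbest_err
```

Velocity clamping or bounds handling could be layered on top; apart from the inertia weight, the update rule is kept exactly as in the pseudocode.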
5.2 Bokeh & Panel
We used Panel, an open-source Python library, to create the interactive visualization and dashboard. The dashboard layout is designed with pn.Row and pn.Column to place each plot or widget in a row or column. Any widget placed in the panel is a container with a certain functionality, and the user can utilize the widgets to tweak the parameters of the particle swarm algorithm. Additionally, we deployed slider widgets using Panel's integer slider to choose the number of particles, and we provided the user with a drop-down box to choose among different mathematical functions. To plot graphs and achieve continuous streaming of particles, we used a HoloViews DynamicMap container, and the coordinates of the particles are updated every 3 seconds using a periodic callback function. Changes in the widgets are applied to the algorithm with the slider values so the plot updates accordingly. We also created buttons that, when clicked, start the swarm. Any intermediate changes during streaming are also handled. This entire user interface is hosted by the Bokeh server using Tornado, an asynchronous Python networking library.
5.3 Ray
Ray enabled us to make the application distributed with minimal code changes. We first call the ray.init function to initialize the Ray context. A ray.remote decorator on a function marks it to be executed as a task in a different process, and the .remote suffix is used to invoke such a remote function and retrieve the work done in those processes. The important concepts to understand in Ray are nodes and ports: to run the application in a distributed paradigm, we start the distribution process by launching the head node first; the worker nodes are then given the address of the head node to form a cluster, and the Ray worker nodes scale automatically with the workload of the application. Inter-process communication between worker processes is carried out via TCP ports; an additional benefit of using Ray nodes is their security.
5.4 Experimental analysis
The swarm particle visualization plot for the mathematical function x^2 + (y - 100)^2 is shown in Figure 2.

Figure 2: 50 particles solving a mathematical function

The swarm particle visualization plot for the mathematical function (x - 234)^2 + (y + 100)^2 is shown in Figure 3.
[Figure panels for Figures 2 and 3: scatter plots titled "Plot 1: PSO for a mathematical computation", with x and y axes spanning -400 to 400.]
Figure 3: 100 particles solving a mathematical function

6 Conclusion
In this project, a web application for visualising the Particle Swarm Optimization algorithm is implemented with Ray for scalability. The computation sent to the Ray worker nodes progressed effectively, and in our experimental analysis the system architecture met all the desired distributed-computing challenges. Likewise, the behaviour of swarm intelligence is now simple to understand with this application. For future research, we would like to adapt this framework to other optimization problems and evaluate their performance, and to let users input their own mathematical function in the dashboard for the particles to swarm on, along with an error plot of their function under PSO.
References
[1] Gupta, Sahil. Introduction to swarm intelligence. GeeksforGeeks, 15 May 2021. Retrieved March 5, 2022, from https://www.geeksforgeeks.org/introduction-to-swarm-intelligence/

[2] Kennedy, J.; Eberhart, R. Particle swarm optimization. Proceedings of ICNN'95 - International Conference on Neural Networks (1995), 4, 1942-1948, doi:10.1109/icnn.1995.488968.

[3] Moritz, Philipp, et al. Ray: A Distributed Framework for Emerging AI Applications. arXiv, 16 Dec 2017, arXiv:1712.05889v2.

[4] Lindfield, G.; Penny, J. Particle swarm optimization algorithms. Introduction to Nature-Inspired Optimization, 18 August 2017. Retrieved from https://www.sciencedirect.com/science/article/pii/B9780128036365000037.

[5] Rudiger, P. Panel: A high-level app and dashboarding solution for the PyData ecosystem. Medium, 3 June 2019. https://medium.com/@philipp.jfr/panel-announcement-2107c2b15f52.

[6] Shirako, J.; Hayashi, A.; Paul, S. R.; Tumanov, A.; Sarkar, V. Automatic parallelization of Python programs for distributed heterogeneous computing. arXiv, 11 March 2022. https://doi.org/10.48550/arXiv.2203.06233.
[7] Moritz, Philipp; Nishihara, Robert; Wang, Stephanie; Tumanov, Alexey; Liaw, Richard; Liang, Eric; Elibol, Melih; Yang, Zongheng; Paul, William; Jordan, Michael I.; Stoica, Ion. Ray: A Distributed Framework for Emerging AI Applications. Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), Carlsbad, CA, October 2018; pages 561-577. USENIX Association. ISBN 978-1-939133-08-3.

[8] Slowik, Adam. Swarm Intelligence Algorithms: A Tutorial. 1st ed., CRC Press, 2020.

[9] Rooy, N. Particle swarm optimization from scratch with Python. nathanrooy.github.io. Retrieved from https://nathanrooy.github.io/posts/2016-08-17/simple-particle-swarm-optimization-with-python/
CtFQT4oBgHgl3EQfOzYB/content/tmp_files/load_file.txt
ADDED
|
@@ -0,0 +1,190 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf,len=189
|
| 2 |
+
page_content='DISTRIBUTED SWARM INTELLIGENCE Karthik reddy Kanjula School of Coumputing and Information West Chester University of Pennsylvania West Chester, PA 19383 karthikreddykanjula99@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 3 |
+
page_content='com Sai Meghana Kolla School of Mathematics and Computer Science Pennsylvania state University Harrisburg, PA 17057 szk6163@psu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 4 |
+
page_content='edu February 1, 2023 ABSTRACT This paper presents the development of a distributed application that facilitates the un- derstanding and application of swarm intelligence in solving optimization problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 5 |
+
page_content=' The platform comprises a search space of customizable random particles, allowing users to tailor the solution to their specific needs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 6 |
+
page_content=' By leveraging the power of Ray distributed computing, the application can support multiple users simultaneously, offering a flexible and scalable solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 7 |
+
page_content=' The primary objective of this project is to provide a user-friendly platform that enhances the understanding and practical use of swarm intelligence in problem-solving.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 8 |
+
page_content=' 1 Introduction The Particle Swarm Optimization (PSO) algorithm is an approximation algorithm that finds the best solution from all the explored feasible solutions for any problem that can be formulated into a mathematical equation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 9 |
+
page_content=' In the field of algorithms and theoretical computer science, optimization problems are known by the name "approximation" algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 10 |
+
page_content=' In this project, we built a web application that hosts a PSO algorithm with interactive features such that any person trying to solve a problem with PSO can leverage our distributed application with Ray to solve it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 11 |
+
page_content=' 2 Motivation The wide-range availability of models based on neural networks and machine learning algorithms explain future of AI development in today’s technology-driven environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 12 |
+
page_content=' Swarm Intelligence is a branch of AI which is adapted from the nature to solve the problems faced by humans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 13 |
+
page_content=' Swarm Intelligence (S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 14 |
+
page_content='I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 15 |
+
page_content=') was first proposed in 1989 by Gerardo Beni and Jing Wang, as the name implies S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 16 |
+
page_content='I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 17 |
+
page_content=' is collective intelligence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 18 |
+
page_content=' To explain, consider a flock of birds that travel together, every individual bird can make a decision and all the birds in a flock communicate and come up with a decision to migrate to a particular place in a particular pattern depending upon the season.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 19 |
+
page_content=' There are many such examples in our ecosystem that represent Swarm Intelligence like ant colonies, bee colonies, and schools of fish.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 20 |
+
page_content=' The basic idea is to bring in a set of agents or particles which have an intelligence of their own and these intelligent systems communicate with each other and reach a common and near-optimal solution for a given problem [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 21 |
+
page_content=' As mentioned above, the flock of birds inspired developers to develop Particle Swarm Optimization algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 22 |
+
page_content=' In this algorithm, we will have a certain number of particles that will be working together by communicating continuously to achieve a common goal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 23 |
+
page_content=' The applications of PSO in the real world are limitless [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 24 |
+
page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 25 |
+
page_content='13276v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 26 |
+
page_content='AI] 30 Jan 2023 A PREPRINT - FEBRUARY 1, 2023 In the next generation of AI applications, the algorithm behaviour is understandable to the end-user when interacting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 27 |
+
page_content=' These interactive applications create new and complex problems like high processing and adaptability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 28 |
+
page_content=' With Ray, a distributed computing framework, new and complex system requirements such as performance and scalability can be addressed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 29 |
+
page_content=' Ray provides a unified interface for expressing task-parallel computation, which is powered by a single dynamic execution engine [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 30 |
+
page_content=' The framework we suggested for this project helps in solving problems such as energy storage optimization, NP-hard problems, and others.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 31 |
+
page_content=' Any such optimization problem that forms a mathematical equation is solvable by reducing to this algorithm, using our framework makes it a scalable, distributed Python application.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 32 |
+
page_content=' The main motivation of our project is to introduce people to what swarm intelligence is and how it can be achieved through PSO by providing them with a visualization of how the algorithm works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 33 |
+
page_content=' 3 Literature survey The particle swarm optimization algorithm was first studied by Kennedy and Eberhart (1995) on bird flocking and fish school behavior led to the development of this type of algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 34 |
+
page_content=' The term boids is a contraction of the term birdoid objects and is widely used to denote flocking creatures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 35 |
+
page_content=' Using the so- cial environment concept they described the implement of the particle swarm optimization (PSO) method [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 36 |
+
page_content=' The particle swarm optimization algorithm implemented using python programming language is wrapped with Bokeh for plotting and Panel for dash-boarding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 37 |
+
page_content=' The Panel API offers a high level of flexibility and simplicity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 38 |
+
page_content=' Many of the most popular dashboard functions are provided directly on Panel objects and equally across them, making them easier to deal with.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 39 |
+
page_content=' Furthermore, altering a dashboard’s individual components, as well as dynamically adding/removing/replacing them, is as simple as manipulating a list or dictionary in Python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 40 |
+
page_content=' A number of basic requirements drove the decision to construct an API on top of Bokeh rather than merely extend it [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 41 |
+
page_content=' The authors in paper [6] discussed about a significant issue faced by many domain scientists in figuring out how to design a Python-based application that takes advantage of the parallelism with inherent distributedness and heterogeneous computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 42 |
+
page_content=' Domain scientists’ normal methodology is experimenting with novel methods on tiny datasets before moving on to larger datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 43 |
+
page_content=' When the dataset size grows too enormous to be processed on a single node, a tipping point is achieved, similarly a tipping point can also be reached when accelerators are over utilized.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 44 |
+
page_content=' One of the solution to above problem is to use Ray.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 45 |
+
page_content=' A worker in a Ray is a stateless process that performs activities (remote functions) which are triggered by a driver or another process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 46 |
+
page_content=' As a process of distributing the application, the system layer in Ray launches workers and assigns them tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 47 |
+
page_content=' A computationally intensive task in any algorithm requires distributed solution to optimize performance, such tasks are critically identified and automatically published among workers to solve them practically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 48 |
+
page_content=' A worker tries to solve tasks in a sequential manner, with no local state restrained between them, was explained by [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 49 |
+
page_content=' Ray, a distributed framework and the basic Ray core API patterns like remote functions as tasks are used in this project to achieve distributive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 50 |
+
page_content=' 4 Design System design can easily be put into three components.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 51 |
+
page_content=' First, implementation of the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 52 |
+
page_content=' Second, using bokeh,panel libraries to develop a dashboard for interaction and visualisation of particle swarm optimization algorithm in a client/server approach in an assigned public network for multiple clients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 53 |
+
page_content=' Lastly, the dashboard developed is then integrated with the ray framework to execute code asynchronously while the ray framework takes care of the distribution process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 54 |
+
page_content=' This project implements a distributed web application using ray to achieve distributed computing by parallelizing the code between assigned worker nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 55 |
+
page_content=' 2 A PREPRINT - FEBRUARY 1, 2023 This project is distributed in three ways: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 56 |
+
page_content=' Inherently distributed particle swarm optimization algorithm: As mentioned in the introduction, the Particle Swarm Optimization algorithm is inherently distributed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 57 |
+
page_content=' Each individual particle communicates with one another and comes up with an optimal decision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 58 |
+
page_content=' For instance, if the problem is to find the point where x^2 + y^2 is minimum, the particles search the entire search space, each particle records the best position it has found, and based on all the results the particles together arrive at the best possible solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 59 |
+
page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 60 |
+
page_content=' Client/Server based dashboard: Using the Panel server, we host the application online, available to any system on the same wireless network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 61 |
+
page_content=' Every user that opens the application is a client, and the computer on which the program is running acts as a server.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 62 |
+
page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 63 |
+
page_content=' Distributed Computing using Ray: Multiple users accessing the application can increase the load on the computer on which it is running; to overcome this, the Ray framework is used for distributed computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 64 |
+
page_content=' A Ray cluster consists of a head node connected to worker nodes; the head node creates jobs with process IDs and schedules them on the set of worker nodes, including the head node itself.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 65 |
+
page_content=' 4.1 System Architecture' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 66 |
+
page_content=' Figure 1: Architecture of Distributed Swarm Intelligence 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 67 |
+
page_content=' Clients: Multiple users or clients can access the application simultaneously and work on the interactive GUI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 68 |
+
page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 69 |
+
page_content=' User Interface: In the interactive UI, a user can individually interact with the particles of a swarm and tweak the parameters and observe the behaviour of the particles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 70 |
+
page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 71 |
+
page_content=' Server: Requests from clients are received by the server, and the tasks are distributed among worker nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 72 |
+
page_content=' [Figure 1 diagram labels omitted: GUI with PSO parameters, Ray cluster with head node, worker nodes, global control, autoscaler, and client laptop/mobile devices.] 5 Implementation The project is implemented in Python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 73 |
+
page_content=' Python is the best fit for the project because of its access to third-party libraries and frameworks for dashboards and distributed computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 74 |
+
page_content=' Hardware Heterogeneity : The application can be accessed from any machine irrespective of the OS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 75 |
+
page_content=' Resource Sharing : Using Ray, multiple computers can be connected together and share resources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 76 |
+
page_content=' Concurrency : Multiple users can connect to the network to access the application simultaneously.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 77 |
+
page_content=' Scalability : With ray, any number of worker nodes can be added easily to distribute the computation load.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 78 |
+
page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 79 |
+
page_content='1 Algorithm The Particle Swarm Optimization algorithm is implemented in the Python programming language.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 80 |
+
page_content=' In Algorithm 1 below, the pseudocode for the PSO algorithm is given.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 81 |
+
page_content=' In the algorithm, we first declare the swarm using a Particle class, which has the following properties: pBest : the best position of the particle, where the particle is fittest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 82 |
+
page_content=' particlePosition : the particle’s present position.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 83 |
+
page_content=' particleError : the particle’s present error, determined by the fitness function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 84 |
+
page_content=' The fitness function in the algorithm computes the value of the mathematical function at the position of the particle; this value is also called the error.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 85 |
+
page_content='For each particle, the fitness is calculated for every position the particle visits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 86 |
+
page_content=' Our goal here is to find the position where the value returned by the fitness function is minimum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 87 |
+
page_content=' If the present fitness is better than the particle best fitness so far, we will update the particle’s best position.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 88 |
+
page_content=' Global best position is the best position among all the particles in the swarm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 89 |
+
page_content=' In every iteration, the global best and particle best are updated, and all the particles move closer to the particle that gives the global best position.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 90 |
+
page_content=' From there each particle moves randomly over a certain distance; this distance is calculated as the velocity v in every iteration and depends on the learning factors c1 and c2 [8] [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 91 |
+
page_content=' Algorithm 1: Particle Swarm Optimization Algorithm Result: Optimal Solution for a problem p = Particle();' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 92 |
+
page_content=' swarm = [p] * numberOfParticles;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 93 |
+
page_content=' while Error approximates to minimum possible value do for p in swarm do fp = fitness(particlePosition);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 94 |
+
page_content=' if fp is better than fitness(pBest) then pBest = p particleError = fp end end gBest = best particlePosition in swarm;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 95 |
+
page_content=' gError = best particleError in swarm;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 96 |
+
page_content=' for particle in swarm do v = v + c1*rand*(pBest - particlePosition) + c2*rand*(gBest - particlePosition); particlePosition = particlePosition + v;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 97 |
+
page_content=' end end 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
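The pseudocode in Algorithm 1 can be sketched in plain Python. This is an illustrative sketch, not the project's actual code: the inertia weight w and the coefficients used here (w = 0.7, c1 = c2 = 1.5) are assumptions chosen so the sketch converges reliably, and fitness() hard-codes the example objective x^2 + (y - 100)^2 from the paper.

```python
import random

def fitness(pos):
    # Example objective from the paper: f(x, y) = x^2 + (y - 100)^2,
    # minimized at (0, 100).
    x, y = pos
    return x ** 2 + (y - 100) ** 2

def pso(num_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-400, 400)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(num_particles)]
    vel = [[0.0, 0.0] for _ in range(num_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_err = [fitness(p) for p in pos]     # each particle's best error
    g = min(range(num_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]  # global best across the swarm

    for _ in range(iters):
        for i in range(num_particles):
            err = fitness(pos[i])
            if err < pbest_err[i]:            # better than this particle's best?
                pbest[i], pbest_err[i] = pos[i][:], err
            if err < gbest_err:               # better than the global best?
                gbest, gbest_err = pos[i][:], err
        for i in range(num_particles):
            for d in range(2):
                # Velocity update from Algorithm 1, plus an inertia weight w
                # (an addition in this sketch) to keep the swarm from diverging.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
    return gbest, gbest_err
```

Running pso() drives the swarm toward (0, 100), the minimum of the example function, with the error shrinking toward zero over the iterations.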
| 98 |
+
page_content='2 Bokeh & Panel We used Panel, an open-source Python library, to create interactive visualizations and a dashboard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 99 |
+
page_content=' The dashboard layout is designed with pn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 100 |
+
page_content='row, pn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 101 |
+
page_content='column to place a plot or widget in row & column.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 102 |
+
page_content=' Any widget placed in the panel is a container with a certain functionality, and users can utilize them to tweak the parameters of the particle swarm algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 103 |
+
page_content=' Additionally, we also deployed slider widgets using the integer sliders functionality of the panel to choose the number of particles and also facilitated the user with a drop-down box to choose from different mathematical functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 104 |
+
page_content=' To plot graphs and achieve continuous streaming of particles we used a HoloViews dynamic map container, and the coordinates of the particles are updated every 3 seconds using a periodic callback function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 105 |
+
page_content=' Changes in the widgets are applied to the algorithm with the slider values, and the plot updates accordingly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 106 |
+
page_content=' We have also created buttons that, when clicked, start the swarm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 107 |
+
page_content=' Any intermediate changes during the streaming are also handled.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 108 |
+
page_content=' This entire user interface is hosted by the Bokeh server using Tornado, an asynchronous Python networking library.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 109 |
+
page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 110 |
+
page_content='3 Ray Ray enabled us to make distributed computing possible with minimal code changes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 111 |
+
page_content=' Ray is initiated using the ray.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 112 |
+
page_content='init function to initialize the ray context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 113 |
+
page_content=' A ray.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 114 |
+
page_content='remote decorator marks a function that will be executed as a task in a different process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 115 |
+
page_content=' A .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 116 |
+
page_content='remote suffix is used to invoke such a function and later retrieve the work done in the worker processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
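The pattern described here, where a decorated function is published as a task with the .remote suffix and its result collected later (with ray.get in Ray's API), can be illustrated with a standard-library analogue. The sketch below uses a thread pool in place of Ray's worker processes and nodes, so it shows the publish-and-collect shape only; the function and variable names are assumptions, not the project's code.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness_batch(positions):
    # Evaluate f(x, y) = x^2 + (y - 100)^2 for a chunk of particles.
    # Under Ray this function would carry an @ray.remote decorator.
    return [x ** 2 + (y - 100) ** 2 for x, y in positions]

def evaluate_distributed(swarm, workers=2):
    # Split the swarm into chunks and publish one task per chunk,
    # mirroring how tasks are distributed among Ray worker nodes.
    chunks = [swarm[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fitness_batch, c) for c in chunks]  # ~ fitness_batch.remote(c)
        results = [f.result() for f in futures]                    # ~ ray.get(futures)
    return [err for chunk in results for err in chunk]
```

The scatter/gather shape is the same in Ray; Ray adds the cluster scheduling, object store, and autoscaling that a local pool cannot provide.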
| 117 |
+
page_content=' The important concepts to understand in Ray are nodes and ports. To run the application in a distributed paradigm, we start the head node first; the worker nodes are then given the address of the head node to form a cluster, and the Ray worker nodes scale automatically with the application workload.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 118 |
+
page_content=' The inter-process communication between worker processes is carried out via TCP ports; an additional benefit of using Ray nodes is their security.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 119 |
+
page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 120 |
+
page_content='4 Experimental analysis The swarm particles visualization plot for the mathematical function: x^2 + (y − 100)^2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 121 |
+
page_content=' Figure 2: 50 Particles solving a mathematical function The swarm particles visualization plot for the mathematical function: (x − 234)^2 + (y + 100)^2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 122 |
+
page_content=' [Figure residue omitted: repeated plot panels titled "Plot 1: PSO for a mathematical computation", with axes spanning −400 to 400.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 123 |
+
page_content=' Figure 3: 100 Particles solving a mathematical function 6 Conclusion A web application for visualising the Particle Swarm Optimization algorithm is implemented with Ray for scalability in this project.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 124 |
+
page_content=' The computation offloaded to Ray worker nodes progressed effectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 125 |
+
page_content=' In our experimental analysis, the system architecture met all of the desired distributed-computing challenges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 126 |
+
page_content=' Similarly, the effectiveness of swarm intelligence behaviour is now simple to understand with this application.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 127 |
+
page_content=' For future research, we would like to adapt this framework to other optimization problems and evaluate their performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 128 |
+
page_content=' We would also like to enable users to input their own mathematical function in the dashboard for the particles to swarm, and to provide an error plot of their function with PSO.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 129 |
+
page_content=' References [1] Gupta, Sahil.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 130 |
+
page_content=' Introduction to swarm intelligence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 131 |
+
page_content=' GeeksforGeeks, (2021, May 15).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 132 |
+
page_content=' Retrieved March 5, 2022, from https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 133 |
+
page_content='geeksforgeeks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 134 |
+
page_content='org/introduction-to-swarm-intelligence/ [2] Kennedy, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 135 |
+
page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 136 |
+
page_content=' Eberhart, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 137 |
+
page_content=' Particle swarm optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 138 |
+
page_content=' Proceedings of ICNN’95 - International Conference on Neural Networks (1995), 4(0), 1942−1948, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 139 |
+
page_content='1109/icnn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 140 |
+
page_content='1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 141 |
+
page_content='488968.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 142 |
+
page_content=' [3] Moritz, Philipp, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 143 |
+
page_content=' Ray: A Distributed Framework for Emerging AI Applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 144 |
+
page_content=' ArXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 145 |
+
page_content='org, ArXiv, 16 Dec 2017, arXiv:1712.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 146 |
+
page_content='05889v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 147 |
+
page_content=' [4] Lindfield, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 148 |
+
page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 149 |
+
page_content=' Penny, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 150 |
+
page_content=' Particle swarm optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 151 |
+
page_content=' In- troduction to Nature-Inspired Optimization, 18 August 2017, Retrieved from https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 152 |
+
page_content='sciencedirect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 153 |
+
page_content='com/science/article/pii/B9780128036365000037.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 154 |
+
page_content=' [5] Rudiger, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 155 |
+
page_content=' Panel: A high-level app and dashboarding solution for the PyData ecosystem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 156 |
+
page_content=' Medium, (2019, June 3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 157 |
+
page_content=', https://medium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 158 |
+
page_content='com/@philipp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 159 |
+
page_content='jfr/panel-announcement-2107c2b15f52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 160 |
+
page_content=' [6] Shirako, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 161 |
+
page_content=', Hayashi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 162 |
+
page_content=', Paul, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 163 |
+
page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 164 |
+
page_content=', Tumanov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 165 |
+
page_content=', & Sarkar, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 166 |
+
page_content=' Automatic parallelization of python programs for distributed heterogeneous computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 167 |
+
page_content=' arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 168 |
+
page_content='org, arXiv, 11 March 2022, from https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 169 |
+
page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 170 |
+
page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 171 |
+
page_content='2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 172 |
+
page_content='06233.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 173 |
+
page_content=' [7] Philipp Moritz and Robert Nishihara and Stephanie Wang and Alexey Tumanov and Richard Liaw and Eric Liang and Melih Elibol and Zongheng Yang and William Paul and Michael I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 174 |
+
page_content=' Jordan and Ion Stoica Ray: A Distributed Framework for Emerging AI Applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 175 |
+
page_content=' inproceedings of 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), October 2018, isbn 978-1-939133-08-3, Carlsbad, CA,pages 561–577, USENIX Association.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 176 |
+
page_content=' [8] Slovik, Adam.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 177 |
+
page_content=' Swarm Intelligence Algorithms: A Tutorial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 178 |
+
page_content=' 1st ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 179 |
+
page_content=', CRC PRESS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 180 |
+
page_content=' [9] Rooy, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 181 |
+
page_content=' (n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 182 |
+
page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 183 |
+
page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 184 |
+
page_content=' Particle swarm optimization from scratch with python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 185 |
+
page_content=' nathanrooy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 186 |
+
page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 187 |
+
page_content='io.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 188 |
+
page_content=' Retrieved from https://nathanrooy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 189 |
+
page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
| 190 |
+
page_content='io/posts/2016-08-17/simple-particle-swarm-optimization-with-python/ 7' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CtFQT4oBgHgl3EQfOzYB/content/2301.13276v1.pdf'}
|
DtE3T4oBgHgl3EQfVAog/content/2301.04455v1.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:69b5f5920a6ef37439e0ea2b0ba4268363f7d9014571deabc3e6a85b5b816aab
|
| 3 |
+
size 356969
|
DtE3T4oBgHgl3EQfVAog/vector_store/index.faiss
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a8702a87c56b4bb77f7bc006b581f4b8a8c9511950c5e75f219d3c48960d76c2
|
| 3 |
+
size 2228269
|
DtE3T4oBgHgl3EQfVAog/vector_store/index.pkl
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a3f9e583b830b5c382630236e6c35910e00958fd9b302b01c869160440d2439d
|
| 3 |
+
size 75825
|
F9AyT4oBgHgl3EQfrPnq/content/tmp_files/2301.00559v1.pdf.txt
ADDED
|
@@ -0,0 +1,943 @@
Precisely Modeling the Potential of a Surface Electrode Ion Trap

Qingqing Qin,1,2,3,∗ Ting Chen,1,2,3,∗ Xinfang Zhang,4 Baoquan Ou,5,2,3 Jie Zhang,1,2,3 Chunwang Wu,1,2,3 Yi Xie,1,2,3,† Wei Wu,1,2,3,‡ and Pingxing Chen1,2,3
1Institute for Quantum Science and Technology, College of Science, National University of Defense Technology, Changsha 410073, P. R. China
2Hunan Key Laboratory of Mechanism and Technology of Quantum Information, Changsha 410073, Hunan, P. R. China
3Hefei National Laboratory, Hefei 230088, P. R. China
4Institute for Quantum Information & State Key Laboratory of High Performance Computing, College of Computer Science, National University of Defense Technology, Changsha 410073, China
5Department of Physics, College of Science, National University of Defense Technology, Changsha 410073, P. R. China
(Dated: January 3, 2023)
Accurately modeling the potential generated by the electrodes of a Paul trap is of great importance for both precision metrology and quantum computing with trapped ions. For a rectangular electrode, we find a simple but highly accurate parametric expression for the spatial field distribution. Using this expression, we present a method based on multi-objective optimization to accurately characterize the spatial field strength due to the electrodes as well as the stray electric field. The method can exploit many different types of data for optimization, such as the equilibrium positions of ions in a linear string, trap frequencies, and the equilibrium position of a single ion, which greatly improves the model accuracy. The errors of the predicted secular frequencies and average ion positions are less than ±0.5% and 1.2 µm, respectively, much better than those predicted by the existing method.
I. INTRODUCTION

Trapped-ion qubits, which feature long coherence times[1, 2], high operation fidelities[3–5] and full connectivity[6], are among the most promising candidates for quantum computing. Besides, strings of ions in a linear Paul trap have also been used to enhance the signal-to-noise ratio in quantum precision metrology[7] and optical frequency standards[8, 9].

In these applications, precise control of the spatial trap field is a prerequisite. For precision metrology, energy-level homogeneity should be guaranteed for all the ions in a crystal to ensure the uniformity of line shifts[9]. Considering the Coulomb interaction between ions, the trap potential therefore needs to be carefully engineered. For quantum computing, strings of ion qubits need to be split, swapped, transported between different trapping regions, and merged by electric potential engineering, as required by the quantum charge-coupled device (QCCD) scheme[10–12]. These operations usually require that the harmonic trap frequency remain unaltered and the motional state of the logic ions remain unheated, to avoid motional-state squeezing[13] and loss of fidelity[14–16]. Recently, a general protocol based on motional squeezing and displacement for trapped-ion transport, separation, and merging was proposed[17], which requires the engineering of time-varying potentials. In another work, the QCCD scheme has been demonstrated with the help of coolant ions, which removes the requirement for heating control during transport[11]. However, after each transport stage, a time-consuming ground-state cooling stage is required, which takes up most of the computation period.

∗ These authors contributed equally to this paper.
† xieyi2015@nudt.edu.cn
‡ weiwu@nudt.edu.cn

Shuttling ions between different zones without heating remains an ultimate goal of the QCCD architecture. All of this requires precise knowledge and subtle control of the spatial potential. Since the trap potential can only be obtained as the superposition of the fields due to each electrode and the ambient sources, acquiring full information about them is of great concern.
The planar surface-electrode ion trap (SET), with the dc electrodes divided into several segments[18, 19], is an ideal platform for realizing the QCCD architecture. Usually, the whole chip is divided into several trapping zones by dc electrodes, and shuttling of ions between different zones is realized by controlling the voltages on these dc electrodes. Many methods have been developed to assist trap-geometry design and to determine the trap operating parameters for best performance. Analytic methods have been established for planar electrodes of arbitrary shape[20, 21]. In particular, analytic formulas have been derived for planar rectangular electrodes[22]. These methods provide much convenience for trap design. However, since the finite-size effect and the gaps between electrodes are ignored, their precision is not sufficient. Alternatively, numerical simulation using standard electrostatic solvers, such as the finite element method (FEM) and the boundary element method (BEM)[24], can cover these effects, although with numerical errors[23]. The FEM requires a discretization of the whole domain and usually results in an unsmooth potential. The BEM only needs to discretize the surface, so the calculation is faster and the result is much smoother. Even so, the true potential of an ion trap cannot be fully simulated, because unexpected electrode defects, patch potentials[25, 26], wire bonds, and environmental potentials caused by nearby entities in a real trap are practically impossible to simulate, not to mention time-varying effects such as coating of the trap surface by the atomic source[27, 28] and charging of the trap materials[29, 30].
arXiv:2301.00559v1 [quant-ph] 2 Jan 2023

Not surprisingly, directly measuring the true potential becomes the most accurate method. The trapped ion is itself a good field probe for ac fields[31], dc fields[32, 33], and electric-field noise[34]. By shuttling an ion along the trap axis, the trap frequency, as a characteristic of the local field, can be precisely measured[35]. On the other hand, using linear ion crystals, measuring the equilibrium spacings of the ions within the crystal allows one to derive the spatial distribution of the potential[36]. These two methods are complementary, focusing on the local and the spatial electric field, respectively. For modeling the spatial potential of a trap for the purpose of shuttling ions, the latter is much preferred. However, the fact that the ion probes are discretely distributed along the trap axis makes field interpolation inevitable for this method. To suppress the interpolation error, a higher ion density is preferred, which, however, reduces the sensitivity of the ion probes. This contradiction limits the measurement accuracy and the smoothness of the spatial potential. As a consequence, the derived electric field is inaccurate for trap-frequency calculation, which involves the second derivative of the electric potential.
Here we demonstrate an optimization-based trap-modeling method, which can derive a smooth and accurate spatial potential and predict trap frequencies with high accuracy. The method is based on two ansatzes: that the axial electrode potential can be expressed by a parametric empirical expression, and that the stray field is static and of a simple form. The former is verified by BEM simulation results, and the latter generally holds for a limited trap region and experimental period. Compared with the existing method based on linear ion crystals, the new one uses numerical optimization instead of interpolation and differentiation procedures, so numerical errors introduced by interpolation and integration are suppressed. Moreover, the multi-objective optimization method with constraints[37] makes it possible to use the measured trap frequencies and the equilibrium position of a single ion as auxiliary data, which helps to reduce some systematic errors. The advantage of the new method in model accuracy is then verified by comparing the predictions of the two different models with the experimental values.
The remainder of this paper is organized as follows. Section II reviews the existing trap-modeling methods and presents the principle of our optimization-based trap-modeling method. In Section III, we describe the experimental scheme for data collection. The main results of this paper are given in Section IV, where the field strengths of the electrodes and ambient sources are obtained using the two methods and the accuracies of the two models are compared against the experimental data. Finally, we conclude in Section V.
II. THEORY

II.1. Brief Review of the Existing Trap Modeling Methods
We first review two theoretical trap-modeling methods. A linear SET uses rf electrodes to provide the transverse confinement, and dc electrodes to provide the axial confinement and shuttling control. In our SET, all electrodes are approximately rectangular and placed in a plane. The electrostatic potential of a planar electrode can be calculated analytically. Referring to the analytic model from M. G. House's theory[22], if one supposes the electrodes extend infinitely in the plane with infinitely small gaps, the static potential of a rectangular electrode at unit voltage is of the form

φk(x,y,z) = (1/2π) [ arctan( (xk2−x)(zk2−z) / ( y·sqrt(y² + (xk2−x)² + (zk2−z)²) ) )
  − arctan( (xk1−x)(zk2−z) / ( y·sqrt(y² + (xk1−x)² + (zk2−z)²) ) )
  − arctan( (xk2−x)(zk1−z) / ( y·sqrt(y² + (xk2−x)² + (zk1−z)²) ) )
  + arctan( (xk1−x)(zk1−z) / ( y·sqrt(y² + (xk1−x)² + (zk1−z)²) ) ) ],   (1)

where (xk1, 0, zk1) and (xk2, 0, zk2) are the opposite corners of the kth electrode.
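As a numerical illustration, Eq. (1) can be evaluated directly. The sketch below (the function name is ours, not from the paper) computes the unit-voltage potential above a rectangular electrode; a useful sanity check is the infinite-plane limit, where the potential just above the electrode approaches 1 V.

```python
import math

def phi_rect(x, y, z, x1, z1, x2, z2):
    """Unit-voltage potential (Eq. 1) of a planar rectangular electrode
    with opposite corners (x1, 0, z1) and (x2, 0, z2); valid for y > 0."""
    def corner(xc, zc):
        return math.atan((xc - x) * (zc - z) /
                         (y * math.sqrt(y**2 + (xc - x)**2 + (zc - z)**2)))
    # Same sign pattern as Eq. (1): +(k2,k2) -(k1,k2) -(k2,k1) +(k1,k1)
    return (corner(x2, z2) - corner(x1, z2)
            - corner(x2, z1) + corner(x1, z1)) / (2.0 * math.pi)
```

For a finite electrode the value lies strictly between 0 and 1 and decreases with the height y, as expected from the solid-angle interpretation of the formula.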
Because the finite-size effect and the influence of the gaps between electrodes are not included in this model, the potential of an electrode is independent of the presence or absence of other electrodes around it, which does not match the actual potential. A more accurate electrostatic field can be calculated numerically by the standard BEM.
The unit-voltage potential φk is the potential created when a voltage of 1 V is applied to the kth electrode and 0 V to all the other electrodes. The total axial potential is the combination of those due to all the dc electrodes and a small axial component of the rf pseudopotential. According to the superposition principle, and neglecting the axial component of the rf pseudopotential, the total axial potential of the surface ion trap is equal to the sum of the independent potentials

Φt = Σ_{k=1}^{N} Vk φk,   (2)

where N is the number of dc electrodes and Vk is the voltage applied to the kth electrode. Therefore, the main target of modeling the trap potential is to accurately determine the form of the functions φk. Besides, there is a complex ambient potential due to patch potentials, wire bonds, atomic coating, charging, etc., which is labeled Φs in the following and also needs to be determined.

These two theoretical methods cannot handle Φs and are not precise enough for shuttling control. Therefore, we pursue a measurement method for determining the unit-voltage potentials φk.
We now briefly review the method demonstrated by M. Brownnutt et al.[36]. We consider singly charged ions confined in one dimension (1D), i.e., along the x axis with confining potential Φt. Each ion i in the stationary linear chain, at position xi, experiences a Coulomb repulsion force due to all other ions j, given by

F_ion^(i) = ( e² / (4πε₀) ) Σ_{j≠i} |xi − xj| / (xi − xj)³.   (3)

This force is equal and opposite to the external force provided by the confining potential, Fext(xi); the corresponding electric field intensity is termed Eext(xi). Using the ion positions as interpolation points, the function Eext(xi) can be numerically integrated to give the instantaneous confining potential in 1D, up to an unimportant unknown integration constant. It should be mentioned that the field Eext(xi) contains two components, Eext(xi) = Et(xi) + Es(xi), where Et(xi) is due to all the dc electrodes and is voltage dependent, and Es(xi) is due to all the other unknown sources and is voltage independent; the latter is also called the stray field. The corresponding potentials are Φt and Φs, respectively.

FIG. 1. Ion string used as the potential probe. A linear ion chain is trapped above the SET and Doppler cooled by the 397-nm and 866-nm laser light. Every time the voltage on one of the dc electrodes is changed, the ion string moves to a new equilibrium position. The extended nature of the ion string allows us to probe the spatial field distribution.
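For concreteness, the measured field values Eext(xi) follow from the force balance e·Eext(xi) = −F_ion^(i), with F_ion^(i) given by Eq. (3). A minimal sketch (the helper name is ours, not from the paper):

```python
import math

E_CHARGE = 1.602176634e-19                 # elementary charge (C)
EPS0 = 8.8541878128e-12                    # vacuum permittivity (F/m)
COUL = E_CHARGE / (4.0 * math.pi * EPS0)   # e / (4*pi*eps0)

def external_field(positions):
    """External axial field E_ext(x_i) in V/m that balances the Coulomb
    repulsion of a stationary 1D ion chain; positions in metres."""
    fields = []
    for i, xi in enumerate(positions):
        # Eq. (3): sum of sign(x_i - x_j) / (x_i - x_j)^2 over the other ions
        coul = sum(math.copysign(1.0, xi - xj) / (xi - xj) ** 2
                   for j, xj in enumerate(positions) if j != i)
        fields.append(-COUL * coul)        # e*E_ext + F_ion = 0
    return fields
```

For two ions 10 µm apart, each sees an external field of about 14.4 V/m pushing it back toward the other, with opposite signs at the two ends of the chain.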
To measure the unit-voltage potential φk of electrode k, the voltage on the electrode of interest is repeatedly varied by δ each time. The total potential Φ = Φt + Φs depends on the voltage Vk on the electrode of interest and also on the other electrodes; the constant voltages on the other electrodes are collectively termed VB. The unit-voltage potential provided by the electrode of interest can be calculated by

φk(x) = Φt(x, Vk = 1, VB = 0) = [Φ(x, Vk + δ, VB) − Φ(x, Vk, VB)] × 1 V/δ.   (4)

Since the changed voltage δ moves the positions of the ions, xi, the potential Φt(x, 1, 0) can then be calculated for all x where the two data sets, Φ(x, Vk + δ, VB) and Φ(x, Vk, VB), overlap, with the help of data interpolation. Uncertainty due to numerical errors can be significantly decreased by averaging the resulting potentials over many values of δ. The measurable region is extended by the fact that the ion string moves as δ is varied.
Error analysis indicates that the interpolation step limits the accuracy of this method: the measurement uncertainty of the field is smaller for lower ion densities, but the numerical interpolation becomes less accurate in the limit of low ion density. Besides, the averaged Φt(x, 1, 0) still cannot guarantee the smoothness of the potential, and is therefore poor in local-field accuracy.
II.2. Basic Theory of the Optimization-Based Modeling Method
+
We now propose a data processing method based on numer-
|
| 289 |
+
ical optimization, which minimize the error between model
|
| 290 |
+
prediction and the experimental data. In this method, data in-
|
| 291 |
+
terpolation is not necessary, since sampling points are chosen
|
| 292 |
+
at where the ions located. Besides, our method combined the
|
| 293 |
+
merit of analytical function and experimental measurement,
|
| 294 |
+
i.e. smooth and accurate. The optimization algorithm allows
|
| 295 |
+
the use of many different types of experimental data, which
|
| 296 |
+
further improves the model accuracy.
|
To avoid integration, the unit-voltage electric field intensity of the kth electrode, Ek(x, 1, 0), is determined directly instead of Φt(x, 1, 0). This requires a parametric expression for the electric field intensity. For a rectangular electrode, the partial derivatives of Eq. (1) provide a choice, where the 1D distribution along the trap axis can be derived by setting y to the trap height and z = 0. The parameters to be determined could be xk1 and xk2. However, this expression is too complicated for optimization purposes.
We found that the 1D unit-voltage potential curve along the x axis, derived either from Eq. (1) or by the BEM, can be well approximated by an unnormalized Lorentzian curve, with an error of only a few percent. As shown in Fig. 2(a), the unit-voltage 1D potential of the 8th electrode calculated by the BEM is fitted very well by a Lorentzian function. The axial component of the electric field strength also matches the first derivative of the Lorentzian function well, as shown in Fig. 2(b).
Therefore, we start with an ansatz for the parametric expression of the unit-voltage potential of the kth rectangular electrode:

φk(x) = Ak γk / ( (x − xk)² + γk² ).   (5)
The free parameters Ak and γk are to be determined, and xk is the center position of the kth electrode. The x component of the electric field intensity can then be calculated as Ek(x) = −∂φk(x)/∂x. This set of parametric functions is then used to express the x component of the total electric field intensity, E(x) = Σ_{k=1}^{N} Vk Ek(x) + Es(x). The basic idea of the optimization method is to minimize the sum of squared errors between the predicted and measured trapping forces Fext(xi) over all the ions under all the different voltage settings.

FIG. 2. Lorentzian fit of the 1D unit-voltage potential curve along the x axis. (a) The potential curve and (b) the axial component of the electric field strength. The blue solid lines are calculated by the BEM, and the red dashed lines are derived from the fitted Lorentzian function.
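The ansatz of Eq. (5) and the field Ek(x) = −∂φk/∂x it implies are simple to evaluate in closed form. The sketch below (the parameter values are illustrative, not fitted trap data) checks the analytic derivative against a finite difference:

```python
def phi_lorentz(x, A, gamma, xk):
    """Unit-voltage axial potential ansatz of Eq. (5)."""
    return A * gamma / ((x - xk) ** 2 + gamma ** 2)

def E_lorentz(x, A, gamma, xk):
    """Axial field E_k(x) = -d(phi_k)/dx of the Lorentzian ansatz."""
    return 2.0 * A * gamma * (x - xk) / ((x - xk) ** 2 + gamma ** 2) ** 2
```

The field vanishes at the electrode center xk and changes sign across it, which is what makes a single dc electrode push the ion string toward or away from itself.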
We have to assume that the stray electric field remains unchanged during the measurement. Since the trap region is relatively small, we take the second ansatz that the axial distribution of the stray electric field is of the form

Es(x) = a x² + b x + c,   (6)

where a, b and c are undetermined parameters.
Besides the equilibrium positions of the ion string, the secular-motion frequency data at different voltage settings can also be used to determine the model parameters. Unlike the equilibrium position of a trapped ion, the secular-motion frequency is related to the second derivative of the potential at the location of the potential minimum, D(xi) = Σ_{k=1}^{N} Vk Dk(xi) + Ds(xi), as follows:

ωx = sqrt( e D(xi) / Mion ),   (7)

with xi the equilibrium position, and the notation Dk(xi) = ∂²φk(x)/∂x² |_{x=xi}, Ds(xi) = ∂²Φs(x)/∂x² |_{x=xi}.
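Eq. (7) converts the local curvature of the total potential into a secular frequency. A sketch for a 40Ca+ ion (the mass uses the standard atomic mass constant; the curvature value is illustrative):

```python
import math

E_CHARGE = 1.602176634e-19        # C
M_ION = 40 * 1.66053906660e-27    # kg, approximate mass of 40Ca+

def secular_frequency(curvature):
    """Axial secular frequency in Hz from the total potential curvature
    D = d^2(Phi)/dx^2 (in V/m^2) at the minimum, via omega = sqrt(e*D/M)."""
    return math.sqrt(E_CHARGE * curvature / M_ION) / (2.0 * math.pi)
```

Because ωx scales as the square root of the curvature, quadrupling D doubles the secular frequency; this square-root sensitivity is why frequency data constrain the second derivative of the model so tightly.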
Furthermore, the equilibrium position of a single ion trapped under certain voltages can be used as a constraint on the solution. This position xi is determined in the trap model by the root of E(xi) = 0. Compared with the ion-string data set, the single-ion data set is much more accurate, since the error only comes from the position uncertainty of the ion itself; but it is poorer in spatial extension.
|
| 406 |
+
Using data sets of different types and characteristics will
|
| 407 |
+
modify the local field precision. To fully utilizing all these
|
| 408 |
+
data sets, the modeling process can now be summarized as a
|
| 409 |
+
multi-objective optimization problem with the objective func-
|
| 410 |
+
tion:
|
| 411 |
+
t1 = ∑
|
| 412 |
+
i, j
|
| 413 |
+
|�Eext(Uj,x j,i)−Es(xj,i)−
|
| 414 |
+
N
|
| 415 |
+
∑
|
| 416 |
+
k=1
|
| 417 |
+
Vj,kEk(x j,i)|2
|
| 418 |
+
t2 = ∑
|
| 419 |
+
j
|
| 420 |
+
���� �ωx(Uj,xj)−
|
| 421 |
+
�
|
| 422 |
+
eDs(xj)
|
| 423 |
+
M
|
| 424 |
+
+
|
| 425 |
+
N
|
| 426 |
+
∑
|
| 427 |
+
k=1
|
| 428 |
+
eVj,kDk(x j)
|
| 429 |
+
M
|
| 430 |
+
����
|
| 431 |
+
2
|
| 432 |
+
subject to
|
| 433 |
+
|Es(xj)+∑N
|
| 434 |
+
k=1Vj,kEk(xj)| ≤ ∆�E.
|
| 435 |
+
Where, xj,i (xj) is the position of ith ion in a linear chain
|
| 436 |
+
(i omitted for a single ion) under the jth voltage settings Uj,
|
| 437 |
+
in which the kth electrode is of the voltage Vj,k, �Eext and �ωx
|
| 438 |
+
are the position dependent measured electric field intensity
|
| 439 |
+
and secular frequency under certain voltage settings, N is the
|
| 440 |
+
number of electrodes. The undetermined parameters Ak and
|
| 441 |
+
γk are contained in the expressions of Es, Ek, Ds and Dk.
|
| 442 |
+
The constraint restrict the predicted position of a single ion
|
| 443 |
+
located at the measured xi, with a field intensity uncertainty
|
| 444 |
+
∆�E = M �ω2
|
| 445 |
+
x ∆x/e due to the random error of the ion position
|
| 446 |
+
∆x.
|
The number of undetermined parameters is 2N + 3, which increases linearly with the number of electrodes involved. For cases with fewer than 10 working electrode pairs, as illustrated in this work, the problem can be solved by a global optimization algorithm such as Differential Evolution. For larger numbers of electrodes, we suggest that the electrodes be divided into several groups, each of which can trap ions by itself and can then be experimentally modeled separately. In this way, the optimization process is more efficient. Besides, the experimental period is shorter, so the assumption of a constant stray field is better satisfied.
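As a sketch of this optimization stage, the snippet below recovers the Lorentzian parameters (Ak, γk) of a single electrode from synthetic field samples with a minimal Differential Evolution loop. It is an illustrative stand-in only: in practice a library implementation (e.g. scipy.optimize.differential_evolution) would be applied to the full objectives t1 and t2 with the single-ion constraint, and all names and values here are ours.

```python
import math, random

def E_lorentz(x, A, gamma, xk=0.0):
    # axial field E = -dphi/dx of the Lorentzian ansatz, Eq. (5)
    return 2.0 * A * gamma * (x - xk) / ((x - xk) ** 2 + gamma ** 2) ** 2

def cost(params, data):
    # t1-like sum of squared errors between model and "measured" field samples
    A, gamma = params
    return sum((E - E_lorentz(x, A, gamma)) ** 2 for x, E in data)

def diff_evolution(fun, bounds, data, pop_size=20, steps=300,
                   F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin global optimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [fun(p, data) for p in pop]
    for _ in range(steps):
        for i in range(pop_size):
            # mutate three distinct other members, then crossover with target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            c_new = fun(trial, data)
            if c_new <= costs[i]:          # greedy selection
                pop[i], costs[i] = trial, c_new
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

With field samples spanning a few hundred micrometres around the electrode center, the two-parameter fit is well conditioned and the DE loop converges to the generating parameters.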
III. EXPERIMENTAL SCHEME
Our linear SET is a "five-wire" trap; the apparatus is described in reference [38]. The trap consists of fifteen pairs of dc electrodes, as shown in Fig. 3. The dc electrodes, named 1a(b) to 15a(b), are used for the axial confinement; the other electrodes, RF1(2) and GND, provide the transverse confinement. The radio frequency applied to the trap is about Ωrf = 2π × 22.7 MHz and leads to a transverse trap frequency of about 2π × 2.6 MHz. Such tight confinement allows us to push the ion crystal along the trap axis without varying the trapping height too much. The axial confining potential is provided by 9 channels of the DAC device, with an output range of −10 V to 10 V. Only the central nine (i.e., 4a(b) to 12a(b)) of the fifteen pairs of dc electrodes are used, with the remaining pairs grounded.
The 40Ca+ ions are loaded by three-step photo-ionization of the Ca atoms using 423-nm and 732-nm laser light[39], after heating the atom oven. A linear chain of 40Ca+ ions is confined in an anharmonic potential along the trap axis. The minimum spacing between adjacent ions is kept above 10 µm to ensure the field-sensing sensitivity. The linear chain is Doppler cooled with 397-nm and 866-nm laser light. We have two 397-nm laser beams: one is along the (1,0,0) direction, which provides cooling in the x direction; the other is slightly tilted from the (1,0,1) direction, providing most of its cooling component in the z and x directions and only a little in the y direction. This is not important, since the sensitive surface of the camera is perpendicular to this direction. Cooling the ions to the Doppler limit is not necessary, but minimizing the micromotion is important. When identical voltages are applied to the "ia" and "ib" electrodes, micromotion is negligible in the z direction in our trap, since we found that the positions of the ions observed on the camera are independent of the rf power. Coarse micromotion reduction in the y direction is achieved by adjusting the height of the ions above the trap until the images of the individual ions are best localized on the camera. The number of ions in the chain decreases gradually due to collisions with residual gas in the vacuum chamber; the loading process is relaunched to keep the ion number within 6 to 19 in an experiment.

FIG. 3. Schematic diagram of the "five-wire" linear ion trap. The dc electrodes located in the y = 0 plane are labeled 1a(b) to 15a(b) above (below) the radio-frequency electrodes RF1(2) and GND.
Each pair of electrodes labeled "ia" and "ib" (4 ≤ i ≤ 12) is supplied with the same voltage, and the unit-voltage potentials are determined by pairs. In the crystal data-set acquisition stage, the voltage on each ith pair is repeatedly updated with a voltage increment of δi ∼ 0.02 V while keeping all the other voltages unchanged, pushing the ion crystal across the region of interest (∼ 280 µm, limited by the beam width of the diagonal 397-nm laser). In this way, the unit-voltage electric field intensity of the electrode pair "ia" and "ib" can be calculated as a whole. Note that keeping δi constant for a specific ith electrode and the other voltages constant is necessary for the interpolation method, but not for ours; instead, to cover a wider operating voltage range and mitigate systematic errors, changing the voltages on different electrodes simultaneously is preferred. In contrast, recording all the voltages on each electrode is necessary for the latter method but not for the former. We follow all the requirements of both methods in the experiment, such that the results of the two methods can be compared using the same set of experimental data.
| 525 |
+
Every time the voltage of a dc electrode is updated, an image of the linear ion crystal is taken by an electron-multiplying charge-coupled device (EMCCD, iXon Ultra 888). The custom-made lens provides about 19× magnification, which results in a resolution of 0.676 µm per pixel. The exact magnification of the system is calibrated by taking the image of a trap electrode with known width, and checked against the image of two trapped ions, whose distance can be precisely calculated from the measured trap frequency.
The position of an ion in the crystal under voltages Uj is determined by a 2D Gaussian fit. We first derive the center-of-mass position of the crystal; the image is then divided into sections, each containing only one ion, with the dividing lines placed midway between adjacent ions. A 2D Gaussian fit is then applied to each section. The position error, estimated from the fitting quality, is less than 0.12 µm. These positions xj,i are used to calculate the electric field intensity Eext(Uj, xj,i) by Eq. (3) and E = F/e.
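The per-section 2D Gaussian fit described above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code: the 0.676 µm pixel size is taken from the text, while the function and parameter names are our own.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """2D Gaussian evaluated on a flattened pixel grid."""
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))
    return (g + offset).ravel()

def fit_ion_position(section, pixel_size=0.676):
    """Fit one single-ion image section; return (x, y) in micrometres."""
    ny, nx = section.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Initial guess from the brightest pixel
    iy, ix = np.unravel_index(np.argmax(section), section.shape)
    p0 = [section.max() - section.min(), ix, iy, 2.0, 2.0, section.min()]
    popt, _ = curve_fit(gauss2d, (x, y), section.ravel(), p0=p0)
    return popt[1] * pixel_size, popt[2] * pixel_size
```

In practice each section holds exactly one ion, so a single-peak fit with a flat background offset is sufficient.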
The procedure for acquiring the equilibrium position xj of a single ion under certain voltages Uj is similar. After changing the voltage settings, an image of the single trapped ion is taken, and a 2D Gaussian fit is used to determine the ion position. During the measurement, the exact positions of both the ion trap and the imaging system are kept unchanged.
The secular frequencies ωx(Uj, xj) are then measured by resonant excitation, with the equilibrium positions and voltage settings recorded at the same time. The excitation signal, provided by a sine-wave generator, is connected to the outermost dc electrode. To achieve the utmost accuracy in the secular frequency, we use a single trapped ion and a very weak resonant excitation signal. The fluorescence level changes when the excitation frequency sweeps across the resonance point. The measurement uncertainty is less than ±0.5 kHz.
In our experiment, the number of undetermined parameters is as large as 21; therefore, the data sets should be large enough to reduce the parameter uncertainty. We take over 30 pictures for each pair of electrodes under different voltages, each picture containing 6 to 19 ions, together with 20 secular frequencies and 2 single-trapped-ion positions (more would be better). The total number of data points is up to 3484, all of which are used in the optimization method. The interpolation method, however, can only make use of part of them. Apart from the secular frequencies and the single-trapped-ion positions, the position data near the ends of the ion chain are not useful, since the number of overlapping samples there is not enough for averaging to reduce the random error. For comparison, both the interpolation method and our optimization method are used to derive the unit-voltage field intensity for each pair of dc electrodes and also for the stray field.
IV. RESULTS
We first use the interpolation method proposed by M. Brownnutt et al. [36] to calculate the unit-voltage electric field intensity of the dc electrodes by pairs, using only the ion-crystal data set. As shown by the black dotted lines in Fig. 4, random fluctuation of the derived field intensity is obvious, especially at the two ends of the region, where the samples for averaging are very few. To model the trap better, we smooth these curves by fitting them with the Lorentz function according to Eq. (5), restricted to data within −110 ∼ 110 µm to avoid obvious errors, as shown by the blue lines in Fig. 4.
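The smoothing step can be sketched as below. Eq. (5) is not reproduced in this excerpt, so a generic Lorentzian with a constant offset is assumed here, and the discrete "interpolation method" output is replaced by synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(x, a, x0, gamma, c):
    """Generic Lorentzian with offset (stand-in for Eq. (5))."""
    return a * gamma**2 / ((x - x0)**2 + gamma**2) + c

# Synthetic noisy field-intensity samples, restricted to (-110, 110) um
rng = np.random.default_rng(0)
x = np.linspace(-110.0, 110.0, 56)
e_raw = lorentz(x, 8.0, 15.0, 60.0, 4.0) + rng.normal(0.0, 0.1, x.size)

# Fit to obtain the smoothed unit-voltage field curve
p0 = [e_raw.max() - e_raw.min(), 0.0, 50.0, e_raw.min()]
popt, _ = curve_fit(lorentz, x, e_raw, p0=p0)
e_smooth = lorentz(x, *popt)
```

Restricting the fit window, as the authors do, avoids the poorly averaged samples at the chain ends dominating the fit.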
Also shown in this figure, the red lines are derived by the newly proposed optimization method, using all the collected data without discarding the ends. The optimization targets t1 and t2 are combined and balanced with a weighting factor. Two positions of a single trapped ion under different voltages are used to constrain the solution.
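A minimal sketch of how two residual targets can be balanced by a weighting factor. The actual definitions of t1 (positions) and t2 (frequencies) are given earlier in the paper and are not shown in this excerpt, so they are replaced here by toy residuals on synthetic data; all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: recover two parameters from two data types, mimicking how
# position residuals (t1) and frequency residuals (t2) are combined.
rng = np.random.default_rng(1)
xs = np.linspace(-1.0, 1.0, 30)
true = np.array([2.0, -0.5])
pos_data = true[0] * xs + true[1] + rng.normal(0.0, 0.01, xs.size)  # "t1" data
freq_data = true[0]**2 + rng.normal(0.0, 0.01)                      # "t2" datum

def target(p, w=10.0):
    t1 = np.sum((p[0] * xs + p[1] - pos_data)**2)  # position residuals
    t2 = (p[0]**2 - freq_data)**2                  # frequency residual
    return t1 + w * t2                             # weighted combination

res = minimize(target, x0=[1.0, 0.0])
```

The weight w trades off the two data types; in the paper it is chosen to balance targets t1 and t2 against each other.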
FIG. 4. Unit-voltage electric field intensity curves of electrodes 4−12 derived by the different methods. The curve corresponding to the ith electrode is labeled by the number i on the left. The black solid lines with dots are derived by the interpolation method. The discrete data within the range (−110, 110) µm are fitted using the Lorentz function Eq. (5) to give the blue solid lines. The red solid lines are derived by the optimization method, with all the experimental data included.
The stray electric field can also be derived. In the optimization method it is solved for directly, whereas in the interpolation method it has been subtracted as a background. In the latter case, we calculate the residual between the measured electric field and the field predicted once all the unit-voltage electric field intensities have been derived, and all the data are then taken into account by the optimization method to determine the parameters of the stray field in Eq. (6). The stray electric field strength along the axis derived separately by the two methods is shown in Fig. 5. It is hard to say which curve is more accurate at this point. Both curves indicate that the main source of the stray field is not far from the trap center. It may come from a constant voltage offset on a certain electrode or from the light-induced charging effect caused by the laser beams.

For simplicity, the unit-voltage field intensities of each electrode given by the blue (red) curves in Fig. 4, together with the stray field given by the blue (red) curve in Fig. 5, are referred to as the trap model established by the interpolation (optimization) method.
FIG. 5. The stray electric field intensity Es. The blue dashed line (red solid line) is derived using the trap model according to the interpolation (optimization) method.

To assess the accuracy of the derived trap models, the equilibrium positions for a certain number of ions and the secular frequencies under the experimental voltages are calculated using the two trap models and compared with the measured results. With the trap models derived above, one can simulate the equilibrium position of each ion in a linear chain either by the simulated annealing method [40] or by molecular dynamics simulation [41]. We use the velocity-Verlet algorithm for the 1D molecular dynamics simulation, and large damping is applied to speed up the equilibration.
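The damped velocity-Verlet relaxation can be sketched as follows. The trap frequency, damping constant, and step counts are illustrative choices, not the authors' values, and the trap is taken as purely harmonic rather than the measured anharmonic potential. For two ions the converged spacing can be checked against the analytic result 2u with u = (kQ²/(4Mω²))^(1/3).

```python
import numpy as np

Q = 1.602176634e-19      # ion charge (C)
M = 40 * 1.66053907e-27  # 40Ca+ mass (kg)
K = 8.9875517923e9       # Coulomb constant (N m^2 / C^2)

def axial_equilibrium(n_ions, omega_z=2*np.pi*300e3, gamma=5e-20,
                      dt=1e-8, steps=100000):
    """Find 1D equilibrium positions by damped velocity-Verlet integration."""
    x = np.linspace(-1.0, 1.0, n_ions) * 10e-6  # initial guess, metres
    v = np.zeros(n_ions)

    def force(x):
        f = -M * omega_z**2 * x                 # harmonic trap force
        d = x[:, None] - x[None, :]             # pairwise separations
        np.fill_diagonal(d, np.inf)             # no self-interaction
        f += K * Q**2 * np.sum(np.sign(d) / d**2, axis=1)  # Coulomb repulsion
        return f

    a = force(x) / M
    for _ in range(steps):
        x = x + v*dt + 0.5*a*dt*dt
        # Damping uses the previous-step velocity (semi-implicit shortcut);
        # good enough here since we only want the equilibrium configuration.
        a_new = (force(x) - gamma*v) / M
        v = v + 0.5*(a + a_new)*dt
        a = a_new
    return x
```

With strong damping the chain settles to the stationary point of the potential, which velocity-Verlet preserves exactly, so the residual discretization error in the positions is negligible.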
FIG. 6. Errors of the x positions and the axial secular motion frequencies predicted by the two different trap models. (a) Mean errors of the predicted x positions. The position errors of ions belonging to the same voltage-varying electrode are averaged and shown with the corresponding electrode index. (b) Errors of the axial secular frequencies, with the x axis representing the index of the measured data. The blue dashed line with diamonds (red solid line with dots) corresponds to the trap model derived by the interpolation (optimization) method.
For each electrode k, we choose five voltage settings with different Vk for the molecular dynamics simulation. The five chosen Vk include the maximum and minimum experimental values, with the spacings between them as even as possible. The simulated equilibrium positions are then compared with the measured ones. The position errors of the ions belonging to the same voltage-varying electrode are averaged and shown in Fig. 6(a). Obviously, the errors are generally smaller using the model derived by the optimization method than using the one derived by interpolation. The worst error is about 1.2 µm for the optimization method and 2.2 µm for the interpolation method. We also found that the errors are relatively large for the 5th, 8th, and 12th electrodes in both trap models, indicating that some systematic errors remain in these models. We suspect they come from the assumption that the stray electric field stays constant during the data-acquisition period: the Ca oven, as well as the mW-level 423-nm photoionization laser, is repeatedly turned on to replenish the 40Ca+ ions, and these operations may change the status of the atomic coating and the photo-induced charging, leading to variation of the stray field. This is the major drawback of the optimization method.
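Selecting five test voltages that include the experimental extremes with spacings as even as possible can be sketched as below (function and variable names are ours):

```python
import numpy as np

def pick_settings(voltages, n=5):
    """Pick n recorded voltages spanning min..max as evenly as possible."""
    v = np.sort(np.asarray(voltages))
    targets = np.linspace(v[0], v[-1], n)        # ideal evenly spaced values
    idx = np.unique([np.argmin(np.abs(v - t)) for t in targets])
    return v[idx]                                # nearest recorded settings
```

Snapping to the nearest recorded setting matters because only voltages actually applied in the experiment have matching measured positions.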
Fig. 6(b) shows the relative errors of the predicted axial secular frequencies calculated using Eq. (7) for the two different models. Note that the secular frequencies of the first 20 points are used to calculate target t2 for the optimization, while the last 11 serve only as a check. The general trends of the two curves are very similar, but the errors derived by the interpolation method have an offset of about 0.75%. There are good reasons to believe that the contribution of optimization target t2 provides the suppression of this offset error. The trap model derived by the optimization method shows high accuracy in predicting the secular motion frequency, with all errors below 0.5%. It is also robust when the trapping conditions are extended beyond the experimental region in which the trap model was derived: for example, the secular frequencies used for optimization range from 190 to 380 kHz, and when this range is extended to 600 kHz, the error of the predicted frequency is still within 0.5%.
V. CONCLUSION AND DISCUSSION
A method has been presented to derive a smooth and accurate SET potential model based on multi-objective optimization. This method combines the advantages of BEM simulation and experimental measurement, namely high curve smoothness and high model accuracy. It naturally allows utilizing many different types of data, such as the positions of ions in strings, secular frequencies, and positions of single trapped ions under different trapping voltages. It can therefore mitigate systematic errors from different sources and promises higher accuracy in the prediction of trap frequency and spatial field than any existing method. This higher accuracy is verified by comparing the errors of the predicted equilibrium positions and secular frequencies with those derived by the existing interpolation method.
Our method relies on a parametric expression of the electric field intensity. The Lorentz function is found to be accurate enough for the rectangular electrodes in this work. Although the method is developed for the SET system, we believe it can also work for segmented 3D traps, provided the empirical expression of the electric potential is replaced. Our method generally requires that the stray field stay constant during the data-acquisition period; the 1D stray electric field intensity can then be determined. If too many electrodes are involved, the global optimization algorithm becomes less efficient and the experimental period lasts longer, in which case the stray field is more likely to change. This can be solved by dividing the electrodes into several groups and modeling each group separately.
In principle, this method can be extended to determine 2D or even 3D potentials. The ability to establish an accurate trap model provides a practical tool for precise control of the trapping potential, which may find application in ion transport and multi-ion-based quantum precision metrology.
ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China under Grants No. 11904402 and No. 12204543, the Innovation Program for Quantum Science and Technology (2021ZD0301605), and the National Natural Science Foundation of China under Grants No. 12004430, No. 12074433, No. 12174447, and No. 12174448.
[1] Y. Wang, M. Um, J. Zhang, S. An, M. Lyu, J.-N. Zhang, L.-M. Duan, D. Yum, and K. Kim, Nature Photonics 11, 646 (2017).
[2] P. Wang, C.-Y. Luan, M. Qiao, M. Um, J. Zhang, Y. Wang, X. Yuan, M. Gu, J. Zhang, and K. Kim, Nature Communications 12, 1 (2021).
[3] C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, Phys. Rev. Lett. 117, 060504 (2016).
[4] J. P. Gaebler, T. R. Tan, Y. Lin, Y. Wan, R. Bowler, A. C. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 117, 060505 (2016).
[5] R. Srinivas, S. Burd, H. Knaack, R. Sutherland, A. Kwiatkowski, S. Glancy, E. Knill, D. Wineland, D. Leibfried, A. C. Wilson, et al., Nature 597, 209 (2021).
[6] N. M. Linke, D. Maslov, M. Roetteler, S. Debnath, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Proceedings of the National Academy of Sciences 114, 3305 (2017).
[7] O. Hosten, N. J. Engelsen, R. Krishnakumar, and M. A. Kasevich, Nature 529, 505 (2016).
[8] V. Bužek, R. Derka, and S. Massar, Phys. Rev. Lett. 82, 2207 (1999).
[9] A. D. Ludlow, M. M. Boyd, J. Ye, E. Peik, and P. O. Schmidt, Rev. Mod. Phys. 87, 637 (2015).
[10] J. P. Home, D. Hanneke, J. D. Jost, J. M. Amini, D. Leibfried, and D. J. Wineland, Science 325, 1227 (2009).
[11] J. M. Pino, J. M. Dreiling, C. Figgatt, J. P. Gaebler, S. A. Moses, M. Allman, C. Baldwin, M. Foss-Feig, D. Hayes, K. Mayer, C. Ryan-Anderson, and B. Neyenhuis, Nature 592, 209 (2021).
[12] D. Kielpinski, C. Monroe, and D. Wineland, Nature 417, 709 (2002).
[13] H. Fürst, M. H. Goerz, U. Poschinger, M. Murphy, S. Montangero, T. Calarco, F. Schmidt-Kaler, K. Singer, and C. P. Koch, New Journal of Physics 16, 075007 (2014).
[14] B. P. Ruzic, T. A. Barrick, J. D. Hunker, R. J. Law, B. K. McFarland, H. J. McGuinness, L. P. Parazzoli, J. D. Sterk, J. W. Van Der Wall, and D. Stick, Phys. Rev. A 105, 052409 (2022).
[15] R. Bowler, J. Gaebler, Y. Lin, T. R. Tan, D. Hanneke, J. D. Jost, J. P. Home, D. Leibfried, and D. J. Wineland, Phys. Rev. Lett. 109, 080502 (2012).
[16] A. Walther, F. Ziesel, T. Ruster, S. T. Dawkins, K. Ott, M. Hettrich, K. Singer, F. Schmidt-Kaler, and U. Poschinger, Phys. Rev. Lett. 109, 080501 (2012).
[17] R. T. Sutherland, S. C. Burd, D. H. Slichter, S. B. Libby, and D. Leibfried, Phys. Rev. Lett. 127, 083201 (2021).
[18] J. Chiaverini, R. B. Blakestad, J. Britton, J. D. Jost, C. Langer, D. Leibfried, R. Ozeri, and D. J. Wineland, Quantum Info. Comput. 5, 419 (2005).
[19] S. Seidelin, J. Chiaverini, R. Reichle, J. J. Bollinger, D. Leibfried, J. Britton, J. H. Wesenberg, R. B. Blakestad, R. J. Epstein, D. B. Hume, W. M. Itano, J. D. Jost, C. Langer, R. Ozeri, N. Shiga, and D. J. Wineland, Phys. Rev. Lett. 96, 253003 (2006).
[20] J. H. Wesenberg, Phys. Rev. A 78, 063410 (2008).
[21] M. H. Oliveira and J. A. Miranda, European Journal of Physics 22, 31 (2001).
[22] M. G. House, Phys. Rev. A 78, 033402 (2008).
[23] B. Brkić, S. Taylor, J. F. Ralph, and N. France, Phys. Rev. A 73, 012326 (2006).
[24] K. Singer, U. Poschinger, M. Murphy, P. Ivanov, F. Ziesel, T. Calarco, and F. Schmidt-Kaler, Rev. Mod. Phys. 82, 2609 (2010).
[25] Q. A. Turchette, D. Kielpinski, B. E. King, D. Leibfried, D. M. Meekhof, C. J. Myatt, M. A. Rowe, C. A. Sackett, C. S. Wood, W. M. Itano, C. Monroe, and D. J. Wineland, Phys. Rev. A 61, 063418 (2000).
[26] D. T. C. Allcock, L. Guidoni, T. P. Harty, C. J. Ballance, M. G. Blain, A. M. Steane, and D. M. Lucas, New Journal of Physics 13, 123023 (2011).
[27] N. Daniilidis, S. Narayanan, S. A. Möller, R. Clark, T. E. Lee, P. J. Leek, A. Wallraff, S. Schulz, F. Schmidt-Kaler, and H. Häffner, New Journal of Physics 13, 013032 (2011).
[28] S. Narayanan, N. Daniilidis, S. Möller, R. Clark, F. Ziesel, K. Singer, F. Schmidt-Kaler, and H. Häffner, Journal of Applied Physics 110, 114909 (2011).
[29] D. Allcock, T. Harty, H. Janacek, N. Linke, C. Ballance, A. Steane, D. Lucas, R. Jarecki, S. Habermehl, M. Blain, et al., Applied Physics B 107, 913 (2012).
[30] S. X. Wang, G. Hao Low, N. S. Lachenmyer, Y. Ge, P. F. Herskind, and I. L. Chuang, Journal of Applied Physics 110, 104901 (2011).
[31] M. J. Biercuk, H. Uys, J. W. Britton, A. P. VanDevender, and J. J. Bollinger, Nature Nanotechnology 5, 646 (2010).
[32] D. Berkeland, J. Miller, J. C. Bergquist, W. M. Itano, and D. J. Wineland, Journal of Applied Physics 83, 5025 (1998).
[33] M. Harlander, M. Brownnutt, W. Hänsel, and R. Blatt, New Journal of Physics 12, 093035 (2010).
[34] M. Brownnutt, M. Kumph, P. Rabl, and R. Blatt, Rev. Mod. Phys. 87, 1419 (2015).
[35] G. Huber, F. Ziesel, U. Poschinger, K. Singer, and F. Schmidt-Kaler, Applied Physics B 100, 725 (2010).
[36] M. Brownnutt, M. Harlander, W. Hänsel, and R. Blatt, Applied Physics B 107, 1125 (2012).
[37] X. Zhang, B. Ou, T. Chen, Y. Xie, W. Wu, and P. Chen, Physica Scripta 95, 045103 (2020).
[38] B. Ou, J. Zhang, X. Zhang, Y. Xie, T. Chen, C. Wu, W. Wu, and P. Chen, Science China Physics, Mechanics & Astronomy 59, 1 (2016).
[39] J. Zhang, Y. Xie, P.-f. Liu, B.-q. Ou, W. Wu, and P.-x. Chen, Applied Physics B 123, 1 (2017).
[40] W.-B. Wu, C.-W. Wu, J. Li, B.-Q. Ou, Y. Xie, W. Wu, and P.-X. Chen, Chinese Physics B 26, 080303 (2017).
[41] C. B. Zhang, D. Offenberg, B. Roth, M. A. Wilson, and S. Schiller, Phys. Rev. A 76, 012719 (2007).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
arXiv:2301.02426v1 [math.ST] 6 Jan 2023

Reversibility of elliptical slice sampling revisited

Mareike Hasenpflug∗, Viacheslav Natarovskii†, Daniel Rudolf∗,‡

January 9, 2023

Abstract

We discuss the well-definedness of elliptical slice sampling, a Markov chain approach for approximate sampling of posterior distributions introduced by Murray, Adams and MacKay (2010). We point to a regularity requirement and provide an alternative proof of the reversibility property. In particular, this guarantees the correctness of the slice sampling scheme also on infinite-dimensional separable Hilbert spaces.

Keywords: elliptical slice sampling, reversibility, shrinkage procedure

Classification: Primary: 65C40; Secondary: 60J22, 65C05.
1 Introduction

Markov chain Monte Carlo simulations are one of the major tools for approximate sampling of posterior distributions in the context of Bayesian inference. Elliptical slice sampling (ESS), which has been proposed in [Murray et al., 2010], provides a popular algorithmic transition mechanism, see e.g. [Nishihara et al., 2014, Murray and Graham, 2016, Lie et al., 2021], that leads to a Markov chain which suits the goal of approximate sampling.

The idea of ESS is based on a particular Gaussian Metropolis random walk, see [Neal, 1999], that is nowadays sometimes called preconditioned Crank-Nicolson Metropolis (see [Cotter et al., 2013, Rudolf and Sprungk, 2018]), and on the shrinkage procedure, also due to Neal, see [Neal, 2003]. Given the current state, a suitable acceptance region (a level set) as well as an ellipse is randomly chosen and then,

∗Faculty of Computer Science and Mathematics, Universität Passau, Innstraße 33, 94032 Passau, Email: mareike.hasenpflug@uni-passau.de, daniel.rudolf@uni-passau.de
†Expert Analytics GmbH, Hubertusstraße 83, 82131 Gauting, Email: viacheslav.natarovskii@expertanalytics.de
‡Felix-Bernstein-Institute for Mathematical Statistics in the Biosciences, Goldschmidtstraße 7, 37077 Göttingen
the next instance of the Markov chain is generated on the ellipse intersected with the acceptance region by using the aforementioned shrinkage procedure.

Appreciated advantages of ESS compared to the Gaussian Metropolis random walk are that there are no rejections, that no tuning of a step-size parameter is required and that it allows for larger jumps, since a richer choice of possible updates is available [Murray et al., 2010]. On the theory side, geometric convergence of ESS on finite-dimensional spaces has recently been proven in [Natarovskii et al., 2021a] under weak assumptions on the target distribution. Moreover, numerical experiments in [Murray et al., 2010, Natarovskii et al., 2021a] indicate dimension-independent performance of ESS. That motivates the question of well-definedness, which includes the reversibility regarding the target, on (possibly) infinite-dimensional separable Hilbert spaces. Our contribution regarding ESS is threefold:

1. The algorithm contains a 'shrinkage loop' and we provide a sufficient condition on the distribution of interest for the termination of that loop, which leads to the well-definedness of the transition mechanism and the corresponding Markov chain.

2. We illuminate that the process on the ellipse actually relies on a Markov chain on [0, 2π) that is reversible w.r.t. the uniform distribution on a suitably transformed acceptance region.

3. We provide alternative arguments to [Murray et al., 2010, Section 2.3] for proving reversibility of the ESS transition kernel in infinite-dimensional settings, which particularly implies that the target measure is a stationary distribution.

In contrast to the finite-dimensional framework, for which ESS has been proposed, we consider a (possibly) infinite-dimensional scenario. This means that the distribution of interest is specified on an infinite-dimensional separable Hilbert space H which is equipped with its corresponding Borel σ-algebra B(H). We will see and emphasize that ESS is well-defined in such a framework. We consider ̺: H → (0, ∞) as likelihood function and a Gaussian reference measure µ0 = N(0, C) defined on H as prior distribution, where C: H → H is a non-singular covariance operator¹. Then, the probability measure of interest, the posterior distribution, denoted by µ, is given as

    µ(dx) = (1/Z) ̺(x) µ0(dx)

with normalizing constant Z = ∫_H ̺(x) µ0(dx). In the following we consider ESS for approximate sampling of µ and show that if ̺ is lower semi-continuous, i.e., the super level sets of ̺ are open sets (within H), then the while-loop in the shrinkage

¹This means that C: H → H is a linear bounded, self-adjoint and positive trace class operator with ker C = {0}.
procedure terminates and the transition mechanism leads to a transition kernel that is reversible with respect to (w.r.t.) µ.

We briefly outline the structure of the paper. At the beginning of Section 2 we provide the general setting and the transition mechanisms in algorithmic form. Then, we motivate with a simple example the issue regarding the termination criterion formulated in the algorithms and develop a representation of a transition kernel that corresponds to the shrinkage procedure on the circle. In Section 2.3 we prove that the aforementioned kernel is reversible w.r.t. a suitable uniform distribution on a subset of the circle. Finally, in Section 3 we show how the reversibility of the shrinkage carries over to the transition kernel of ESS.

2 Preliminaries and notation

We state two equivalent versions of the transition mechanism of elliptical slice sampling in algorithmic form and provide our notation. Let (Ω, F, P) be the underlying probability space of all subsequently used random variables. On the real line R, equipped with its canonical Borel σ-algebra B(R), let λ(·) denote the Lebesgue measure. For bounded I ∈ B(R) with λ(I) > 0 let U_I be the uniform distribution on I. In Algorithm 2.1 the transition mechanism of elliptical slice sampling, as stated in [Murray et al., 2010], is presented.

Algorithm 2.1 Elliptical slice sampling
Input: ̺ and xin ∈ H considered as current state;
Output: xout ∈ H considered as the next state;
1: Draw T ∼ U_(0,̺(xin)), call the result t;
2: Draw W ∼ µ0 = N(0, C), call the result w;
3: Draw Γ ∼ U_[0,2π), call the result γ;
4: Set γmin := γ − 2π and set γmax := γ;
5: while ̺(cos(γ)xin + sin(γ)w) ≤ t do
6:     if γ < 0 then
7:         Set γmin := γ;
8:     else
9:         Set γmax := γ;
10:    end if
11:    Draw Γ ∼ U_(γmin,γmax), call the result γ;
12: end while
13: return xout := cos(γ)xin + sin(γ)w.
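For readers who prefer code, the following is a minimal sketch of Algorithm 2.1 in Python, not part of the paper, under the simplifying assumptions H = R^d and C = I; working with log ̺ is an implementation choice to avoid numerical underflow, not part of the algorithm as stated.

```python
import math
import random

def ess_step(x, log_rho, rng):
    """One transition of elliptical slice sampling (Algorithm 2.1),
    sketched for H = R^d with prior N(0, I), i.e. C = I.
    `log_rho` returns log of the likelihood."""
    # line 1: t ~ U(0, rho(x)), kept on the log scale
    log_t = log_rho(x) + math.log(1.0 - rng.random())
    # line 2: ellipse "direction" w ~ N(0, I)
    w = [rng.gauss(0.0, 1.0) for _ in x]
    # lines 3-4: initial angle and bracket
    gamma = rng.uniform(0.0, 2.0 * math.pi)
    g_min, g_max = gamma - 2.0 * math.pi, gamma
    # lines 5-12: shrink the bracket until a point above the threshold is found
    while True:
        prop = [math.cos(gamma) * xi + math.sin(gamma) * wi
                for xi, wi in zip(x, w)]
        if log_rho(prop) > log_t:
            return prop            # line 13
        if gamma < 0.0:
            g_min = gamma
        else:
            g_max = gamma
        gamma = rng.uniform(g_min, g_max)

# usage: target mu(dx) proportional to rho(x) N(0, I)(dx) with
# rho(x) = exp(-|x - 1|^2)  (an illustrative choice)
rng = random.Random(0)
log_rho = lambda x: -sum((xi - 1.0) ** 2 for xi in x)
state = [0.0, 0.0]
for _ in range(200):
    state = ess_step(state, log_rho, rng)
```

Note that, in line with the appreciated advantages listed above, the loop never rejects: it only shrinks the angle bracket until an acceptable point on the ellipse is found.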
For the analysis below it is convenient to reformulate and split the transition mechanism of Algorithm 2.1. For this define for t ≥ 0 the (super-) level set of ̺ w.r.t. t as

    H(t) := {x ∈ H: ̺(x) > t}
and for α, β ∈ [0, 2π) let

    I(α, β) := [0, β) ∪ [α, 2π)   if α ≥ β,
    I(α, β) := [α, β)             if α < β,

be the notation of an interval that respects the geometry of the circle. Observe that I(α, β) ∩ I(β, α) = ∅ and I(α, β) ∪ I(β, α) = [0, 2π). A useful identity is readily available by distinguishing different cases:

Lemma 2.1. For any α, β, γ ∈ [0, 2π) we have 1_{I(α,β)}(γ) = 1_{I(γ,α)}(β) = 1_{I(β,γ)}(α).
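The identity of Lemma 2.1 is easy to check numerically. The following self-contained snippet (our own illustration, not part of the paper) verifies it on random triples of angles; ties between the angles, which have probability zero, are the only configurations excluded.

```python
import math
import random

TWO_PI = 2.0 * math.pi

def ind(alpha, beta, x):
    """Indicator 1_{I(alpha, beta)}(x) of the circular interval:
    I(alpha, beta) = [0, beta) + [alpha, 2*pi) if alpha >= beta,
    and [alpha, beta) otherwise."""
    if alpha >= beta:
        return x < beta or x >= alpha
    return alpha <= x < beta

rng = random.Random(42)
for _ in range(10_000):
    a, b, g = (rng.uniform(0.0, TWO_PI) for _ in range(3))
    # the three indicators of Lemma 2.1 agree
    assert ind(a, b, g) == ind(g, a, b) == ind(b, g, a)
```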
For given x, w ∈ H define the function p_{x,w}: [0, 2π) → H as

    p_{x,w}(θ) := cos(θ)x + sin(θ)w,

which describes an ellipse in H with conjugate diameters determined by x, w. We remind the reader of the definition of the pre-image of p_{x,w}, that is, for A ∈ B(H) given as

    p_{x,w}^{-1}(A) := {θ ∈ [0, 2π): p_{x,w}(θ) ∈ A}.

It determines the part of [0, 2π) that leads via p_{x,w} to elements on the ellipse intersected with A. In the aforementioned reformulation of Algorithm 2.1 we aim to highlight the structure of the elliptical slice sampling approach. It is given in Algorithm 2.3, calling Algorithm 2.2 as a built-in procedure. The procedure gives a transition mechanism on a set S ∈ B([0, 2π)).

Algorithm 2.2 Shrinkage, called as shrink(θin, S)
Input: S ∈ B([0, 2π)), θin ∈ S considered as current state;
Output: θout ∈ S considered as next state;
1: Set i := 1 and draw Γ_i ∼ U_[0,2π), call the result γ_i;
2: Set γmin_i := γ_i and γmax_i := γ_i;
3: while γ_i ∉ S do
4:     if γ_i ∈ I(γmin_i, θin) then
5:         Set γmin_{i+1} := γ_i and γmax_{i+1} := γmax_i;
6:     else
7:         Set γmin_{i+1} := γmin_i and γmax_{i+1} := γ_i;
8:     end if
9:     Draw Γ_{i+1} ∼ U_{I(γmin_{i+1}, γmax_{i+1})}, call the result γ_{i+1};
10:    Set i := i + 1;
11: end while
12: return θout := γ_i.
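A compact Python sketch of the shrinkage procedure of Algorithm 2.2 (our own illustration; the set S is passed as a membership predicate, which is an implementation choice) reads as follows.

```python
import math
import random

TWO_PI = 2.0 * math.pi

def in_I(a, b, x):
    """Membership test for the circular interval I(a, b)."""
    return (x < b or x >= a) if a >= b else (a <= x < b)

def sample_I(a, b, rng):
    """Uniform draw from I(a, b) contained in [0, 2*pi)."""
    if a < b:
        return rng.uniform(a, b)
    v = rng.uniform(a - TWO_PI, b)       # sample on a shifted copy
    return v + TWO_PI if v < 0.0 else v  # fold back into [0, 2*pi)

def shrink(theta_in, in_S, rng):
    """Shrinkage procedure of Algorithm 2.2; `in_S` decides membership in S."""
    gamma = rng.uniform(0.0, TWO_PI)     # line 1
    g_min = g_max = gamma                # line 2
    while not in_S(gamma):               # line 3
        if in_I(g_min, theta_in, gamma): # lines 4-8: shrink towards theta_in
            g_min = gamma
        else:
            g_max = gamma
        gamma = sample_I(g_min, g_max, rng)  # line 9
    return gamma                         # line 12

# usage: S an open arc containing theta_in = 1.0
rng = random.Random(7)
theta_out = shrink(1.0, lambda g: 0.8 < g < 1.3, rng)
```

Since the loop only exits once γ lands in S, the returned angle lies in S by construction; the non-trivial question, addressed below, is whether the loop terminates at all.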
Algorithm 2.3 Reformulated elliptical slice sampling
Input: ̺ and xin ∈ H considered as current state;
Output: xout ∈ H considered as next state;
1: Draw T ∼ U_(0,̺(xin)), call the result t;
2: Draw W ∼ µ0 = N(0, C), call the result w;
3: Set γ := shrink(0, p_{xin,w}^{-1}(H(t)));   (Algorithm 2.2)
4: return xout := cos(γ)xin + sin(γ)w.

Comparing Algorithm 2.1 and Algorithm 2.3 one observes that line 1, line 2 and the return-line coincide (after γ has been computed). Given realizations t and w, line 3 until line 12 of Algorithm 2.1, including the while-loop, correspond to calling the shrinkage procedure of Algorithm 2.2 within Algorithm 2.3 with input θin = 0 and S = p_{xin,w}^{-1}(H(t)). To convince yourself that those parts also coincide note that

    p_{xin,w}^{-1}(H(t)) = {θ ∈ [0, 2π): ̺(cos(θ)xin + sin(θ)w) > t}

and therefore the termination criterion in the while-loops remains the same. Moreover, the 2π-periodicity of the function p_{xin,w} is exploited in the construction of the shrinked intervals in the while-loop in Algorithm 2.1, whereas in Algorithm 2.2 we work with the generalized intervals I(α, β) for given α, β ∈ [0, 2π). To finally convince yourself that indeed the same transitions are performed it is useful to specify how one samples uniformly distributed in the generalized intervals. Namely, for α < β, just sample uniformly distributed in [α, β) to get a realization w.r.t. U_{I(α,β)}. For α ≥ β and uniform sampling in I(α, β), draw V ∼ U_{[α−2π,β)} with result v and set the output as v + 2π if v ∈ [α − 2π, 0) and v otherwise. Employing this procedure for realizing U_{I(α,β)} in Algorithm 2.2, driven by the same random numbers as the interval sampling in Algorithm 2.1, yields finally the same transitions and angles².
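The wrap-around sampling recipe just described can be checked empirically: all draws must land in I(α, β), and each sub-arc must receive mass proportional to its length. The snippet below (our own illustration) does exactly that for a wrap-around interval.

```python
import math
import random

TWO_PI = 2.0 * math.pi
rng = random.Random(3)

def sample_I(a, b):
    """Uniform draw from I(a, b) via the wrap-around recipe of the text."""
    if a < b:
        return rng.uniform(a, b)
    v = rng.uniform(a - TWO_PI, b)       # V ~ U[a - 2*pi, b)
    return v + TWO_PI if v < 0.0 else v  # shift negative values back

# wrap-around case: I(a, b) = [0, b) + [a, 2*pi) with a = 5, b = 1;
# the sub-arc [a, 2*pi) should receive mass (2*pi - a) / (2*pi - a + b)
a, b, n = 5.0, 1.0, 100_000
draws = [sample_I(a, b) for _ in range(n)]
assert all(x < b or x >= a for x in draws)
frac_high = sum(x >= a for x in draws) / n
expected = (TWO_PI - a) / (TWO_PI - a + b)
assert abs(frac_high - expected) < 0.01
```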
2.1 Properties and notation of the shrinkage procedure

The shrinkage procedure of Algorithm 2.2 (and Algorithm 2.1) is only well-defined if the while-loop terminates. In particular, if λ(S) = 0, then for any I(α, β) with α, β ∈ [0, 2π) and V ∼ U_{I(α,β)} we have

    P(V ∈ S) = U_{I(α,β)}(S) = λ(S ∩ I(α, β)) / λ(I(α, β)) = 0.    (1)

Consequently, for an input S ∈ B([0, 2π)) with λ(S) = 0 the shrinkage procedure of Algorithm 2.2 does not terminate almost surely, since in line 9 one chooses a uniformly distributed point in a suitable generalized interval and by (1) the probability to be in S is zero. With the following illustrating example we illuminate the well-definedness problem in terms of Algorithm 2.3 with a toy scenario.
Example 2.2. For d ∈ N consider H = R^d and let ε > 0 as well as µ = N(0, I) be the standard normal distribution in R^d with I ∈ R^{d×d} being the identity matrix. Moreover, let ̺: R^d → [ε, 1 + ε] be given as

    ̺(x) = 1_{[0,1]^d}(x) + ε,    x ∈ R^d.

Observe that H(t) = [0, 1]^d for t > ε. We see that the fact that this is a closed set might lead (for certain inputs) to a well-definedness issue. For

    w ∈ {(w̃(1), . . . , w̃(d)) ∈ R^d: ∃ i, j ∈ {1, . . . , d} s.t. w̃(i) < 0, w̃(j) > 0}

we have

    p_{0,w}([0, 2π)) = {w̃ ∈ R^d: w̃ = sw, s ∈ [−1, 1)},
    p_{0,w}([0, 2π)) ∩ H(t) = {0} ⊂ R^d,

such that p_{0,w}^{-1}(H(t)) = {0} ⊂ [0, 2π). For the random variables T and W as in Algorithm 2.3 we obtain that

    P(λ(p_{0,W}^{-1}(H(T))) = 0) = (2^d − 2) / (2^d (1 + ε)).

Thus, for input xin = 0 and ̺, with the former probability the while-loop in the shrinkage procedure does not terminate.

²The angles and shrinked intervals coincide up to transformation to [0, 2π).
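The probability in Example 2.2 can be checked by simulation: for xin = 0 one has ̺(0) = 1 + ε, and the event λ(p_{0,W}^{-1}(H(T))) = 0 occurs exactly when T > ε (so that H(T) = [0,1]^d) and W has both a strictly negative and a strictly positive coordinate. The following Monte Carlo sketch (our own illustration) compares the empirical frequency with the stated formula.

```python
import random

def non_termination_prob_est(d, eps, n, rng):
    """Monte Carlo estimate of P(lambda(p_{0,W}^{-1}(H(T))) = 0)."""
    hits = 0
    for _ in range(n):
        t = rng.uniform(0.0, 1.0 + eps)               # T ~ U(0, rho(0))
        w = [rng.gauss(0.0, 1.0) for _ in range(d)]   # W ~ N(0, I)
        if t > eps and min(w) < 0.0 < max(w):
            hits += 1
    return hits / n

d, eps = 3, 0.5
est = non_termination_prob_est(d, eps, 200_000, random.Random(1))
exact = (2 ** d - 2) / (2 ** d * (1 + eps))           # = 0.5 for d = 3, eps = 0.5
assert abs(est - exact) < 0.01
```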
In the following we introduce the mathematical objects to formulate sufficient conditions for guaranteeing an almost sure termination of the aforementioned while-loops and a desired reversibility property of the shrinkage procedure.

We start with notation. For probability measures µ, ν defined on possibly different measurable spaces the corresponding product measure on the Cartesian product space is denoted as µ ⊗ ν. Moreover, for the Dirac measure at v (on an arbitrary measurable space) we write δ_v(·). Having two random variables/vectors X, Y we denote the distribution of X as P_X and the conditional distribution of X given Y as P_{X|Y}.

Fix S ∈ B([0, 2π)) and let θ ∈ S with θin = θ. Define

    Λ := {(γ, γmin, γmax) ∈ [0, 2π)³: γ ∈ I(γmin, γmax)},
    Λ_θ := {(γ, γmin, γmax) ∈ [0, 2π)³: γ, θ ∈ I(γmin, γmax)}.

Considering z_1 = (γ_1, γmin_1, γmax_1) from Algorithm 2.2 as realization of a random vector Z_1 = (Γ_1, Γmin_1, Γmax_1) on ([0, 2π)³, B([0, 2π)³)) we have by lines 1-2 of the aforementioned procedure that the distribution of Z_1 is given by

    P_{Z_1}(C) = ∫_0^{2π} δ_{(γ,γ,γ)}(C) dγ/(2π),    C ∈ B([0, 2π)³).    (2)

Assume that Θ is a random variable mapping to S with distribution P_Θ and consider θ ∈ S with θ = θin as realization of Θ. Given Θ = θ, note that Γ_1 ∈ I(Γmin_1, Γmax_1) and θ ∈ S ⊆ I(Γmin_1, Γmax_1), such that Z_1(ω) ∈ Λ_θ for all ω ∈ Ω.
Moreover, given Θ = θ the sequence (z_n)_{n∈N}, with z_n = (γ_n, γmin_n, γmax_n) ∈ [0, 2π)³, from iterating over lines 4-9 (ignoring the stopping criterion in the while-loop) of Algorithm 2.2 is a realization of a sequence of random variables (Z_n)_{n∈N} with Z_n = (Γ_n, Γmin_n, Γmax_n). For illustrative purposes we provide the dependency structure of (Z_n)_{n∈N} conditioned on Θ = θ:

[Dependency graph: each Γ_n is drawn given the pair (Γmin_n, Γmax_n), and (Γmin_{n+1}, Γmax_{n+1}) is determined by Γ_n together with (Γmin_n, Γmax_n).]

From the algorithmic description one can read off the following conditional distribution properties

    P_{Γ_{n+1} | Γmin_{n+1}, Γmax_{n+1}, Θ}(A) = P_{Γ_{n+1} | Γmin_{n+1}, Γmax_{n+1}}(A) = U_{I(Γmin_{n+1}, Γmax_{n+1})}(A),    (3)

    P_{Γmin_{n+1}, Γmax_{n+1} | Z_n, Θ}(B) = 1_{I(Γmin_n, Θ)}(Γ_n) δ_{(Γ_n, Γmax_n)}(B) + 1_{I(Θ, Γmax_n)}(Γ_n) δ_{(Γmin_n, Γ_n)}(B),    (4)

for A ∈ B([0, 2π)), B ∈ B([0, 2π)²) and Z_n ∈ Λ_Θ almost surely. Moreover, conditioned on Θ the sequence of random variables (Z_n)_{n∈N} satisfies the Markov property, i.e.,

    P_{Z_{n+1} | Z_1,...,Z_n, Θ}(C) = P_{Z_{n+1} | Z_n, Θ}(C),    C ∈ B([0, 2π)³).    (5)

From (3) and (4) the right-hand side of the previous equation can be represented as

    P_{Z_{n+1} | Z_n, Θ}(A × B)
      = ∫_B P_{Γ_{n+1} | Γmin_{n+1}=γmin, Γmax_{n+1}=γmax, Θ}(A) P_{Γmin_{n+1}, Γmax_{n+1} | Z_n, Θ}(dγmin dγmax)
      = 1_{I(Γmin_n, Θ)}(Γ_n) ∫_B U_{I(γmin,γmax)}(A) δ_{(Γ_n, Γmax_n)}(dγmin dγmax)
        + 1_{I(Θ, Γmax_n)}(Γ_n) ∫_B U_{I(γmin,γmax)}(A) δ_{(Γmin_n, Γ_n)}(dγmin dγmax).

We can rewrite this in terms of a transition kernel. Given Θ = θ and current state z = (γ, γmin, γmax) ∈ Λ_θ we define a transition kernel R_θ on Λ_θ × B([0, 2π)³) by

    R_θ((γ, γmin, γmax), C) := P_{Z_{n+1} | Z_n=(γ,γmin,γmax), Θ=θ}(C)
      = 1_{I(γmin,θ)}(γ) ∫_C U_{I(αmin,αmax)}(dα) δ_γ(dαmin) δ_{γmax}(dαmax)
        + 1_{I(θ,γmax)}(γ) ∫_C U_{I(αmin,αmax)}(dα) δ_{γmin}(dαmin) δ_γ(dαmax),    C ∈ B([0, 2π)³).

We note the following properties:

Lemma 2.3. For any z = (γ, γmin, γmax) ∈ Λ_θ and any C ∈ B([0, 2π)³) we have

    R_θ((γ, γmin, γmax), C)
      = 1_{I(γ,γmin)}(θ) · U_{I(γ,γmax)} ⊗ δ_{(γ,γmax)}(C) + 1_{I(γmax,γ)}(θ) · U_{I(γmin,γ)} ⊗ δ_{(γmin,γ)}(C)
      = 1_{I(γ,γmax)}(θ) · U_{I(γ,γmax)} ⊗ δ_{(γ,γmax)}(C) + 1_{I(γmin,γ)}(θ) · U_{I(γmin,γ)} ⊗ δ_{(γmin,γ)}(C),

as well as R_θ((γ, γmin, γmax), Λ_θ) = 1.
Proof. The first equality follows by Lemma 2.1 and the second equality by taking z ∈ Λ_θ, in particular θ ∈ I(γmin, γmax), into account.

For showing R_θ((γ, γmin, γmax), Λ_θ) = 1 note that again by Lemma 2.1 we have

    Λ_θ = {(γ, γmin, γmax) ∈ [0, 2π)³: γmin ∈ I(γmax, θ), γ ∈ I(γmin, γmax)}.

Hence

    U_{I(γ,γmax)} ⊗ δ_{(γ,γmax)}(Λ_θ)
      = ∫_0^{2π} 1_{I(αmax,θ)}(αmin) 1_{I(αmin,αmax)}(α) U_{I(γ,γmax)} ⊗ δ_{(γ,γmax)}(dα dαmin dαmax)
      = ∫_{I(γ,γmax)} U_{I(γ,γmax)}(dα) = 1,

and by the same arguments U_{I(γmin,γ)} ⊗ δ_{(γmin,γ)}(Λ_θ) = 1. Since γ ∈ I(γmin, γmax) we have

    I(γmin, γmax) = I(γmin, γ) ∪ I(γ, γmax)   and   I(γmin, γ) ∩ I(γ, γmax) = ∅,

such that either the first or the second summand in R_θ((γ, γmin, γmax), Λ_θ) is 0, whereas the other one is 1.
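The invariance property R_θ(z, Λ_θ) = 1 just proven can be observed in simulation: one step of the kernel, implemented through the algorithmic update rule, never leaves Λ_θ. The following sketch (our own illustration) iterates the update and checks that both the new angle and θ stay inside the current bracket.

```python
import math
import random

TWO_PI = 2.0 * math.pi
rng = random.Random(11)

def in_I(a, b, x):
    return (x < b or x >= a) if a >= b else (a <= x < b)

def sample_I(a, b):
    if a < b:
        return rng.uniform(a, b)
    v = rng.uniform(a - TWO_PI, b)
    return v + TWO_PI if v < 0.0 else v

def r_theta_step(theta, z):
    """One draw from R_theta(z, .), following the algorithmic update."""
    g, g_min, g_max = z
    if in_I(g_min, theta, g):
        g_min = g          # first summand: bracket becomes I(g, g_max)
    else:
        g_max = g          # second summand: bracket becomes I(g_min, g)
    return (sample_I(g_min, g_max), g_min, g_max)

# start from Z1 = (g, g, g); its bracket I(g, g) is the whole circle,
# so Z1 is in Lambda_theta, and R_theta(z, Lambda_theta) = 1 says the
# chain never leaves Lambda_theta
theta = rng.uniform(0.0, TWO_PI)
g0 = rng.uniform(0.0, TWO_PI)
z = (g0, g0, g0)
for _ in range(50):
    z = r_theta_step(theta, z)
    g, g_min, g_max = z
    assert in_I(g_min, g_max, g) and in_I(g_min, g_max, theta)
```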
A useful representation of the transition kernel in terms of random variables follows readily from the previous lemma.

Lemma 2.4. For any z ∈ Λ_θ, any C ∈ B([0, 2π)³) and any n ≥ 2 we have

    R_θ(z, C) = E[1_{I(Γmin_n, Γmax_n)}(θ) U_{I(Γmin_n, Γmax_n)} ⊗ δ_{(Γmin_n, Γmax_n)}(C) | Z_{n−1} = z, Θ = θ].    (6)

We add another property regarding the distribution of Θ given Z_1, . . . , Z_n that is proven in Appendix A.1.

Lemma 2.5. For any n ∈ N and A ∈ B([0, 2π)) we have

    P_{Θ | Z_1,...,Z_n}(A) = P_{Θ | Γmin_n, Γmax_n}(A) = P_Θ(A ∩ I(Γmin_n, Γmax_n)) / P_Θ(I(Γmin_n, Γmax_n)).    (7)

2.2 Stopping of the shrinkage procedure

Now we aim to take the stopping criterion within the while-loop of Algorithm 2.2 into account. For this we introduce the σ-algebras F_n := σ(Z_1, . . . , Z_n) and the natural filtration {F_n}_{n∈N} of (Z_n)_{n∈N}. We define the (random) termination time τ_S of the while-loop as the first n ∈ N where Γ_n is in S, i.e.,

    τ_S := inf{n ∈ N: Z_n ∈ S × [0, 2π)²},    (8)
where by convention inf ∅ = ∞. Note that τ_S is a stopping time w.r.t. the natural filtration, since

    {τ_S = n} = ⋂_{k=1}^{n−1} {Z_k ∉ S × [0, 2π)²} ∩ {Z_n ∈ S × [0, 2π)²} ∈ F_n,

for any n ∈ N. Moreover, the transition mechanism of Algorithm 2.2 for input S and θ can be formulated in terms of a transition kernel if the while-loop conditioned on Θ = θ terminates almost surely, that is, P(τ_S < ∞ | Θ = θ) = 1. Now we provide a sufficient condition for that property.

Lemma 2.6. Assume that S ∈ B([0, 2π)) is an open set. Then, for any θ ∈ S we have P(τ_S < ∞ | Θ = θ) = 1.

Proof. By the fact that S is open and θ ∈ S there exists a neighborhood of θ with positive Lebesgue measure that is contained in S. In other words, there is an ε > 0 such that I_θ := I(θ_{ε,−}, θ_{ε,+}) ⊆ S, where

    θ_{ε,−} = θ − ε mod 2π,    θ_{ε,+} = θ + ε mod 2π,

with θ ∈ I_θ. Furthermore, note that λ(I_θ) = 2ε. Set S̃ := S × [0, 2π)² and observe that for any γmin, γmax ∈ [0, 2π) with θ ∈ I(γmin, γmax) we have

    U_{I(γmin,γmax)}(S) = λ(S ∩ I(γmin, γmax)) / λ(I(γmin, γmax))
      ≥ 1                           if γmin, γmax ∈ I_θ,
      ≥ ε / λ(I(γmin, γmax))        if γmin ∈ I_θ, γmax ∉ I_θ or γmin ∉ I_θ, γmax ∈ I_θ,
      ≥ 2ε / λ(I(γmin, γmax))       if γmin, γmax ∉ I_θ,

and in all cases U_{I(γmin,γmax)}(S) ≥ ε/(2π). Using this estimate, we obtain for any z = (γ, γmin, γmax) ∈ Λ_θ that

    R_θ(z, S̃) = 1_{I(γ,γmax)}(θ) U_{I(γ,γmax)}(S) + 1_{I(γmin,γ)}(θ) U_{I(γmin,γ)}(S) ≥ ε/(2π).

Recall that Z_1 with P_{Z_1} from (2) satisfies Z_1 ∈ Λ_θ almost surely. Now applying the former estimate iteratively leads to

    P(Z_1, . . . , Z_n ∉ S̃ⁿ | Θ = θ)
      = ∫_{S̃^c} · · · ∫_{S̃^c} R_θ(z_{n−1}, S̃^c) R_θ(z_{n−2}, dz_{n−1}) · · · R_θ(z_1, dz_2) P_{Z_1}(dz_1)
      ≤ (1 − ε/(2π)) P(Z_1, . . . , Z_{n−1} ∉ S̃^{n−1} | Θ = θ)
      ≤ · · · ≤ (1 − ε/(2π))^{n−1} P_{Z_1}(S̃^c) ≤ (1 − ε/(2π))^{n−1},
such that

    P(τ_S = ∞ | Θ = θ) ≤ lim_{n→∞} P(τ_S > n | Θ = θ) ≤ lim_{n→∞} P(Z_1, . . . , Z_n ∉ S̃ⁿ | Θ = θ) = 0

and the proof is finished.
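The geometric tail bound P(τ_S > n) ≤ (1 − ε/(2π))^{n−1} obtained in the proof can be observed empirically. The sketch below (our own illustration) simulates the termination time of the shrinkage loop for an open arc S = (θ − ε, θ + ε) and compares the empirical tail with the bound; in practice the loop stops far faster than the worst-case estimate.

```python
import math
import random

TWO_PI = 2.0 * math.pi
rng = random.Random(5)

def in_I(a, b, x):
    return (x < b or x >= a) if a >= b else (a <= x < b)

def sample_I(a, b):
    if a < b:
        return rng.uniform(a, b)
    v = rng.uniform(a - TWO_PI, b)
    return v + TWO_PI if v < 0.0 else v

def tau(theta, in_S):
    """Index of the first draw of the shrinkage loop that lands in S."""
    g = rng.uniform(0.0, TWO_PI)
    g_min = g_max = g
    n = 1
    while not in_S(g):
        if in_I(g_min, theta, g):
            g_min = g
        else:
            g_max = g
        g = sample_I(g_min, g_max)
        n += 1
    return n

# every iteration hits S with probability >= eps/(2*pi), so
# P(tau_S > n) <= (1 - eps/(2*pi))^(n-1)
theta, eps, trials, n0 = 3.0, 0.3, 20_000, 10
in_S = lambda g: theta - eps < g < theta + eps
taus = [tau(theta, in_S) for _ in range(trials)]
tail = sum(t > n0 for t in taus) / trials
bound = (1.0 - eps / TWO_PI) ** (n0 - 1)
assert tail <= bound
```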
Corollary 2.7. Assume that ̺: H → (0, ∞) is lower semi-continuous, that is, all level sets H(t) are open sets. Then

    P(τ_{p_{x,w}^{-1}(H(t))} < ∞ | Θ = 0) = 1    for all x, w ∈ H and t ∈ (0, ̺(x)).

Proof. By the continuity of p_{x,w} and the fact that H(t) is open we also have that p_{x,w}^{-1}(H(t)) ⊆ [0, 2π) is open with 0 ∈ p_{x,w}^{-1}(H(t)). Therefore the statement follows by Lemma 2.6.

The previous corollary tells us that whenever ̺ is lower semi-continuous, then calling Algorithm 2.2 with input S = p_{x,w}^{-1}(H(t)) and θin = 0 terminates almost surely, such that Algorithm 2.3 also terminates and is well-defined.

Remark 2.8. Usually the non-termination issue does not seem to have a big influence in applications, since most densities of interest have open level sets; for example, every continuous density is lower semi-continuous. Even if a density has single outliers for which the algorithm would not terminate, in practice the algorithm would shrink and shrink and at some point, because a computer works with machine precision, the shrinked interval cannot be distinguished anymore from the current state, such that it will eventually accept and return the current state as the next instance. In that case the algorithmic and mathematical description does not coincide with the implementation.

2.3 Reversibility of the shrinkage procedure

Now we introduce the stopped random variable Z_{τ_S} of the Markov chain (Z_n)_{n∈N}. For the formal definition on the event τ_S = ∞ use an arbitrary random variable Z_∞ that is assumed to be measurable w.r.t. F_∞ := σ(Z_k, k ∈ N). We set

    Z_{τ_S}(ω) := Z_∞(ω) 1_{{τ_S=∞}}(ω) + Σ_{k=1}^∞ Z_k(ω) 1_{{τ_S=k}}(ω),    ω ∈ Ω.

Notice that Z_{τ_S} is indeed measurable w.r.t. the τ_S-induced σ-algebra

    F_{τ_S} := {A ∈ F: A ∩ {τ_S = k} ∈ F_k, k ∈ N},

since for any A ∈ F and k ∈ N we have

    {Z_{τ_S} ∈ A} ∩ {τ_S = k} = {Z_k ∈ A} ∩ {τ_S = k} ∈ F_k.
Thus, Z_{τ_S} = (Γ_{τ_S}, Γmin_{τ_S}, Γmax_{τ_S}) is a [0, 2π)³-valued random variable and its components Γ_{τ_S}, Γmin_{τ_S}, Γmax_{τ_S} are [0, 2π)-valued random variables on the probability space (Ω, F_{τ_S}, P).

Now, for given S ∈ B([0, 2π)) and arbitrary θin ∈ S, after the whole construction we are able to state the transition kernel Q_S on S × B(S) that corresponds to the transition mechanism of Algorithm 2.2. It is given as

    Q_S(θin, F) = P(Γ_{τ_S} ∈ F, τ_S < ∞ | Θ = θin).    (9)

Now we formulate the main result regarding the transition kernel Q_S that is essentially used in verifying the reversibility of ESS.

Theorem 2.9. Let S ∈ B([0, 2π)) be an open set. Then Q_S is reversible w.r.t. the uniform distribution U_S.
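Theorem 2.9 admits a simple Monte Carlo sanity check (our own illustration, not a proof): reversibility of Q_S w.r.t. U_S implies E[f(Θ)g(Θ′)] = E[g(Θ)f(Θ′)] for Θ ∼ U_S, Θ′ ∼ Q_S(Θ, ·) and bounded f, g. With f = sin and g = cos the summand is sin(Θ − Θ′), whose mean must vanish.

```python
import math
import random

TWO_PI = 2.0 * math.pi
rng = random.Random(13)

def in_I(a, b, x):
    return (x < b or x >= a) if a >= b else (a <= x < b)

def sample_I(a, b):
    if a < b:
        return rng.uniform(a, b)
    v = rng.uniform(a - TWO_PI, b)
    return v + TWO_PI if v < 0.0 else v

def q_S(theta_in, in_S):
    """One draw from Q_S(theta_in, .), i.e. one run of Algorithm 2.2."""
    g = rng.uniform(0.0, TWO_PI)
    g_min = g_max = g
    while not in_S(g):
        if in_I(g_min, theta_in, g):
            g_min = g
        else:
            g_max = g
        g = sample_I(g_min, g_max)
    return g

in_S = lambda g: 0.0 < g < 1.0 or 4.0 < g < 5.0   # open, disconnected S

def sample_U_S():                                 # Theta ~ U_S by rejection
    while True:
        g = rng.uniform(0.0, TWO_PI)
        if in_S(g):
            return g

n, acc = 100_000, 0.0
for _ in range(n):
    t = sample_U_S()
    t2 = q_S(t, in_S)
    acc += math.sin(t - t2)                       # sin(t)cos(t2) - cos(t)sin(t2)
assert abs(acc / n) < 0.02
```

The disconnected choice of S exercises the wrap-around intervals; any other bounded test functions f, g could be used in place of sine and cosine.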
| 635 |
+
Proof. By the Markov property of $(Z_n)_{n\in\mathbb{N}}$ conditioned on $\Theta = \theta$ and Lemma 2.4 we have
$$\begin{aligned}
\mathbb{P}(\Gamma_{\tau_S} \in F,\ \tau_S < \infty \mid \Theta = \theta)
&= \sum_{k=1}^{\infty} \mathbb{P}[\Gamma_{\tau_S} \in F,\ \tau_S = k \mid \Theta = \theta] \\
&= \sum_{k=1}^{\infty} \mathbb{P}[\Gamma_k \in F \cap S,\ \Gamma_1 \in S^c, \ldots, \Gamma_{k-1} \in S^c \mid \Theta = \theta] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbf{1}_F(\Gamma_k) \prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i) \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbb{E}\Big[\mathbf{1}_F(\Gamma_k) \prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i) \,\Big|\, Z_1, \ldots, Z_{k-1}, \Theta\Big] \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbb{E}[\mathbf{1}_F(\Gamma_k) \mid Z_1, \ldots, Z_{k-1}, \Theta] \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbb{E}\big[\mathbf{1}_{F \times [0,2\pi)^2}(Z_k) \mid Z_{k-1}, \Theta\big] \,\Big|\, \Theta = \theta\Big] \\
&\overset{(6)}{=} \sum_{k=1}^{\infty} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbb{E}\big[\mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F) \mid Z_{k-1}, \Theta\big] \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbb{E}\big[\mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F) \mid Z_1, \ldots, Z_{k-1}, \Theta\big] \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F) \,\Big|\, Z_1, \ldots, Z_{k-1}, \Theta\Big] \,\Big|\, \Theta = \theta\Big] \\
&= \sum_{k=1}^{\infty} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F) \,\Big|\, \Theta = \theta\Big].
\end{aligned}$$
Now, for arbitrary $F, G \in \mathcal{B}(S)$ we have
$$\int_G Q_S(\theta, F)\, \mathcal{U}_S(d\theta) = \sum_{k=1}^{\infty} \mathbb{E}\Big[\mathbf{1}_G(\Theta) \prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\Theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F)\Big], \qquad (10)$$
with random variable $\Theta \sim \mathcal{U}_S$. Note that, since $F \in \mathcal{B}(S)$, we have
$$\mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F) = \frac{\lambda(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\lambda(I(\Gamma^{\min}_k, \Gamma^{\max}_k))} = \frac{\mathcal{U}_S(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))}.$$
Using that and Lemma 2.5 we modify the expectation within the sum and obtain
$$\begin{aligned}
&\mathbb{E}\Big[\mathbf{1}_G(\Theta) \prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\Theta)\, \mathcal{U}_{I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(F)\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{G \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\Theta)\, \frac{\mathcal{U}_S(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))}\Big] \\
&= \mathbb{E}\Big[\mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \mathbf{1}_{G \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\Theta)\, \frac{\mathcal{U}_S(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))} \,\Big|\, Z_1, \ldots, Z_{k-1}, \Gamma^{\min}_k, \Gamma^{\max}_k\Big]\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \frac{\mathcal{U}_S(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))}\, \mathbb{E}\big[\mathbf{1}_{G \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k)}(\Theta) \,\big|\, Z_1, \ldots, Z_{k-1}, \Gamma^{\min}_k, \Gamma^{\max}_k\big]\Big] \\
&\overset{(7)}{=} \mathbb{E}\Big[\prod_{i=1}^{k-1} \mathbf{1}_{S^c}(\Gamma_i)\, \frac{\mathcal{U}_S(F \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))}\, \frac{\mathcal{U}_S(G \cap I(\Gamma^{\min}_k, \Gamma^{\max}_k))}{\mathcal{U}_S(I(\Gamma^{\min}_k, \Gamma^{\max}_k))}\Big],
\end{aligned}$$
where we also used in the last equation that $\mathbb{P}_\Theta = \mathcal{U}_S$. Now we can reverse the roles of $F$ and $G$, such that arguing backwards leads to
$$\int_G Q_S(\theta, F)\, \mathcal{U}_S(d\theta) = \int_F Q_S(\theta, G)\, \mathcal{U}_S(d\theta),$$
which shows the claimed reversibility.

We finish this section by stating a pushforward invariance property of the transition kernel $Q_S$. For general properties regarding pushforward transition kernels we refer to [Rudolf and Sprungk, 2022].

Lemma 2.10. Let $S \in \mathcal{B}([0,2\pi))$ be an open set. For $\theta \in S$ define the function $g_\theta\colon [0,2\pi) \to [0,2\pi)$ by $g_\theta(\alpha) = (\theta - \alpha) \bmod 2\pi$. Then
$$Q_{g_\theta^{-1}(S)}\big(g_\theta^{-1}(\theta),\, g_\theta^{-1}(B)\big) = Q_S(\theta, B), \qquad B \in \mathcal{B}(S).$$
The proof of this lemma is deferred to the appendix, see Section A.2.
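A property of $g_\theta$ that the appendix proof relies on is that it is an involution on $[0,2\pi)$, i.e., $g_\theta = g_\theta^{-1}$. This can be sanity-checked numerically (the helper `g` below is an illustrative implementation of the definition above):

```python
import math

def g(theta, alpha):
    """The reflection g_theta(alpha) = (theta - alpha) mod 2*pi."""
    return (theta - alpha) % (2.0 * math.pi)

# g_theta applied twice returns the argument (up to floating-point
# rounding), so g_theta coincides with its own inverse.
theta = 1.3
for alpha in [0.0, 0.5, 2.0, 5.9]:
    assert abs(g(theta, g(theta, alpha)) - alpha) < 1e-12
```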
3 Reversibility of elliptical slice sampling

With the representation of the transition mechanism of the shrinkage procedure from Algorithm 2.2 in terms of the transition kernel $Q_S$, we are able to state the transition kernel, say $H$, of elliptical slice sampling that corresponds to the transition mechanism of Algorithm 2.3. For $x_{\mathrm{in}} \in H$ and $A \in \mathcal{B}(H)$ it is given as
$$H(x_{\mathrm{in}}, A) = \frac{1}{\varrho(x_{\mathrm{in}})} \int_0^{\varrho(x_{\mathrm{in}})} \int_H Q_{S(x_{\mathrm{in}},w,t)}\big(0,\, p_{x_{\mathrm{in}},w}^{-1}(H(t) \cap A)\big)\, \mu_0(dw)\, dt, \qquad (11)$$
where $S(x_{\mathrm{in}}, w, t) := p_{x_{\mathrm{in}},w}^{-1}(H(t))$ for $w \in H$ and $t \in (0,\infty)$. Here we verify that the reversibility of the shrinkage procedure w.r.t. the uniform distribution on $S(x_{\mathrm{in}}, w, t)$ carries over to the reversibility of $H$ w.r.t. $\mu$ of the corresponding elliptical slice sampler. We start with an auxiliary tool.
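The transition mechanism that (11) formalizes can be sketched concretely. The following is a minimal, finite-dimensional sketch of one elliptical slice sampling transition (in the spirit of Algorithm 2.3, specialized to $H = \mathbb{R}^d$ with $C = I$; the likelihood `rho` and the state are illustrative assumptions): draw the auxiliary variable $w \sim \mu_0$, the slice level $t \sim \mathcal{U}(0, \varrho(x))$, and then run the shrinkage loop over angles on the ellipse $p_{x,w}$.

```python
import math
import random

def ess_step(x, rho, rng=random):
    """One elliptical-slice-sampling transition for a target
    proportional to rho(x) * N(0, I) on R^d (a sketch of the
    finite-dimensional special case of the general Hilbert-space
    setting). Returns the new state and the drawn slice level."""
    d = len(x)
    w = [rng.gauss(0.0, 1.0) for _ in range(d)]  # auxiliary draw w ~ N(0, I)
    t = rng.uniform(0.0, rho(x))                 # slice level t ~ U(0, rho(x))
    theta = rng.uniform(0.0, 2.0 * math.pi)      # initial angle
    lo, hi = theta - 2.0 * math.pi, theta        # bracket, shrunk toward 0
    while True:
        ang = rng.uniform(lo, hi)
        proposal = [xi * math.cos(ang) + wi * math.sin(ang)
                    for xi, wi in zip(x, w)]     # point on the ellipse p_{x,w}
        if rho(proposal) > t:                    # proposal lies in the slice H(t)
            return proposal, t
        if ang < 0.0:
            lo = ang
        else:
            hi = ang
```

The while loop is exactly the shrinkage procedure of Section 2 applied to the open set $S(x, w, t)$, which contains the angle $0$ (the current state) since $\varrho(x) > t$.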
Lemma 3.1. Let $X$ and $Y$ be independent random variables mapping to $H$, each distributed according to $\mu_0 = \mathcal{N}(0, C)$, with $C\colon H \to H$ being a non-singular covariance operator. For any $\theta \in [0,2\pi)$ let $T(\theta)\colon H \times H \to H \times H$ be given by
$$T(\theta)(x, y) := (x \cos\theta + y \sin\theta,\ x \sin\theta - y \cos\theta).$$
Then
$$\mathbb{E}(f(\theta, X, Y)) = \mathbb{E}(f(\theta, T(\theta)(X, Y))), \qquad \theta \in [0,2\pi), \qquad (12)$$
for any $f\colon [0,2\pi) \times H^2 \to \mathbb{R}$ for which one of the expectations exists.
Proof. By the fact that $X, Y \sim \mu_0 = \mathcal{N}(0, C)$ are independent, we have that the random vector $\begin{pmatrix} X \\ Y \end{pmatrix}$ on $H \times H$ is distributed according to $\mathcal{N}\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} C & 0 \\ 0 & C \end{pmatrix}\right)$. Note that
$$T(\theta)(x, y)^t = \begin{pmatrix} \cos\theta\, I & \sin\theta\, I \\ \sin\theta\, I & -\cos\theta\, I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix},$$
where $I\colon H \to H$ denotes the identity operator. Thus, by the linear transformation theorem for Gaussian measures, see e.g. [Da Prato and Zabczyk, 2002, Proposition 1.2.3], we obtain that the vector $T(\theta)(X, Y)^t$ is distributed according to
$$\mathcal{N}\left(\begin{pmatrix} \cos\theta\, I & \sin\theta\, I \\ \sin\theta\, I & -\cos\theta\, I \end{pmatrix} \begin{pmatrix} 0 \\ 0 \end{pmatrix},\ \begin{pmatrix} \cos\theta\, I & \sin\theta\, I \\ \sin\theta\, I & -\cos\theta\, I \end{pmatrix} \begin{pmatrix} C & 0 \\ 0 & C \end{pmatrix} \begin{pmatrix} \cos\theta\, I & \sin\theta\, I \\ \sin\theta\, I & -\cos\theta\, I \end{pmatrix}\right) = \mathcal{N}\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} C & 0 \\ 0 & C \end{pmatrix}\right).$$
Hence, the distributions of $(X, Y)$ and $T(\theta)(X, Y)$ coincide, such that (12) holds.
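The key computation in the proof is that the block matrix defining $T(\theta)$ is symmetric with $M^2 = I$, so conjugating $\operatorname{diag}(C, C)$ with it changes nothing. A quick numerical sanity check of the scalar version ($H = \mathbb{R}$, $C = 1$, so the blocks are numbers):

```python
import math

def t_matrix(theta):
    """The 2x2 block pattern of T(theta) in the scalar case H = R."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def matmul(a, b):
    """Plain 2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# M is symmetric and M @ M = I, so M * diag(1, 1) * M^t = diag(1, 1):
# T(theta) maps N(0, I) x N(0, I) to itself.
for theta in [0.0, 0.7, 2.5, 4.1]:
    m = t_matrix(theta)
    mm = matmul(m, m)
    assert abs(mm[0][0] - 1.0) < 1e-12 and abs(mm[1][1] - 1.0) < 1e-12
    assert abs(mm[0][1]) < 1e-12 and abs(mm[1][0]) < 1e-12
```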
By combining the previous lemmas we can prove our main result.
Theorem 3.2. Let $\varrho\colon H \to (0, \infty)$ be lower semicontinuous. Then $H$ is reversible w.r.t. $\mu$.

Proof. For any $x, w \in H$ and $t \in (0,\infty)$ set $S(x, w, t) := p_{x,w}^{-1}(H(t))$. Observe that by the lower semicontinuity $H(t)$ is an open set. Moreover, by the continuity of $p_{x,w}$ we have that $S(x, w, t)$ is an open set in $\mathcal{B}([0,2\pi))$. Hence, by Theorem 2.9 the transition kernel of the shrinkage procedure $Q_{S(x,w,t)}$ is reversible w.r.t. $\mathcal{U}_{S(x,w,t)}$ for any $x, w \in H$ and $t \in (0,\infty)$, that is, for $F, G \in \mathcal{B}([0,2\pi))$ we have
$$\int_F Q_{S(x,w,t)}(\theta, G)\, \mathcal{U}_{S(x,w,t)}(d\theta) = \int_G Q_{S(x,w,t)}(\theta, F)\, \mathcal{U}_{S(x,w,t)}(d\theta). \qquad (13)$$
Using the former equality we prove for any $A, B \in \mathcal{B}(H)$ that
$$\int_A H(x, B)\, \varrho(x)\, \mu_0(dx) = \int_B H(x, A)\, \varrho(x)\, \mu_0(dx), \qquad (14)$$
which verifies the desired reversibility w.r.t. $\mu$. For $A, B \in \mathcal{B}(H)$, $x, y \in H$ and $t \in (0,\infty)$ we use the notation
$$A(x, y, t) := p_{x,y}^{-1}(A \cap H(t)) \qquad \text{and} \qquad B(x, y, t) := p_{x,y}^{-1}(B \cap H(t)).$$
Using $\mathbf{1}_{(0,\varrho(x))}(t) = \mathbf{1}_{H(t)}(x)$ and $\mathbf{1}_{H(t) \cap A}(x) = \mathbf{1}_{A(x,y,t)}(0)$ we obtain
$$\begin{aligned}
\int_A H(x, B)\, \varrho(x)\, \mu_0(dx)
&= \int_H \int_0^\infty \int_H \mathbf{1}_A(x)\, \mathbf{1}_{(0,\varrho(x))}(t)\, Q_{S(x,y,t)}(0, B(x, y, t))\, \mu_0(dy)\, dt\, \mu_0(dx) \\
&= \int_0^\infty \int_H \int_H \mathbf{1}_{H(t) \cap A}(x)\, Q_{S(x,y,t)}(0, B(x, y, t))\, \mu_0(dy)\, \mu_0(dx)\, dt \\
&= \int_0^\infty \int_H \int_H \mathbf{1}_{A(x,y,t)}(0)\, Q_{S(x,y,t)}(0, B(x, y, t))\, \mu_0(dy)\, \mu_0(dx)\, dt.
\end{aligned}$$
By the fact that $S(x, y, t)$ is open and non-empty (at least for those $t$ occurring in the last expression above), we have $\lambda(S(x, y, t)) > 0$, such that we can write
$$\begin{aligned}
\int_A H(x, B)\, \varrho(x)\, \mu_0(dx)
&= \int_0^\infty \int_0^{2\pi} \int_H \int_H \frac{\mathbf{1}_{S(x,y,t)}(\theta)\, \mathbf{1}_{A(x,y,t)}(0)\, Q_{S(x,y,t)}(0, B(x, y, t))}{\lambda(S(x, y, t))}\, \mu_0(dy)\, \mu_0(dx)\, d\theta\, dt \\
&= \int_0^\infty \int_0^{2\pi} \mathbb{E}(f_t(\theta, X, Y))\, d\theta\, dt,
\end{aligned}$$
where $X, Y$ are independent $\mu_0$-distributed random variables and
$$f_t(\theta, x, y) = \frac{\mathbf{1}_{S(x,y,t)}(\theta)\, \mathbf{1}_{A(x,y,t)}(0)\, Q_{S(x,y,t)}(0, B(x, y, t))}{\lambda(S(x, y, t))}.$$
We have by Lemma 3.1 that $\mathbb{E}(f_t(\theta, X, Y)) = \mathbb{E}(f_t(\theta, T(\theta)(X, Y)))$ for any $\theta \in [0,2\pi)$ and therefore
$$\begin{aligned}
\int_A H(x, B)\, \varrho(x)\, \mu_0(dx)
&= \int_0^\infty \int_0^{2\pi} \mathbb{E}(f_t(\theta, T(\theta)(X, Y)))\, d\theta\, dt \\
&= \int_0^\infty \int_H \int_H \int_0^{2\pi} f_t(\theta, T(\theta)(x, y))\, d\theta\, \mu_0(dy)\, \mu_0(dx)\, dt.
\end{aligned}$$
For arbitrary $\theta \in [0,2\pi)$ define the function $g_\theta(\alpha) := (\theta - \alpha) \bmod 2\pi$ for $\alpha \in [0,2\pi)$ and note that, by using angle sum identities of trigonometric functions, we have
$$p_{T(\theta)(x,y)}(\alpha) = p_{x,y}(g_\theta(\alpha)), \qquad \forall \alpha \in [0,2\pi).$$
By exploiting the previous equality we have for $C \in \mathcal{B}(H)$ that
$$\alpha \in C(T(\theta)(x, y), t) \iff g_\theta(\alpha) \in C(x, y, t).$$
Thus, $C(T(\theta)(x, y), t) = g_\theta^{-1}(C(x, y, t))$. In particular, we have $\lambda(S(T(\theta)(x, y), t)) = \lambda(S(x, y, t))$ as well as
$$Q_{S(T(\theta)(x,y),t)}\big(0,\, B(T(\theta)(x, y), t)\big) = Q_{g_\theta^{-1}(S(x,y,t))}\big(g_\theta^{-1}(\theta),\, g_\theta^{-1}(B(x, y, t))\big) = Q_{S(x,y,t)}(\theta, B(x, y, t)),$$
where the latter equality follows by Lemma 2.10. This yields
$$f_t(\theta, T(\theta)(x, y)) = \frac{\mathbf{1}_{S(T(\theta)(x,y),t)}(\theta)\, \mathbf{1}_{A(T(\theta)(x,y),t)}(0)\, Q_{S(T(\theta)(x,y),t)}(0, B(T(\theta)(x, y), t))}{\lambda(S(T(\theta)(x, y), t))} = \frac{\mathbf{1}_{S(x,y,t)}(0)\, \mathbf{1}_{A(x,y,t)}(\theta)\, Q_{S(x,y,t)}(\theta, B(x, y, t))}{\lambda(S(x, y, t))}.$$
The previous representation and the fact that $\mathbf{1}_{S(x,y,t)}(0) = \mathbf{1}_{H(t)}(x)$ gives
$$\begin{aligned}
\int_0^{2\pi} f_t(\theta, T(\theta)(x, y))\, d\theta
&= \mathbf{1}_{H(t)}(x) \int_0^{2\pi} \mathbf{1}_{A(x,y,t)}(\theta)\, Q_{S(x,y,t)}(\theta, B(x, y, t))\, \frac{d\theta}{\lambda(S(x, y, t))} \\
&= \mathbf{1}_{H(t)}(x) \int_{A(x,y,t)} Q_{S(x,y,t)}(\theta, B(x, y, t))\, \mathcal{U}_{S(x,y,t)}(d\theta).
\end{aligned}$$
Altogether we obtain
$$\int_A H(x, B)\, \varrho(x)\, \mu_0(dx) = \int_0^\infty \int_{H(t)} \int_H \int_{A(x,y,t)} Q_{S(x,y,t)}(\theta, B(x, y, t))\, \mathcal{U}_{S(x,y,t)}(d\theta)\, \mu_0(dy)\, \mu_0(dx)\, dt.$$
Hence, by (13), arguing backwards by the same arguments as for deriving the previous identity, we obtain the reversibility condition from (14).
4 Summary and outlook

Let us summarize our main findings. We provide a proof of reversibility of ESS, where the underlying state space of the corresponding Markov chain can be a possibly infinite-dimensional Hilbert space. On the way to that we point to a (weak) qualitative regularity condition on the likelihood function ($\varrho$ is assumed to be lower semicontinuous) that guarantees that the appearing while loop terminates and therefore leads to a well-defined transition kernel. Moreover, with (11) we developed a representation of the transition kernel of ESS. Our approach illuminates the hybrid slice sampling structure, cf. [Łatuszyński and Rudolf, 2014], in terms of the reversibility of the shrinkage procedure, see Theorem 2.9, w.r.t. the uniform distribution on a subset of the angle space $[0,2\pi)$.

The formerly developed representations and tools might pave the way for an analysis of the spectral gap of ESS regarding dimension-independent behavior. A strictly positive spectral gap is a desirable property of a Markov chain w.r.t. mixing properties as well as the theoretical assessment of the mean squared error of Markov chain Monte Carlo for the approximation of expectations according to $\mu$; for details see for example [Rudolf, 2012]. Coupling constructions as have been derived for simple slice sampling in [Natarovskii et al., 2021b] might be a promising approach for addressing the verification of the existence of such a positive spectral gap on $H$. Moreover, it also seems advantageous to further explore the structural similarity between Metropolis-Hastings and slice sampling approaches. In particular, approximate (elliptical) slice samplers that rely on evaluations of proxies of the likelihood function are interesting. Here stability investigations as e.g. delivered in [Habeck et al., 2020, Sprungk, 2020] might be used to obtain perturbation-theoretical results for ESS, as presented in [Rudolf and Schweizer, 2018, Medina-Aguayo et al., 2020] for approximate Metropolis-Hastings. Eventually, the theoretical investigation of ESS might be useful to verify the reversibility property also for other slice sampling schemes that rely on the shrinkage procedure.

Acknowledgements

Mareike Hasenpflug gratefully acknowledges support of the DFG within project 432680300 – SFB 1456 subproject B02. The authors thank Michael Habeck, Philip Schär and Björn Sprungk for comments on a preliminary version of the manuscript and fruitful discussions about this topic.
A Technical proofs

A.1 Proof of Lemma 2.5

Proof. By induction over $n \in \mathbb{N}$ we prove
$$\mathbb{E}[\mathbf{1}_A(\Theta) \mid Z_1, \ldots, Z_n] = \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}, \qquad (15)$$
from which the statement follows readily. We start with the base case, i.e., consider $n = 1$. Note that $Z_1 = (\Gamma_1, \Gamma^{\min}_1, \Gamma^{\max}_1)$ is independent of $\Theta$ and that $I(\Gamma^{\min}_1, \Gamma^{\max}_1) = I(\Gamma_1, \Gamma_1) = [0,2\pi)$. Using those properties yields
$$\mathbb{E}(\mathbf{1}_A(\Theta) \mid Z_1) = \mathbb{P}_\Theta(A) = \frac{\mathbb{P}_\Theta(A \cap I(\Gamma_1, \Gamma_1))}{\mathbb{P}_\Theta(I(\Gamma_1, \Gamma_1))} = \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_1, \Gamma^{\max}_1))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_1, \Gamma^{\max}_1))},$$
which verifies (15) for $n = 1$.
Assume that (15) is true for $n$; we are going to prove it for $n + 1$. Observe that, as $\Theta \in I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ and $\Gamma_n \in I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ almost surely, we have the following two implications:
$$\Theta \in I(\Gamma_n, \Gamma^{\max}_n) \implies \Gamma_n \in I(\Gamma^{\min}_n, \Theta) \implies \Gamma^{\min}_{n+1} = \Gamma_n,\ \Gamma^{\max}_{n+1} = \Gamma^{\max}_n, \qquad (16)$$
$$\Theta \in I(\Gamma^{\min}_n, \Gamma_n) \implies \Gamma_n \in I(\Theta, \Gamma^{\max}_n) \implies \Gamma^{\min}_{n+1} = \Gamma^{\min}_n,\ \Gamma^{\max}_{n+1} = \Gamma_n. \qquad (17)$$
Moreover, by the induction assumption, the fact that $\Gamma_n \in I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ almost surely and disintegration, we have
$$\mathbb{E}[\mathbf{1}_{I(\Gamma_n, \Gamma^{\max}_n)}(\Theta) \mid Z_1, \ldots, Z_n] = \frac{\mathbb{P}_\Theta(I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}, \qquad (18)$$
$$\mathbb{E}[\mathbf{1}_{I(\Gamma^{\min}_n, \Gamma_n)}(\Theta) \mid Z_1, \ldots, Z_n] = \frac{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}. \qquad (19)$$
For arbitrary $A \in \mathcal{B}([0,2\pi))$, $C_i \in \mathcal{B}([0,2\pi)^3)$ with $i = 1, \ldots, n+1$ we verify
$$\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big] = \mathbb{E}\Big[\prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}\Big]. \qquad (20)$$
Hence, by the definition of the conditional distribution/expectation and the fact that Cartesian product sets of the above form generate the $\sigma$-algebra $\mathcal{B}([0,2\pi)^{3(n+1)})$, we obtain (15). For proving (20) we observe that
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i) \,\Big|\, Z_1, \ldots, Z_n, \Theta\Big]\Big] \\
&= \mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \mathbb{P}(Z_{n+1} \in C_{n+1} \mid Z_1, \ldots, Z_n, \Theta)\Big] \\
&= \mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \mathbb{P}(Z_{n+1} \in C_{n+1} \mid Z_n, \Theta)\Big] \\
&= \mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, R_\Theta(Z_n, C_{n+1})\Big].
\end{aligned}$$
By Lemma 2.3 we conclude from the previous calculation that
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \mathbf{1}_{A \cap I(\Gamma_n, \Gamma^{\max}_n)}(\Theta)\, \delta_{(\Gamma_n, \Gamma^{\max}_n)} \otimes \mathcal{U}_{I(\Gamma_n, \Gamma^{\max}_n)}(C_{n+1})\Big] \\
&\quad + \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \mathbf{1}_{A \cap I(\Gamma^{\min}_n, \Gamma_n)}(\Theta)\, \delta_{(\Gamma^{\min}_n, \Gamma_n)} \otimes \mathcal{U}_{I(\Gamma^{\min}_n, \Gamma_n)}(C_{n+1})\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma_n, \Gamma^{\max}_n)} \otimes \mathcal{U}_{I(\Gamma_n, \Gamma^{\max}_n)}(C_{n+1})\, \mathbb{E}[\mathbf{1}_{A \cap I(\Gamma_n, \Gamma^{\max}_n)}(\Theta) \mid Z_1, \ldots, Z_n]\Big] \\
&\quad + \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma^{\min}_n, \Gamma_n)} \otimes \mathcal{U}_{I(\Gamma^{\min}_n, \Gamma_n)}(C_{n+1})\, \mathbb{E}[\mathbf{1}_{A \cap I(\Gamma^{\min}_n, \Gamma_n)}(\Theta) \mid Z_1, \ldots, Z_n]\Big].
\end{aligned}$$
For abbreviating the notation define
$$H_{\max}(Z_1, \ldots, Z_n) := \prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma_n, \Gamma^{\max}_n)} \otimes \mathcal{U}_{I(\Gamma_n, \Gamma^{\max}_n)}(C_{n+1}),$$
$$H_{\min}(Z_1, \ldots, Z_n) := \prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma^{\min}_n, \Gamma_n)} \otimes \mathcal{U}_{I(\Gamma^{\min}_n, \Gamma_n)}(C_{n+1}).$$
Using the induction assumption and the fact that $\Gamma_n \in I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ almost surely implies $I(\Gamma^{\min}_n, \Gamma_n) \subset I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ and $I(\Gamma_n, \Gamma^{\max}_n) \subset I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ almost surely, we have
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[H_{\max}(Z_1, \ldots, Z_n)\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}\Big] + \mathbb{E}\Big[H_{\min}(Z_1, \ldots, Z_n)\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}\Big] \\
&= \mathbb{E}\Big[H_{\max}(Z_1, \ldots, Z_n)\, \frac{\mathbb{P}_\Theta(I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma_n, \Gamma^{\max}_n))}\Big] \\
&\quad + \mathbb{E}\Big[H_{\min}(Z_1, \ldots, Z_n)\, \frac{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma^{\max}_n))}\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma_n))}\Big].
\end{aligned}$$
Using (18) as well as (19) we get
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[H_{\max}(Z_1, \ldots, Z_n)\, \mathbb{E}[\mathbf{1}_{I(\Gamma_n, \Gamma^{\max}_n)}(\Theta) \mid Z_1, \ldots, Z_n] \cdot \frac{\mathbb{P}_\Theta(A \cap I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma_n, \Gamma^{\max}_n))}\Big] \\
&\quad + \mathbb{E}\Big[H_{\min}(Z_1, \ldots, Z_n)\, \mathbb{E}[\mathbf{1}_{I(\Gamma^{\min}_n, \Gamma_n)}(\Theta) \mid Z_1, \ldots, Z_n] \cdot \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma_n))}\Big] \\
&= \mathbb{E}\Big[\mathbb{E}\Big[H_{\max}(Z_1, \ldots, Z_n)\, \mathbf{1}_{I(\Gamma_n, \Gamma^{\max}_n)}(\Theta) \cdot \frac{\mathbb{P}_\Theta(A \cap I(\Gamma_n, \Gamma^{\max}_n))}{\mathbb{P}_\Theta(I(\Gamma_n, \Gamma^{\max}_n))} \,\Big|\, Z_1, \ldots, Z_n\Big]\Big] \\
&\quad + \mathbb{E}\Big[\mathbb{E}\Big[H_{\min}(Z_1, \ldots, Z_n)\, \mathbf{1}_{I(\Gamma^{\min}_n, \Gamma_n)}(\Theta) \cdot \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_n, \Gamma_n))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_n, \Gamma_n))} \,\Big|\, Z_1, \ldots, Z_n\Big]\Big].
\end{aligned}$$
Denoting
$$T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A) := \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}$$
and exploiting (16) as well as (17) gives
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})} \otimes \mathcal{U}_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(C_{n+1})\, \mathbf{1}_{I(\Gamma_n, \Gamma^{\max}_n)}(\Theta)\, T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A)\Big] \\
&\quad + \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})} \otimes \mathcal{U}_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(C_{n+1})\, \mathbf{1}_{I(\Gamma^{\min}_n, \Gamma_n)}(\Theta)\, T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A)\Big].
\end{aligned}$$
By the fact that $\Theta, \Gamma_n \in I(\Gamma^{\min}_n, \Gamma^{\max}_n)$ almost surely, we have $\mathbf{1}_{I(\Gamma_n, \Gamma^{\max}_n)}(\Theta) + \mathbf{1}_{I(\Gamma^{\min}_n, \Gamma_n)}(\Theta) = 1$ almost surely, such that
$$\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big] = \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \delta_{(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})} \otimes \mathcal{U}_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(C_{n+1})\, T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A)\Big].$$
By virtue of (3) we have
$$\mathbb{E}[\mathbf{1}_{C_{n+1}}(Z_{n+1}) \mid \Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}] = \delta_{(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})} \otimes \mathcal{U}_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(C_{n+1}) = \mathbb{E}[\mathbf{1}_{C_{n+1}}(Z_{n+1}) \mid Z_1, \ldots, Z_n, \Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}],$$
such that
$$\begin{aligned}
\mathbb{E}\Big[\mathbf{1}_A(\Theta) \prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\Big]
&= \mathbb{E}\Big[\prod_{i=1}^{n} \mathbf{1}_{C_i}(Z_i)\, \mathbb{E}[\mathbf{1}_{C_{n+1}}(Z_{n+1}) \mid Z_1, \ldots, Z_n, \Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}]\, T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A)\Big] \\
&= \mathbb{E}\Big[\mathbb{E}\Big[\prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\, T_{I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})}(A) \,\Big|\, Z_1, \ldots, Z_n, \Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}\Big]\Big] \\
&= \mathbb{E}\Big[\prod_{i=1}^{n+1} \mathbf{1}_{C_i}(Z_i)\, \frac{\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}{\mathbb{P}_\Theta(I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))}\Big].
\end{aligned}$$
Finally, observing that $\mathbb{P}_\Theta(A \cap I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1})) / \mathbb{P}_\Theta(I(\Gamma^{\min}_{n+1}, \Gamma^{\max}_{n+1}))$ is measurable w.r.t. $\sigma(Z_1, \ldots, Z_{n+1})$, we have proven (15) and the total statement is verified.
A.2 Proof of Lemma 2.10

Proof. Interpreting $\mathrm{shrink}(\theta, S)$, specified through Algorithm 2.2, as a random variable leads to the representation
$$Q_S(\theta, B) = \mathbb{P}(\mathrm{shrink}(\theta, S) \in B),$$
where $\theta \in S$ and $B \in \mathcal{B}(S)$. For sets $F, G \in \mathcal{B}([0,2\pi))$ let $F \mathbin{\Delta} G$ be the symmetric set difference, i.e.,
$$F \mathbin{\Delta} G := (F \setminus G) \cup (G \setminus F).$$
Note that $g_\theta = g_\theta^{-1}$. Performing a case distinction, one obtains
$$\lambda\big(g_\theta(I(\alpha, \beta)) \mathbin{\Delta} I(g_\theta(\beta), g_\theta(\alpha))\big) = 0, \qquad \forall \alpha, \beta \in [0,2\pi).$$
Moreover, as the Lebesgue measure is invariant under $g_\theta$, we also have
$$\mathcal{U}_{g_\theta(I(\alpha,\beta))}\big(g_\theta^{-1}(A)\big) = \mathcal{U}_{I(\alpha,\beta)}(A), \qquad \forall \alpha, \beta \in [0,2\pi).$$
This yields
$$\mathrm{shrink}\big(g_\theta^{-1}(\theta), g_\theta^{-1}(S)\big) = g_\theta^{-1}\big(\mathrm{shrink}(\theta, S)\big) \quad \text{almost surely.}$$
Therefore
$$\begin{aligned}
Q_{g_\theta^{-1}(S)}\big(g_\theta^{-1}(\theta), g_\theta^{-1}(B)\big)
&= \mathbb{P}\big(\mathrm{shrink}(g_\theta^{-1}(\theta), g_\theta^{-1}(S)) \in g_\theta^{-1}(B)\big) \\
&= \mathbb{P}\big(g_\theta^{-1}(\mathrm{shrink}(\theta, S)) \in g_\theta^{-1}(B)\big) \\
&= \mathbb{P}(\mathrm{shrink}(\theta, S) \in B) = Q_S(\theta, B).
\end{aligned}$$
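The arc-reversal identity $\lambda(g_\theta(I(\alpha,\beta)) \mathbin{\Delta} I(g_\theta(\beta), g_\theta(\alpha))) = 0$ behind the case distinction can be checked numerically. The sketch below assumes the circular-arc convention $x \in I(\alpha,\beta) \iff (x - \alpha) \bmod 2\pi \le (\beta - \alpha) \bmod 2\pi$ (a plausible reading of the interval notation used here, not taken verbatim from the paper) and verifies the identity pointwise on a grid away from the arc endpoints:

```python
import math

TWO_PI = 2.0 * math.pi

def in_arc(x, alpha, beta):
    """Membership in the counterclockwise arc I(alpha, beta); the
    convention (x - alpha) mod 2*pi <= (beta - alpha) mod 2*pi is an
    assumption for illustration."""
    return (x - alpha) % TWO_PI <= (beta - alpha) % TWO_PI

def g(theta, x):
    """The reflection g_theta(x) = (theta - x) mod 2*pi."""
    return (theta - x) % TWO_PI

# x in I(alpha, beta) agrees with g_theta(x) in I(g_theta(beta), g_theta(alpha))
# for grid points away from the arc endpoints, reflecting that the two sets
# differ at most by a Lebesgue null set.
theta, alpha, beta = 2.1, 0.4, 3.0
for k in range(1, 1000):
    x = k * TWO_PI / 1000.0
    if abs(x - alpha) > 1e-9 and abs(x - beta) > 1e-9:
        assert in_arc(x, alpha, beta) == in_arc(g(theta, x), g(theta, beta), g(theta, alpha))
```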
References

[Cotter et al., 2013] Cotter, S. L., Roberts, G. O., Stuart, A. M., and White, D. (2013). MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science, pages 424–446.

[Da Prato and Zabczyk, 2002] Da Prato, G. and Zabczyk, J. (2002). Second order partial differential equations in Hilbert spaces, volume 293. Cambridge University Press.

[Habeck et al., 2020] Habeck, M., Rudolf, D., and Sprungk, B. (2020). Stability of doubly-intractable distributions. Electronic Communications in Probability, 25:1–13.

[Łatuszyński and Rudolf, 2014] Łatuszyński, K. and Rudolf, D. (2014). Convergence of hybrid slice sampling via spectral gap. arXiv:1409.2709.

[Lie et al., 2021] Lie, H. C., Rudolf, D., Sprungk, B., and Sullivan, T. J. (2021). Dimension-independent Markov chain Monte Carlo on the sphere. arXiv:2112.12185.

[Medina-Aguayo et al., 2020] Medina-Aguayo, F., Rudolf, D., and Schweizer, N. (2020). Perturbation bounds for Monte Carlo within Metropolis via restricted approximations. Stochastic Processes and their Applications, 130(4):2200–2227.

[Murray et al., 2010] Murray, I., Adams, R. P., and MacKay, D. J. C. (2010). Elliptical slice sampling. In The Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, volume 9 of JMLR: W&CP, pages 541–548.

[Murray and Graham, 2016] Murray, I. and Graham, M. (2016). Pseudo-marginal slice sampling. In The Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of JMLR: W&CP, pages 911–919.

[Natarovskii et al., 2021a] Natarovskii, V., Rudolf, D., and Sprungk, B. (2021a). Geometric convergence of elliptical slice sampling. In Meila, M. and Zhang, T., editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 7969–7978. PMLR.

[Natarovskii et al., 2021b] Natarovskii, V., Rudolf, D., and Sprungk, B. (2021b). Quantitative spectral gap estimate and Wasserstein contraction of simple slice sampling. The Annals of Applied Probability, 31(2):806–825.

[Neal, 1999] Neal, R. M. (1999). Regression and classification using Gaussian process priors. J. M. Bernardo et al., editors, Bayesian Statistics, 6:475–501.

[Neal, 2003] Neal, R. M. (2003).
| 1901 |
+
Slice sampling.
|
| 1902 |
+
The Annals of Statistics,
|
| 1903 |
+
31(3):705–767.
|
| 1904 |
+
[Nishihara et al., 2014] Nishihara, R., Murray, I., and Adams, R. P. (2014). Par-
|
| 1905 |
+
allel MCMC with generalized elliptical slice sampling. The Journal of Machine
|
| 1906 |
+
Learning Research, 15(1):2087–2112.
|
| 1907 |
+
[Rudolf, 2012] Rudolf, D. (2012). Explicit error bounds for Markov chain Monte
|
| 1908 |
+
Carlo. Dissertationes Math., 485:93 pp.
|
| 1909 |
+
[Rudolf and Schweizer, 2018] Rudolf, D. and Schweizer, N. (2018). Perturbation
|
| 1910 |
+
theory for Markov chains via Wasserstein distance.
|
| 1911 |
+
Bernoulli, 24(4A):2610–
|
| 1912 |
+
2639.
|
| 1913 |
+
[Rudolf and Sprungk, 2018] Rudolf, D. and Sprungk, B. (2018). On a generaliza-
|
| 1914 |
+
tion of the preconditioned Crank–Nicolson Metropolis algorithm. Foundations
|
| 1915 |
+
of Computational Mathematics, 18(2):309–343.
|
| 1916 |
+
[Rudolf and Sprungk, 2022] Rudolf, D. and Sprungk, B. (2022).
|
| 1917 |
+
Robust ran-
|
| 1918 |
+
dom walk-like Metropolis-Hastings algorithms for concentrating posteriors.
|
| 1919 |
+
arXiv:2202.12127.
|
| 1920 |
+
[Sprungk, 2020] Sprungk, B. (2020). On the local Lipschitz stability of Bayesian
|
| 1921 |
+
inverse problems. Inverse Problems, 36(5):055015.
|
| 1922 |
+
22
|
| 1923 |
+
|
FdE0T4oBgHgl3EQfhAGj/content/tmp_files/load_file.txt ADDED
    The diff for this file is too large to render. See raw diff
J9E1T4oBgHgl3EQfGQOR/content/2301.02912v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98eb72e56005d31e6e010eb4513f45987bae131f12f240beb9db17ed11ebd509
+size 175924
J9E1T4oBgHgl3EQfGQOR/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b761e40d790ce7eeac9b2554a202d1a9b0d9c3036e94676c362df0c50e08b696
+size 1900589
J9E1T4oBgHgl3EQfGQOR/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cfe3446870fbb8412b27d259ad3981d86a3807cf300a1f974943d43e61d247c
+size 63166
KdAyT4oBgHgl3EQfTvcg/content/2301.00110v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bf249efc9e5965b8ea44405cf6d1ab52d7ed4885f0eeab0f95f3f248c37412f
+size 4078355
KdAyT4oBgHgl3EQfTvcg/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c1789ca666852a583975903bfd3acb0c3d011b77557386e0f9f6a468a24b995
+size 4653101
KdAyT4oBgHgl3EQfTvcg/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0bddf5c5fb05af3b34cf827cd2616fd0d7e010addac03e5daa805915f413e1a
+size 154052
KdE3T4oBgHgl3EQfvQvs/content/tmp_files/2301.04693v1.pdf.txt ADDED
@@ -0,0 +1,1150 @@
MNRAS 000, 1–8 (2022)    Preprint 13 January 2023    Compiled using MNRAS LATEX style file v3.0

Why the observed spin evolution of older-than-solar like stars might not require a dynamo mode change

Ketevan Kotorashvili1,2★, Eric G. Blackman1,2†, James E. Owen3‡
1 Department of Physics and Astronomy, University of Rochester, Rochester NY 14627
2 Laboratory for Laser Energetics, University of Rochester, Rochester, NY 14623, USA
3 Astrophysics Group, Department of Physics, Imperial College London, Prince Consort Rd, London SW7 2AZ, UK

13 January 2023

ABSTRACT
The spin evolution of main sequence stars has long been of interest for basic stellar evolution, stellar aging, stellar activity, and the consequent influence on companion planets. Observations of older-than-solar late-type main-sequence stars have been interpreted to imply that a change from a dipole-dominated magnetic field to one with more prominent higher multipoles might be necessary to account for the data. The spin-down models that lead to this inference are essentially tuned to the Sun. Here we take a different approach which considers individual stars as fixed points rather than just the Sun. We use a time-dependent theoretical model to solve for the spin evolution of low-mass main-sequence stars that includes a Parker-type wind and a time-evolving magnetic field coupled to the spin. Because the wind is exponentially sensitive to the stellar mass over radius and the coronal base temperature, the use of each observed star as a separate fixed point is more appropriate and, in turn, yields a set of solution curves that forms a solution envelope rather than a single line. This envelope of solution curves, unlike a single-line fit, is consistent with the data and does not unambiguously require a modal transition in the magnetic field to explain it. Also, the theoretical envelope somewhat better tracks the older star data when thermal conduction is a more dominant player in the corona.

Key words: stars: late-type – stars: low-mass – stars: solar-type – stars: mass-loss.
1 INTRODUCTION

Understanding the coupled spin–activity evolution of stars is of interest for the basic physics of rotating stellar evolution and stellar activity, for determining stellar ages via gyrochronology, and for quantifying the influence of stellar activity on companion planetary atmospheres. Predicting the spin evolution of main sequence stars and the associated activity ultimately requires an accurate model for the coupled evolution of their magnetic fields, spin, activity and mass loss.

Until recently, the standard period–age evolution for main sequence solar-like FGK stars has been divided into two regimes, saturated and unsaturated. The empirically determined transition between them occurs at R̃o ∼ 0.13, where the Rossby number R̃o is defined as R̃o = P/τ_c, with P being the star's rotation period and τ_c the stellar-model-inferred convective turnover time (Wright et al. 2011; Reiners et al. 2014). Very young, X-ray luminous stars are in the saturated regime, where their X-ray to bolometric luminosity ratio is nearly independent of rotation rate. Older stars are in the unsaturated regime, for which the period–age relation has traditionally been characterized by the empirical Skumanich law (Skumanich 1972). Recently, however, for a sub-population of stars older than the Sun, the spin-down rate has been purported to be slower than that of the Skumanich law (Skumanich 1972) and slower than that predicted by some standard spin-down models with a fixed magnetic field geometry (Matt et al. 2012; Reiners & Mohanty 2012; van Saders & Pinsonneault 2013; Gallet & Bouvier 2013; Matt et al. 2015; van Saders et al. 2016). This has led to the suggestion that dynamos in these stars may be incurring a state transition from a dipole-dominated field to one dominated by higher multipoles that less effectively remove angular momentum (van Saders et al. 2016). Such a transition would then warrant a theoretical explanation.

The importance of this potential transition warrants further investigation to assess whether it is unambiguous. In particular, how precise are the predictions of spin evolution from current theoretical models that invoke no dynamo transition, and how are these models used to obtain a predicted envelope of spin-period evolution bounds for the evolution of a population of stars similar to, but not identical to, the Sun?

To address this, we study the time evolution of the rotation period for older-than-solar late-type stars using an example theoretical model for the coupled time evolution of the X-ray luminosity, magnetic field strength, mass loss and rotation. Importantly, the observed data for each star provide the boundary conditions needed to solve the system of equations for that specific star. We do not assume that each star is an identical twin of the Sun. This distinction proves to be important in limiting the precision of what can be inferred and the robustness of whether the observations definitively reveal the need for a dynamo transition in each star.

In Section 2, we summarize the minimalist theoretical model that

★ kkotoras@ur.rochester.edu
† eric.blackman@rochester.edu
‡ james.owen@imperial.ac.uk

© 2022 The Authors
arXiv:2301.04693v1 [astro-ph.SR] 11 Jan 2023
+
couples the time evolution of X-ray luminosity, rotation, magnetic
|
| 82 |
+
field and mass loss (Blackman & Owen 2016). In Subsection 2.3
|
| 83 |
+
we provide expressions for X-ray luminosity and mass loss as a
|
| 84 |
+
function of the X-ray coronal temperature for cases when thermal
|
| 85 |
+
conduction is dominant and when thermal conduction can be ignored.
|
| 86 |
+
Thermal conduction can reduce the hot gas supply to the wind,
|
| 87 |
+
lowering its ability to spin down the star, but also keeps the magnetic
|
| 88 |
+
field stronger longer which would exacerbate spin down. The net
|
| 89 |
+
effect of this competition has yet to be quantified. In Section 3 we
|
| 90 |
+
obtain solutions for the time evolution of the rotation period of each
|
| 91 |
+
individual star in a sample of old stars with observed spins and
|
| 92 |
+
ages, using their observed stellar properties as fixed point boundary
|
| 93 |
+
conditions for the solutions. We find that even the small variations
|
| 94 |
+
in observed properties (e.g. magnetic field, mass, radius) between
|
| 95 |
+
solar-like stars, makes fitting an evolution model to a single star like
|
| 96 |
+
the Sun not sufficiently representative of the population to identify
|
| 97 |
+
that the population as a whole is incurring a dynamo transition. We
|
| 98 |
+
conclude in Section 4 and address some broader implications for
|
| 99 |
+
comparing theory and observation.
|
| 100 |
+
2 PHYSICAL MODEL AND EQUATIONS
|
| 101 |
+
Main sequence low-mass stars spin down as a consequence of their
|
| 102 |
+
magnetized stellar winds (Parker 1958; Schatzman 1962; Weber &
|
| 103 |
+
Davis 1967; Mestel 1968). F, G , K and M stars with masses in the
|
| 104 |
+
range 0.35𝑀⊙ < 𝑀 < 1.5𝑀⊙ have a convective zone surrounded by
|
| 105 |
+
a radiative zone and are in that respect potentially most solar-like with
|
| 106 |
+
respect to their dynamos (Parker 1955; Steenbeck & Krause 1969).
|
| 107 |
+
The magnetic field anchors the stellar wind to the surface of the star,
|
| 108 |
+
forcing it to co-rotate up to the Alfvén radius, so angular momentum is
|
| 109 |
+
lost from the star. As a result, the reduced angular momentum means
|
| 110 |
+
reduced free energy available for the dynamo, and the magnetic field
|
| 111 |
+
and X-ray luminosity also decrease. Therefore the strength of the
|
| 112 |
+
magnetic field at the surface, the rate of angular momentum loss,
|
| 113 |
+
X-ray luminosity and the rotation period are fundamentally linked
|
| 114 |
+
(Kawaler 1988).
|
| 115 |
+
Here we use and adapt a minimalist holistic model for this coupled
|
| 116 |
+
time evolution of X-ray luminosity, mass loss, rotation and magnetic
|
| 117 |
+
field strength (Blackman & Owen 2016) to explain the flattening in
|
| 118 |
+
the observed period–age relation for older stars than the sun. In this
|
| 119 |
+
model, some fraction of dynamo-generated magnetic field lines are
|
| 120 |
+
considered open, allowing stellar wind to remove angular momen-
|
| 121 |
+
tum, while some fraction of field lines are considered closed, sourcing
|
| 122 |
+
the thermal X-ray emission. The magnetic field expression is based
|
| 123 |
+
on a dynamo saturation model in a regime where the total saturated
|
| 124 |
+
field strength depends on the rotation rate The dynamo-produced
|
| 125 |
+
magnetic field is then mutually evolving with the spin evolution of
|
| 126 |
+
low-mass main-sequence stars in this slow rotator regime.
|
| 127 |
+
In this section, we briefly summarize the minimalist theoretical
|
| 128 |
+
model that couples the time evolution of the aforementioned stellar
|
| 129 |
+
properties, discuss the main ingredients of the model, and point
|
| 130 |
+
out a few numerical coefficient corrections to previous work. We
|
| 131 |
+
also apply the formalism for stars other than the Sun and use the
|
| 132 |
+
properties of each individual star for which we have observed data
|
| 133 |
+
as a boundary condition for respective solutions. The importance of
|
| 134 |
+
this as it pertains to making the theoretical prediction of spin-down
|
| 135 |
+
with age an "envelope" rather than a "single line" will be exemplified
|
| 136 |
+
and emphasized later in the paper. We provide only the streamlined
|
| 137 |
+
set of resulting equations here, and the detailed derivations of the
|
| 138 |
+
original model equations on which our revised derivations are based
|
| 139 |
+
can be found in Blackman & Owen (2016).
|
| 140 |
+
2.1 Saturated magnetic field and X-ray luminosity

The dynamo-produced magnetic fields are estimated (Blackman & Thomas 2015; Blackman & Owen 2016) by: (1) using a generalized correlation time for dynamos that equals the convection time (τ_c) for slow rotators and becomes proportional to the rotation time for fast rotators, and (2) using a dynamo saturation model based on the combination of magnetic helicity evolution and loss of magnetic field by magnetic buoyancy (Blackman & Field 2002; Blackman & Brandenburg 2003). In the slow-rotator regime of interest, the field saturation depends on the rotation rate, but the exact field saturation model is less important than the fact that there remains a spin dependence of the field strength and that the saturation time (of order the cycle period) is short compared to the Gyr time scales of secular evolution we are interested in. This results in the expression for the normalized surface radial magnetic field:

b_r ≡ B_{r∗}(t)/B_{r,∗n} = g_L(t) (s/s_∗)^{1/6} √[(1 + s_∗ R̃o_∗)/(1 + s R̃o)],    (1)

where B_{r,∗n} is the present-day radial magnetic field value for each star (here n indicates "now") and g_L(t) = [1/(1.4 − 0.4 t)]^{(λ−1)/4}. This factor approximates the fusion-driven increase of the bolometric luminosity with time t in units of solar age from solar models (e.g. Gough 1981), and deviates from unity only if L_bol evolves. We crudely apply the same approximation to other solar-like stars scaled in terms of their age. More detailed empirical fits for each stellar model could be inferred, but this is beyond the level of precision required for present purposes. Here s is a shear parameter defined by |Ω_0 − Ω(r_c, θ_s)| = Ω_0/s, where Ω is the surface rotational speed, θ_s is a fiducial polar angle, r_c is a fiducial radius in the convective zone, and λ is a parameter representing the power-law dependence of the magnetic starspot area covering fraction Θ on X-ray luminosity L_X, namely Θ ∝ L_X^λ.

In our case, we take λ = 1/3, consistent with the range inferred from observations of starspot covering fractions (Nichols-Fleming & Blackman 2020), and we fix the shear parameter at s = 8.3, because the transition from the saturated to the unsaturated regime of X-ray luminosity was best matched theoretically with this value (Blackman & Thomas 2015; Blackman & Owen 2016). In practice, this has to be determined with detailed calculations, but the specific value does not affect the overall message of the present paper, as our focus is on the unsaturated regime where the shear term contribution to the correlation time is small.

The estimated X-ray luminosity derived in Blackman & Thomas (2015) is the product of the magnetic energy flux, averaged over the change over a stellar cycle for sun-like stars (Peres et al. 2000), times the surface area through which the magnetic field penetrates the photosphere. The result is

L_x = K L_mag ≃ K (2/3) [B_φ²/(8π)]² Θ r_c² / (ρ v),    (2)

where ρ is the density, v is the turbulent convective velocity, and K defines how much magnetic energy goes into X-ray luminosity. In Blackman & Owen (2016), K was approximated as 1/2 based on the coronal equilibrium solution when conduction is unimportant. We find this is also an acceptable approximation when conduction dominates, so we adopt it. This leads to the relation between X-ray luminosity and radial magnetic field (Blackman & Owen 2016):

l_x ≡ [1/(1.4 − 0.4 t)] (s/s_∗)^{2/[3(1−λ)]} [(1 + s_∗ R̃o_∗)/(1 + s R̃o)]^{2/(1−λ)} = b_r^{4/(1−λ)},    (3)

where R̃o_∗ is the Rossby number for each individual star. For the Sun, R̃o ∼ 2 (Blackman & Thomas 2015).
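At t = 1 (the star's present age, in the model's time units) both luminosity factors equal unity, so the identity l_x = b_r^{4/(1−λ)} linking equations (1) and (3) can be checked directly. A minimal sketch; s = 8.3 and λ = 1/3 are the values fixed in the text, while the Rossby numbers below are purely hypothetical:

```python
import math

s = s_star = 8.3
lam = 1.0 / 3.0
Ro, Ro_star = 3.5, 2.0   # hypothetical Rossby numbers (text quotes Ro ~ 2 for the Sun)
t = 1.0                  # present age, where g_L(t) = 1/(1.4 - 0.4 t) = 1

g_L = (1.0 / (1.4 - 0.4 * t)) ** ((lam - 1.0) / 4.0)
ratio = (1.0 + s_star * Ro_star) / (1.0 + s * Ro)

b_r = g_L * (s / s_star) ** (1.0 / 6.0) * math.sqrt(ratio)        # eq (1)
l_x = (1.0 / (1.4 - 0.4 * t)) \
      * (s / s_star) ** (2.0 / (3.0 * (1.0 - lam))) \
      * ratio ** (2.0 / (1.0 - lam))                              # eq (3)

# with lam = 1/3 the exponent 4/(1 - lam) is 6
assert abs(l_x - b_r ** (4.0 / (1.0 - lam))) < 1e-12
```

Since R̃o grows as the star spins down, the ratio inside the square root falls below unity for R̃o > R̃o_∗, i.e. the normalized field weakens with age, as expected.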
2.2 Angular velocity evolution

Blackman & Owen (2016) considered angular momentum loss by the stellar wind in the equatorial plane and used the Weber & Davis (1967) model to find the surface toroidal magnetic field and the equation for the angular velocity. Following the derivations in Weber & Davis (1967), Lamers & Cassinelli (1999) and Blackman & Owen (2016), for the Alfvén radius we have

r_A/r_∗ = [1 − r_∗ B_{r∗} B_{φ∗} / (Ṁ Ω_∗)]^{1/2} = [1 + r_∗ |B_{r∗}| |B_{φ∗}| / (Ṁ Ω_∗)]^{1/2},    (4)

where, compared to the same equation in Blackman & Owen (2016), we emphasize that there is a positive sign when absolute values are used because of the opposite signs of B_{φ∗} and B_{r∗}.

Separate equations for r_A/r_∗ and the toroidal magnetic field are:

r_A/r_∗ = [b_{r∗} / (ṁ^{1/2} ũ_A^{1/2})] [r_∗ B_{r,∗n} / (Ṁ_{∗n}^{1/2} u_{A,∗n}^{1/2})],    (5)

b_{φ∗} ≡ B_{φ∗}(t)/B_{φ,∗n} = − [ṁ ω_∗ / b_{r∗}] [Ṁ_{∗n} Ω_{∗n} / (r_∗ B_{φ,∗n} B_{r,∗n})] (r_A²/r_∗² − 1),    (6)

where B_{φ,∗n} is the present-day toroidal magnetic field value for each star; ṁ is the normalized mass loss rate derived later (see equations (17) and (18) for regime I and regime II respectively); and ω_∗(t) = Ω(t)/Ω_{∗n}, where Ω_{∗n} represents the present-day value of the angular velocity for each individual star. For the Sun, Ω_{∗n} = Ω⊙ = 2.97 × 10⁻⁶ s⁻¹, B_{φ,∗n} = B_{φ⊙} = 1.56 × 10⁻² G, and B_{r,∗n} = B_{r⊙} = 2 G. For other stars, the corresponding values in Table 1 will be used. In equation (5), ũ_A(t) is the normalized Alfvén speed, given by

ũ_A(t) ≡ u_A/u_{A,∗n} = √{ (T_∗/T_{∗n}) W_k[−D(r_A)] / W_k[−D(r_{A,∗n})] },    (7)

where T_∗ is the coronal X-ray temperature and T_{∗n} is the coronal X-ray temperature at the present time (now) for each specific star. W_k[−D(r_A)] is the Lambert W function for Parker wind solutions, with k = 0 for r ≤ r_s and k = −1 for r ≥ r_s (Cranmer 2004), and

D(r_A) = (r_A/r_s)^{−4} exp[4(1 − r_s/r_A) − 1].    (8)
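A quick numerical sketch of the Parker-wind factor in equation (8) (plain Python, no SciPy): D attains its maximum e⁻¹ exactly at the sonic radius, so −D(r) stays in [−1/e, 0), the interval on which both Lambert W branches (k = 0 inside r_s, k = −1 outside) are defined:

```python
import math

def D(r, rs=1.0):
    # Parker-wind factor of eq (8): D(r) = (r/rs)^(-4) * exp(4*(1 - rs/r) - 1)
    return (r / rs) ** -4.0 * math.exp(4.0 * (1.0 - rs / r) - 1.0)

# maximum e^{-1} at the sonic radius r = rs (d ln D / dr = (4/r)(rs/r - 1) = 0 there)
assert abs(D(1.0) - math.exp(-1)) < 1e-15

# everywhere else 0 < D(r) <= 1/e, so -D(r) lies in the Lambert W domain [-1/e, 0)
for r in (0.5, 0.9, 1.1, 2.0, 10.0):
    assert 0.0 < D(r) <= math.exp(-1)
```

The branch switch at r = r_s is exactly the transition from the subsonic to the supersonic part of the wind solution.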
The sonic radius is given by

r_s/r_∗ = G M / (2 c_s² r_∗),    (9)

with isothermal sound speed c_s ∝ T^{1/2}.

The evolution of the stellar angular velocity in dimensionless form is given by

dω_∗/dτ ≡ −ω_∗ [q b_r² / (ṁ ũ_A)] [B_{r,∗n}² τ_{∗n} / (M_{∗n} u_{A,∗n})],    (10)

where τ⊙ is the present-day solar age and q is the inertial parameter, which depends on internal angular momentum transport and defines what fraction of the star contributes to the spin-down (we have also corrected a typo on the right-hand side of equation (41) of Blackman & Owen (2016), which had a residual factor of Ω⊙). We use q = 1 for all stars, which reflects the conventional assumption that the field is coupled to the moment of inertia of the full stellar mass. This could in principle be violated if the field were not anchored sufficiently deeply and angular momentum transport within the star were inefficient.
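Equation (10) closes the system for ω_∗(τ). If the net ω-dependence of its right-hand side is cubic — the classic Kawaler-style closure, which we assume here only for illustration since the full model's dependence also runs through ṁ, b_r and ũ_A — one recovers the Skumanich-like decay mentioned in the Introduction. A schematic forward-Euler integration of that limiting form:

```python
import math

# Assumed limiting form d(omega)/d(tau) = -k * omega**3, NOT the full eq (10);
# k = 0.5 is an arbitrary illustrative rate constant.
k, omega = 0.5, 1.0
tau, dtau = 0.0, 1e-4
for _ in range(100_000):          # integrate out to tau = 10
    omega -= k * omega ** 3 * dtau
    tau += dtau

# closed form: omega(tau) = omega0 / sqrt(1 + 2*k*omega0**2*tau),
# i.e. omega -> tau^{-1/2} at late times (Skumanich: P grows as t^{1/2})
exact = 1.0 / math.sqrt(1.0 + 2.0 * k * tau)
assert abs(omega - exact) / exact < 1e-3
```

In the full model the effective exponent is not fixed at 3, which is precisely why the solution curves for different stars fan out into an envelope rather than collapsing onto one Skumanich line.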
2.3 Coronal Equilibrium: relation between L𝑥, �𝑀 and 𝑇0
|
| 353 |
+
The above equations show that X-ray luminosity, dynamo-produced
|
| 354 |
+
magnetic field and angular velocity are all coupled. To determine how
|
| 355 |
+
all of these quantities are connected to the mass loss rate, we follow
|
| 356 |
+
the procedure of Blackman & Owen (2016) but since that paper
|
| 357 |
+
focused on younger-than-solar stars, here we study both younger and
|
| 358 |
+
older stars and generalize the equations accordingly.
|
| 359 |
+
Magnetic fields are the source of input energy to the corona in our
|
| 360 |
+
model, which is then distributed into either winds, x-rays, or lost to
|
| 361 |
+
the photosphere by thermal conduction. Equilibrium is established
|
| 362 |
+
between the sinks of mass loss, X-ray radiation and conduction over
|
| 363 |
+
time scales short compared to spin-down time scales and can be used
|
| 364 |
+
to determine the dominant sinks of the magnetic energy flux.
|
According to Hearn (1975), for a given coronal base pressure there is an average coronal temperature that minimizes the energy loss. The minimum coronal flux condition is given by
$$\frac{\partial}{\partial T}\left(F_{W1} + F_c + F_x\right) = \frac{\partial}{\partial T}F_B = 0, \qquad (11)$$
where $F_B$ is the flux of magnetic energy sourced into the coronal base, and $F_{W1}$, $F_c$, $F_x$ are, respectively, the wind flux, the conductive loss, and the radiative (X-ray) loss from the one-density-scale-height region above the chromosphere.
The coronal energy loss to the stellar wind is given by
$$F_{W1} = 3.1\times10^{6}\, p_0\, \tilde{T}_*^{1/2}\, e^{3.9\,\frac{m_*}{r_*}\left(1-\frac{1}{\tilde{T}_*}\right)}\ \frac{\rm erg}{\rm cm^2\,s}, \qquad (12)$$
where we used the isothermal Parker wind solution (Parker 1958), along with the assumption that the large-scale magnetic field is approximately radial out to the Alfvén radius ($r_A$). Here $\tilde{T}_* = T_*/T'_*$ is a dimensionless temperature with a different normalization parameter $T'_*$ for each star; $m_* = M/M_{*n}$ and $r_* = R_0/R_{*n}$, where $M_{*n}$ and $R_{*n}$ represent a specific individual stellar mass and radius. Normalizing stellar parameters to individual stars, we then have $m_* = r_* = 1$. We also use $p_0 \sim \rho_0 c_s^2$, where the subscript 0 indicates values at the coronal base, and we use CGS units for $p_0$.
For the X-ray radiative flux, we have
$$F_x = 1.24\times10^{6}\, \frac{p_0^2}{\tilde{T}_*^{5/3}}\, \frac{r_*^2}{m_*}\ \frac{\rm erg}{\rm cm^2\,s}. \qquad (13)$$
For the conductive loss,
$$F_c = 4.26\times10^{6}\, p_0\, \tilde{T}_*^{3/4}\, \frac{\tilde{\Theta}}{4\pi}\ \frac{\rm erg}{\rm cm^2\,s}, \qquad (14)$$
where the solid-angle correction fraction $\tilde{\Theta}/4\pi \leq 1$ arises because conduction down from the corona is assumed to be non-negligible only along the fraction of the solid angle covered with field lines perpendicular to the surface.
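As a numerical sketch of Hearn's minimum-flux argument applied to equations (12)-(14) (with $m_* = r_* = 1$; the base pressure and solid-angle factor used below are illustrative assumptions), the equilibrium temperature can be located by direct minimization:

```python
import numpy as np

THETA_TILDE = 1.0  # assumed solid-angle factor, with Theta/(4 pi) <= 1

def fluxes(T, p0, m=1.0, r=1.0):
    """Wind, X-ray and conductive fluxes of eqs (12)-(14) [erg cm^-2 s^-1],
    as functions of dimensionless temperature T and base pressure p0 (CGS)."""
    fw = 3.1e6 * p0 * np.sqrt(T) * np.exp(3.9 * (m / r) * (1.0 - 1.0 / T))
    fx = 1.24e6 * p0**2 / T**(5.0 / 3.0) * r**2 / m
    fc = 4.26e6 * p0 * T**0.75 * THETA_TILDE / (4.0 * np.pi)
    return fw, fx, fc

def equilibrium_T(p0, Tgrid=np.linspace(0.2, 3.0, 20000)):
    """Temperature minimizing the total coronal loss at fixed p0 (eq. 11):
    F_x falls with T while F_W1 and F_c rise, so an interior minimum exists."""
    total = sum(fluxes(Tgrid, p0))
    return Tgrid[np.argmin(total)]
```

Solving $\partial_T(F_{W1}+F_c+F_x)=0$ for $p_0$ at this minimum reproduces the structure of the equilibrium-pressure relation given below.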
There is a monotonic relation between the base pressure of the corona and the energy density at coronal equilibrium, and all three energy losses increase with the base coronal pressure. The above equations lead to an equilibrium pressure (with corrected numerical coefficients in the first and third terms, as well as a corrected factor of $m_*/r_*$ in the last term, compared to Blackman & Owen (2016)):
$$p_0 = \frac{m_*}{r_*^2}\, 0.12\, \tilde{\Theta}\, \tilde{T}_{0*}^{29/12} \;+\; \frac{m_*}{r_*^2}\, 0.75\, \tilde{T}_{0*}^{13/6}\, e^{3.9\,\frac{m_*}{r_*}\left(1-\frac{1}{\tilde{T}_{0*}}\right)} \;+\; \frac{m_*^2}{r_*^3}\, 5.85\, \tilde{T}_{0*}^{7/6}\, e^{3.9\,\frac{m_*}{r_*}\left(1-\frac{1}{\tilde{T}_{0*}}\right)}, \qquad (15)$$
MNRAS 000, 1–8 (2022)
K. Kotorashvili et al.
+
Figure 1. Normalized energy fluxes of X-rays
|
| 468 |
+
𝐹𝑥
|
| 469 |
+
𝐹𝑥,∗�� (blue); thermal con-
|
| 470 |
+
duction
|
| 471 |
+
𝐹𝑐
|
| 472 |
+
𝐹𝑐,∗𝑛 (green);, and mass outflow
|
| 473 |
+
�
|
| 474 |
+
𝑀
|
| 475 |
+
𝑀∗𝑛 (orange) are shown for each
|
| 476 |
+
individual star of Table 1. Similar plots were shown in Blackman & Owen
|
| 477 |
+
(2016) but only for the sun. The y-axis is in units of individual stellar val-
|
| 478 |
+
ues for each quantity and the unobserved equilibrium temperature 𝑇0∗ for
|
| 479 |
+
each star is normalized such that a transition between the dominance and
|
| 480 |
+
sub-dominance of thermal conduction occurs at dimensionless ˜𝑇0∗ = 0.5. In
|
| 481 |
+
regime I, ( ˜𝑇0∗ < 0.5), thermal conduction is dominant, but it is subdominant
|
| 482 |
+
in regime II ( ˜𝑇0∗ > 0.5), where 𝑙𝑥 ≃ �𝑚. Regime I corresponds to older
|
| 483 |
+
and regime II to younger phases of the main sequence for a given star. The
|
| 484 |
+
envelope of these curves for the different stars produces the bands of color
|
| 485 |
+
for each energy flux.
|
where $\tilde{T}_{0*} = T_{0*}/T'_*$ and $T_{0*}$ is the coronal temperature at equilibrium for each specific star. For the present solar coronal temperature we take $T_{0,*n} \sim T_\odot \sim 1.5\times10^6\,$K, and for $T'_*$ we used $T'_* = T'_\odot = 3\times10^6\,$K, so that at $\tilde{T}_{0*} = 0.5$, $l_x = L_x/L_{x,*n} = 1$ and $\dot{m} = \dot{M}/\dot{M}_{*n} = 1$.
Fig. 1 shows the radiation, conduction, and total coronal wind fluxes $F_x/F_{x,*n}$, $F_c/F_{c,*n}$, and $\dot{m} = F_{W1}\tilde{T}_{0,*n}/(F_{W1,*n}\tilde{T}_{0*})$ as functions of the equilibrium temperature, where $\tilde{T}_{0,*n}$ is the coronal temperature at the present time for Sun-like stars. All the quantities (y-axis) and the equilibrium temperature (x-axis) are normalized to their respective stellar values for an individual star. We define regime I as the lifetime phase of a star for which the thermal conduction flux dominates the outflow flux, and regime II when the reverse is true. This occurs at a different coronal equilibrium temperature $T_{0*}$ specific to each star. We then define the transition to occur at the same arbitrary dimensionless value of 0.5 for each star, such that $\tilde{T}_{0*} < 0.5$ corresponds to regime I and $\tilde{T}_{0*} > 0.5$ to regime II. The vertical line at $\tilde{T}_{0*} = 0.5$ represents the transition between the two regimes, which have different relations between X-ray luminosity and mass loss.
2.3.1 Regime I (conduction dominated)

In this regime, which generally corresponds to the spun-down, older main-sequence phase of a given star, the first term of equation (15) dominates. Consequently, the normalized X-ray luminosity is $l_x = L_x/L_{x,*n} = F_x/F_{x,*n}$, which for each star can be written
$$l_x \simeq \left(\frac{\tilde{T}_{0*}}{\tilde{T}_{0,*n}}\right)^{19/6}. \qquad (16)$$
The normalized mass loss is $\dot{m} = \dot{M}/\dot{M}_{*n}$, with
$$\dot{m} \simeq \left(\frac{\tilde{T}_{0*}}{\tilde{T}_{0,*n}}\right)^{23/12} e^{\frac{3.9}{\tilde{T}_{0,*n}}\frac{m_*}{r_*}\left(1-\frac{\tilde{T}_{0,*n}}{\tilde{T}_{0*}}\right)}, \qquad (17)$$
which couples with the three other stellar properties discussed above.
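The regime I scalings can be checked numerically. The snippet below (a sketch with $m_* = r_* = 1$ and the normalization $\tilde{T}_{0,*n} = 1/2.4$ adopted in Section 3) shows that both quantities equal unity at the normalization point, and that the exponential factor makes the mass loss collapse rapidly as the corona cools below $\tilde{T}_{0,*n}$:

```python
import math

T_N = 1.0 / 2.4  # assumed present-day normalization temperature (Section 3)

def lx_regime1(T):
    """Normalized X-ray luminosity of regime I, eq. (16)."""
    return (T / T_N) ** (19.0 / 6.0)

def mdot_regime1(T):
    """Normalized mass-loss rate of regime I, eq. (17), with m_* = r_* = 1."""
    return (T / T_N) ** (23.0 / 12.0) * math.exp((3.9 / T_N) * (1.0 - T_N / T))

print(lx_regime1(T_N), mdot_regime1(T_N))  # -> 1.0 1.0
```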
2.3.2 Regime II (no conduction)

In this regime, which generally corresponds to the younger, faster-rotating phase of a given star, the second term on the right of equation (15), the outflow flux term, dominates. So for $l_x$ and $\dot{m}$ we have (Blackman & Owen 2016)
$$l_x \simeq \exp\left[\ln(\tilde{T}_{0*}) + \frac{7.8}{\tilde{T}_{0*}}\frac{m_*}{r_*}\left(\frac{\tilde{T}_{0*}}{\tilde{T}_{0,*n}} - 1\right)\right] \simeq \dot{m}. \qquad (18)$$
3 TIME-EVOLUTION OF ROTATION PERIOD

We numerically solved the four equations (3), (6), (10), and (17) or (18), respectively for regimes I and II, along with equations (5) and (7), for the spin evolution. Importantly, we solved these equations for individual stars, using measured stellar properties as a fixed point (boundary condition) corresponding to the observations of that particular star. The set of solutions comprises an envelope of these individual curves.
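The structure of this coupled integration can be illustrated with a deliberately simplified sketch: equation (10) with all normalization constants absorbed into a single rate constant, and a hypothetical power-law closure $b \propto \omega$ standing in for the full dynamo and coronal coupling. The closure and constants here are illustrative assumptions, not the paper's calibrated relations.

```python
def spin_down(omega0, k=1.0, dt=1e-4, t_end=2.0):
    """Forward-Euler integration of d(omega)/dt = -k * omega * b(omega)**2
    with the toy closure b = omega, i.e. d(omega)/dt = -k * omega**3."""
    omega, t = omega0, 0.0
    history = [(t, omega)]
    while t < t_end:
        omega += -k * omega**3 * dt
        t += dt
        history.append((t, omega))
    return history

# The toy model has the closed form omega(t) = omega0 / sqrt(1 + 2*k*omega0**2*t),
# a Skumanich-like decay, which provides a check on the integrator.
```

In the full model the magnetic field, mass loss, and X-ray luminosity are updated self-consistently at each step rather than through a fixed closure.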
3.1 Solutions and comparison to data

Table 1 shows the properties of the G-type and F-type stars available for the study. Most of the G stars come from a sample of 21 Kepler stars with asteroseismically determined ages and measured rotation rates, with effective temperatures between 5700 and 5900 K (van Saders et al. 2016; Creevey et al. 2017). In addition, we include the stars 18 Sco and $\alpha$ Cen A, with less precisely measured parameters (van Saders et al. 2016; Metcalfe et al. 2022, and references therein). We have also included a few stars with measured surface magnetic fields and Zeeman-Doppler-imaging-inferred chromospheric rotation periods from the Bcool project magnetic survey (Marsden et al. 2014). Note that, compared to the Kepler sample, the Bcool survey does not provide precise photospheric rotation periods; however, it provides more precise measurements of magnetic fields. We present spin evolution solutions for stars 1-10 from this data table for both regimes. The other data points are only for comparison to the solutions.
Fig. 2 shows the time evolution of the rotation period for individual stars. The top panel shows solutions for regime I, where energy loss due to conduction is dominant and stellar wind energy loss is very low. The bottom panel shows solutions for regime II, where conduction is negligible and the X-ray energy loss equals that of the stellar wind. For most stars plotted, we chose the coronal temperature¹ as $\tilde{T}_{0,*n} = 1/2.4$ for regime I and $\tilde{T}_{0,*n} = 1/1.6$ for regime II solutions. These values correspond to the equilibrium temperatures for the solar minimum and maximum (Blackman & Owen 2016; Johnstone et al. 2015). Choosing a different value of $\tilde{T}_{0,*n}$ for either regime does change the respective slopes of the solutions, but the ranges chosen are consistent with bounds on observed stellar data (Johnstone et al. 2015). If we knew the present X-ray temperature,

¹ For stars 2 and 10 from Table 1 we have used $\tilde{T}_{0,*n} = 1/2.1$ for regime I solutions.
On the spin evolution of older sun-like stars
Figure 2. The two panels show envelopes of solution curves for the time evolution of the rotation period, where each observed star is a fixed point on its individual curve. Panels a and b correspond to the regime I and regime II solutions, respectively; the y- and x-axes are normalized to the solar period and age. Data points and boundary conditions used to find the individual solution curves are given in Table 1. Corresponding solutions for the row numbers therein are colour-coded as 1 - red, 2 - purple, 3 - orange, 4 - green, 5 - magenta, 6 - cyan, 7 - blue, 8 - dark green, 9 - dark cyan, and 10 - pink. Open circles correspond to data points from the Bcool project magnetic survey (respectively 8, 9, 10 and 13 from Table 1) (Marsden et al. 2014). The Sun is marked as a red ⊙. Triangles represent a star transitioning from the main sequence to the subgiant phase and a subgiant (respectively 14 and 15 from Table 1). The vertical line represents the cutoff before the subgiant phase for stars 1-7 in Table 1. The blue-shaded region represents the envelope of solutions for all the stars except those with large age uncertainties from the Bcool project. Both regime I and regime II solutions are compared with the Skumanich law (black dotted line), a standard rotational evolution model (black dot-dashed line) (van Saders et al. 2016), a modified rotational evolution model (black dashed line), and the grey shaded region (Metcalfe & van Saders 2017) that represents the expected dispersion due to different masses, metallicities, and effective temperatures between 5600 and 5900 K.
this would pin down whether a given star is presently in regime I or regime II, and hence which solution to use. Instead, we compare the consequences of the time evolution solutions from either regime for a given star. We find that the implications are not that sensitive to knowing the X-ray temperature over the bounded range, because either regime's solutions ultimately lead to our same main conclusions.

Both panels of Fig. 2 also show the modified Skumanich law (Mamajek 2014), $P = t^{0.55}$, and a standard rotational evolution model (van Saders & Pinsonneault 2013; van Saders et al. 2016). Regime I solutions have decreasing slopes, as does the empirical Skumanich law, which captures the data trend quite well. Regime II solutions
Figure 3. Panels a and b represent the solutions for the time evolution of the rotation period (purple) for one specific star (star 2 from Table 1), to demonstrate the sensitivity of our solutions to the magnetic field strength. These plots show a significant spread across different magnetic field strength normalization values for both regimes I and II. Values used for the magnetic field, from bottom to top, are $B_p$ = 0.6 G, 1 G, 2 G, and 2.4 G, respectively. Data points, black curves (dashed, dotted, dot-dashed), and the shaded area have the same meaning as in Fig. 2. The vertical line represents the cutoff before the subgiant phase for stars 1-7 in Table 1.
have increasing slopes, as does the rotational evolution model used by van Saders et al. (2016), but our solutions comprise an envelope of curves, each passing through a specific star. This envelope is consistent with the observed period-age relation data. In Fig. 2, the blue-shaded region corresponds to the envelope of solutions for stars with more precisely measured rotation periods and ages. It shows that even without including stars from the Bcool project, this blue-shaded envelope covers the region containing most of the stars. Although we include the subgiant star data points on the plot (14 and 15 from Table 1), we do not show their evolution solutions, because we are focusing only on main-sequence stars and on whether the main-sequence stars themselves exhibit a spin-down transition. van Saders et al. (2016) do include the subgiant points in their data fitting, and this strongly affects the shape of their shaded area, which rises at late times.

Observations do not provide accurate Rossby numbers for stars 2-7 or magnetic fields for stars 2-4. Since these stars are similar to the Sun in other respects, for lack of a better option we simply assume that these quantities are comparable to solar values. Since the magnetic field is the agent of energy transport into the corona, our solutions are quite sensitive to the magnetic field strength. To exemplify this, we present solutions for different magnetic field strengths in Fig. 3 for a star without a measured magnetic field. The top panel shows solutions
Figure 4. Solutions for $l_x$ versus time for different magnetic field strengths for star 2 from Table 1. This spread in the luminosities further demonstrates the sensitivity of our solutions to the surface magnetic field strength. Here we used the same magnetic field values and line styles as in Fig. 3.
for regime I and the bottom panel for regime II, using magnetic field values $B_p$ = 0.6 G, 1 G, 2 G, and 2.4 G. In both regimes we see a conspicuous difference between the solution curves for lower and higher magnetic fields. Fig. 4 demonstrates the influence of the magnetic field strength on $l_x$.

Generally, Figures 3 and 4 show that the broad spread of solutions over the range of magnetic fields considered makes it difficult to predict the exact evolution path for each star. This further highlights the imprecision of any prediction for the population that is expressed as a single curve. The theoretical prediction for the population is an envelope of curves.
3.2 Physical role of thermal conduction in Regimes I and II

As mentioned above, we assume that dynamo-produced fields source the coronal energy, which in turn has three main channels of energy loss: the stellar wind, thermal conduction, and X-ray radiation. The first two increase with increasing temperature, while the X-ray radiation decreases. This leads to an equilibrium with a minimum total coronal flux (Hearn 1975). In regime I, thermal conduction and X-ray luminosity dominate the energy loss, leaving little contribution from the stellar wind. Here conduction removes hot gas otherwise available for the wind, and the wind mass-loss rate correspondingly drops exponentially with decreasing gas temperature. This, in turn, reduces the rate of angular momentum loss. In regime II, conduction is sub-dominant, and wind loss and X-ray radiation dominate the coronal energy loss.

The difference between the increasing and decreasing slopes of the two regimes' solutions, shown as coloured curves in Figure 2, is caused by the relative influence of thermal conduction, which is more important at low temperatures, where it determines the relation between luminosity and mass loss and, in turn, the coupled evolution of X-ray luminosity, magnetic field strength, and spin.

In the spin evolution model used by van Saders et al. (2016), the scaling between luminosity and mass loss is the same as in our regime II, equation (18), although for different reasons. This may help to explain why their solutions (shown as the black dot-dashed line in Figure 2) also have a faster rate of spin-down. But their results for the time evolution of the rotation period are quite different from ours, due to different parameter choices and a different relation between luminosity and angular velocity: in our case $l_x \sim \omega^3$ for $\lambda = 1/3$, while in their case $l_x \sim \omega^2$.
3.3 Influence of feedback of rotation on magnetic field evolution

In regime I, the relationship between luminosity and mass loss is very different from that in regime II. As a result, the regime I solutions in Figure 2 show a decreasing slope and are quite similar to the Skumanich relation for older main-sequence stars. Regime I overall shows better agreement with the data, although our envelope of solutions using either regime I or regime II can describe the observed period-age relation without requiring a change of dynamo mode.

That the solution curves for regime I versus regime II in Fig. 2 are not hugely different can be explained by considering the feedback between the rotation and the magnetic field. For low mass loss (regime I) the change in the angular momentum, and in turn the magnetic field, is insignificant, while in regime II stars lose angular momentum faster, thereby reducing the magnetic field more than in regime I. Because of the dynamical coupling between the magnetic field and stellar rotation, reducing the magnetic field also reduces the spin-down rate, resulting in a rotation period evolution similar to that of regime I.²
4 CONCLUSION

To study the time evolution of the stellar rotation period and the period-age relationship for G- and F-type main-sequence stars, we have employed and generalized a minimalist holistic time-dependent model for spin-down, X-ray luminosity, magnetic field, and mass loss (Blackman & Owen 2016). The model combines an isothermal Parker wind (Parker 1958), a dynamo saturation model (Blackman & Thomas 2015), and a coronal equilibrium condition (Hearn 1975), and assumes that angular momentum is lost primarily from the equatorial plane (Weber & Davis 1967).

From a sample of older-than-solar stars chosen for having precise measurements of period and age, we solved these evolution equations such that each star is a fixed point on a unique solution curve. We argued that the envelope of these curves is a more appropriate indicator of theoretical predictions than a single line fit through the Sun, or any chosen star, to represent the entire population.

We produce separate such envelopes for cases in which thermal conduction is respectively less or more important, the latter of which appears to be in better agreement with the data. Overall, our results suggest that a dynamo transition from dipole-dominated to higher-multipole-dominated is not unambiguously required to reduce the
² Remember that these stars are in the unsaturated regime, where the magnetic field and X-ray luminosity do depend on spin.
Table 1. Stellar properties of the G-type and F-type stars used in our study (Wright et al. 2004; Bazot et al. 2012; Molenda-Żakowicz et al. 2013; Marsden et al. 2014; van Saders et al. 2016; Creevey et al. 2017; White et al. 2017; Metcalfe et al. 2022).

|    | KIC ID/Name or HIP no. | Sp. Type | Radius ($R_\odot$) | Mass ($M_\odot$) | Age (Gyr) | Period (Days) | Luminosity ($L_\odot$) | Rossby number | Magnetic field (G) |
| 1  | Sun      | G2V    | 1.001 ± 0.005   | 1.001 ± 0.019       | 4.6               | 24.47          | 0.97 ± 0.03 | 2    | 2         |
| 2  | 9098294  | G3V    | 1.150 ± 0.003   | 0.979 ± 0.017       | 8.23 ± 0.53       | 19.79 ± 1.33   | 1.34 ± 0.05 |      |           |
| 3  | 7680114  | G0V    | 1.402 ± 0.014   | 1.092 ± 0.030       | 6.89 ± 0.46       | 26.31 ± 1.86   | 2.07 ± 0.09 |      |           |
| 4  | α Cen A  | G2V    | 1.224 ± 0.009   | 1.105 ± 0.007       | 5.40 ± 0.30       | 22 ± 5.9       | 1.55 ± 0.03 |      |           |
| 5  | 16 Cyg-A | G1.5Vb | 1.223 ± 0.005   | 1.072 ± 0.013       | 7.36 ± 0.31       | 20.5 +2/−1.1   | 1.52 ± 0.05 |      | < 0.5     |
| 6  | 16 Cyg-B | G3V    | 1.113 ± 0.016   | 1.038 ± 0.047       | 7.05 ± 0.63       | 21.2 +1.8/−1.5 | 1.21 ± 0.11 |      | < 0.9     |
| 7  | 18 Sco   | G2Va   | 1.010 ± 0.009   | 1.020 ± 0.003       | 3.66 +0.44/−0.5   | 22.7 ± 0.5     | 1.07 ± 0.03 |      | 1.34      |
| 8  | 1499     | G0V    | 1.11 ± 0.04     | 1.026 +0.04/−0.03   | 7.12 +1.40/−1.56  | 29 +0.3/−0.3   | 1.197       | 2.16 | 0.6 ± 0.5 |
| 9  | 682      | G2V    | 1.12 ± 0.05     | 1.045 +0.028/−0.024 | 6.12 +1.28/−1.48  | 4.3 +0.0/−0.2  | 1.208       | 0.4  | 4.4 ± 1.8 |
| 10 | 1813     | F8     | 1.18 +0.06/−0.05 | 0.965 +0.02/−0.02  | 10.88 +1.36/−1.36 | 22.1 +0.2/−0.2 | 1.315       | 1.95 | 2.4 ± 0.7 |
| 11 | 176465 A | G4V    | 0.918 ± 0.015   | 0.930 ± 0.04        | 3.0 ± 0.4         | 19.2 ± 0.8     |             |      |           |
| 12 | 176465 B | G4V    | 0.885 ± 0.006   | 0.930 ± 0.02        | 2.9 ± 0.5         | 17.6 ± 2.3     |             |      |           |
| 13 | 400      | G9V    | 0.8 +0.02/−0.03 | 0.794 +0.034/−0.018 | 12.28 +1.72/−7.08 | 35.3 +1.1/−0.7 | 0.455       | 2    | 2.1 ± 1.0 |
| 14 | 6116048  | F9IV-V | 1.233 ± 0.011   | 1.048 ± 0.028       | 6.08 ± 0.40       | 17.26 ± 1.96   | 1.77 ± 0.13 |      |           |
| 15 | 3656476  | G5IV   | 1.322 ± 0.007   | 1.101 ± 0.025       | 8.88 ± 0.41       | 31.67 ± 3.53   | 1.63 ± 0.06 |      |           |

ᵃ For 16 Cyg-A, 16 Cyg-B, and 18 Sco we used estimated mass-loss rates from Metcalfe et al. (2022), based on the scaling relation $\dot{M} \simeq F_x^{0.77 \pm 0.04}$ (Wood et al. 2021). For the other stars we have used the solar value.
* In our solutions we have used solar values for these parameters.
rate of spin down, as there is not a clear contradiction between theory and observation for the envelope of solutions without such a transition, when the theory depends on a Parker-type wind solution.

We explored the sensitivity of our solutions to stellar properties that we may not know for individual stars, such as the coronal base X-ray temperature and the magnetic field strength. Because the Parker-type wind solution is integral to the model, we are forced to an exponential sensitivity to the coronal base X-ray temperature. This limits the precision of any theoretical or model prediction expressed as a single line intended to capture the evolution of the stellar population. The prediction should instead be expressed as an envelope of curves. Said another way, the sample of observed data does not have enough sufficiently identical stars to make an ensemble-averaged prediction of high precision. This connects to the broader need to more commonly express the limitations in precision of theories applied to astrophysical systems (Zhou et al. 2018).

Since it is not possible to obtain more than one data point for an individual star over its spin-down evolution lifetime, more observations to better nail down evidence for or against a spin-down transition are desired. More data on individual, more closely "identical" stars at different times in their spin-down evolution would be desirable. In addition, at the population level, period-mass plots for older clusters than have presently been measured would be valuable. Observations from the Kepler K2 mission have shown that by the time clusters reach an age of 950 Myr, period-mass relations appear to converge to a relatively tight one-to-one dependence (Godoy-Rivera et al. 2021). Similar results were obtained for the 2.7-Gyr-old open cluster Ruprecht 147 (Gruner & Barnes 2020), who found that stars lie in a period-mass-age plane, with possible evidence for a mass dependence requiring additional mass-dependent physics-parameter variation in modelling spin-down (perhaps, e.g., relating to our $q$ below equation (10) deviating from unity). If similar data could be obtained for much older clusters, and the tight relations were to show strong kinks or bifurcate into more than one branch within the mass range $0.5 < M/M_\odot < 1.5$ that we have considered, this would suggest that the population of solar-like stars that we are focusing on would show population-level evidence for a transition.
5 DATA AVAILABILITY

All the data used in this paper are either created theoretically from the equations herein or given in Table 1.
6 ACKNOWLEDGMENTS

KK acknowledges support from a Horton Graduate Fellowship from the Laboratory for Laser Energetics. We acknowledge support from Department of Energy grants DE-SC0020432 and DE-SC0020434, and National Science Foundation grants AST-1813298 and PHY-2020249. EB acknowledges the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Frontiers in dynamo theory: from the Earth to the stars", where work on this paper was undertaken. This work was supported by EPSRC grant no. EP/R014604/1. JEO is supported by a Royal Society University Research Fellowship. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 853022, PEVAP). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising.
| 1094 |
+

REFERENCES

Bazot M., Bourguignon S., Christensen-Dalsgaard J., 2012, MNRAS, 427, 1847
Blackman E. G., Brandenburg A., 2003, ApJ, 584, L99
Blackman E. G., Field G. B., 2002, Phys. Rev. Lett., 89, 265007
Blackman E. G., Owen J. E., 2016, MNRAS, 458, 1548
Blackman E. G., Thomas J. H., 2015, MNRAS, 446, L51
Cranmer S. R., 2004, American Journal of Physics, 72, 1397

MNRAS 000, 1–8 (2022)

8 K. Kotorashvili et al.

Creevey O. L., et al., 2017, A&A, 601, A67
Gallet F., Bouvier J., 2013, A&A, 556, A36
Godoy-Rivera D., Pinsonneault M. H., Rebull L. M., 2021, ApJS, 257, 46
Gough D. O., 1981, Sol. Phys., 74, 21
Gruner D., Barnes S. A., 2020, A&A, 644, A16
Hearn A. G., 1975, A&A, 40, 355
Johnstone C. P., Güdel M., Brott I., Lüftinger T., 2015, A&A, 577, A28
Kawaler S. D., 1988, ApJ, 333, 236
Lamers H. J. G. L. M., Cassinelli J. P., 1999, Introduction to Stellar Winds. Cambridge Univ. Press
Mamajek E. E., 2014, Figshare, http://dx.doi.org/10.6084/m9.figshare.1051826
Marsden S. C., et al., 2014, MNRAS, 444, 3517
Matt S. P., MacGregor K. B., Pinsonneault M. H., Greene T. P., 2012, ApJ, 754, L26
Matt S. P., Brun A. S., Baraffe I., Bouvier J., Chabrier G., 2015, ApJ, 799, L23
Mestel L., 1968, MNRAS, 138, 359
Metcalfe T. S., van Saders J., 2017, Sol. Phys., 292, 1
Metcalfe T. S., et al., 2022, ApJ, 933, L17
Molenda-Żakowicz J., et al., 2013, MNRAS, 434, 1422
Nichols-Fleming F., Blackman E. G., 2020, MNRAS, 491, 2706
Parker E. N., 1955, ApJ, 122, 293
Parker E. N., 1958, ApJ, 128, 664
Peres G., Orlando S., Reale F., Rosner R., Hudson H., 2000, ApJ, 528, 537
Reiners A., Mohanty S., 2012, ApJ, 746, 43
Reiners A., Schüssler M., Passegger V. M., 2014, ApJ, 794, 144
Schatzman E., 1962, Annales d'Astrophysique, 25, 18
Skumanich A., 1972, ApJ, 171, 565
Steenbeck M., Krause F., 1969, Astronomische Nachrichten, 291, 49
Weber E. J., Davis Jr. L., 1967, ApJ, 148, 217
White T. R., et al., 2017, A&A, 601, A82
Wood B. E., et al., 2021, ApJ, 915, 37
Wright J. T., Marcy G. W., Butler R. P., Vogt S. S., 2004, ApJS, 152, 261
Wright N. J., Drake J. J., Mamajek E. E., Henry G. W., 2011, ApJ, 743, 48
Zhou H., Blackman E. G., Chamandy L., 2018, Journal of Plasma Physics, 84, 735840302
van Saders J. L., Pinsonneault M. H., 2013, ApJ, 776, 67
van Saders J. L., Ceillier T., Metcalfe T. S., Silva Aguirre V., Pinsonneault M. H., García R. A., Mathur S., Davies G. R., 2016, Nature, 529, 181

This paper has been typeset from a TEX/LATEX file prepared by the author.
KdE3T4oBgHgl3EQfvQvs/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

LtAzT4oBgHgl3EQfyv51/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35509abdfadb743b1ee60527c364ee5ed680788b6c4e47b76f93394a844eea3c
+ size 176209
M9E0T4oBgHgl3EQf0QJr/content/tmp_files/2301.02683v1.pdf.txt
ADDED
|
@@ -0,0 +1,2510 @@
Classifying topological neural network quantum states via diffusion maps

Yanting Teng,1 Subir Sachdev,1 and Mathias S. Scheurer2
1Department of Physics, Harvard University, Cambridge MA 02138, USA
2Institut für Theoretische Physik, Universität Innsbruck, A-6020 Innsbruck, Austria

We discuss and demonstrate an unsupervised machine-learning procedure to detect topological order in quantum many-body systems. Using a restricted Boltzmann machine to define a variational ansatz for the low-energy spectrum, we sample wave functions with probability decaying exponentially with their variational energy; this defines our training dataset that we use as input to a diffusion map scheme. The diffusion map provides a low-dimensional embedding of the wave functions, revealing the presence or absence of superselection sectors and, thus, topological order. We show that for the diffusion map, the required similarity measure of quantum states can be defined in terms of the network parameters, allowing for an efficient evaluation within polynomial time. However, possible "gauge redundancies" have to be carefully taken into account. As an explicit example, we apply the method to the toric code.
I. INTRODUCTION

In the last few years, machine learning (ML) techniques have been very actively studied as novel tools in many-body physics [1–7]. A variety of valuable applications of ML has been established, such as ML-based variational ansätze for many-body wave functions, application of ML to experimental data to extract information about the underlying physics, ML methods for more efficient Monte-Carlo sampling, and employment of ML to detect phase transitions, to name a few. Regarding the latter type of applications, a particular focus has recently been on topological phase transitions [8–31]. This is motivated by the challenges associated with capturing topological phase transitions: by definition, topological features are related to the global connectivity of the dataset rather than local similarity of samples. Therefore, unless the dataset is sufficiently simple such that topologically connected pairs of samples also happen to be locally similar, or features closely related to the underlying topological invariant are used as input data, the topological structure is hard to capture reliably with many standard ML techniques [11, 12].

In this regard, the ML approach proposed in Ref. 12, which is based on diffusion maps (DM) [32–35], is a particularly promising route to learn topological phase transitions; it allows one to embed high-dimensional data in a low-dimensional subspace such that pairs of samples that are smoothly connected in the dataset are mapped close to each other, while disconnected pairs are mapped to distant points. As such, the method captures the central notion of topology. In combination with the fact that it is unsupervised, and thus does not require a priori knowledge of the underlying topological invariants, it is ideally suited for the task of topological phase classification. As a result, there have been many recent efforts applying this approach to a variety of problems, such as different symmetry-protected, including non-Hermitian, topological systems [36–41], experimental data [39, 42], many-body localized states [43], and dynamics [44]; extensions based on combining DM with path finding [36] as well as with quantum computing schemes [45] for speed-up have also been studied.
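The embedding step described above can be made concrete with a minimal numpy sketch of the simplest diffusion-map variant: a Gaussian kernel built from some (dis)similarity measure, row-normalized into a Markov transition matrix whose leading nontrivial eigenvectors supply the low-dimensional coordinates. The Euclidean toy data, the kernel width `eps`, and the two-cluster example are illustrative assumptions, not taken from Ref. 12; in the application considered here, the kernel would instead be built from a similarity measure between quantum states.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Embed the rows of X via the leading nontrivial right
    eigenvectors of a row-stochastic Gaussian kernel matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)                    # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)     # Markov transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)          # eigenvalue 1 comes first
    # drop the trivial constant eigenvector; keep the next n_components
    return evecs.real[:, order[1:n_components + 1]]

# two well-separated clusters, mimicking two disconnected sectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(2.0, 0.1, (20, 2))])
emb = diffusion_map(X)
```

The first diffusion coordinate is then approximately piecewise constant on each cluster, so a subsequent k-means step trivially recovers the two disconnected components.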
As alluded to above, another very actively pursued application of ML in physics are neural network quantum states: as proposed in Ref. 46, neural networks can be used to efficiently parameterize and, in many cases, optimize variational descriptions of wave functions of quantum many-body systems [47–56]. In particular, restricted Boltzmann machines (RBMs) [4] represent a very popular neural-network structure in this context. For instance, the ground states of the toric code model [57] can be exactly expressed with a local RBM ansatz [58], i.e., one where only neighboring spins are connected to the same hidden neurons. When additional non-local extensions to the RBM ansatz of Ref. 58 are added, this has been shown to also provide a very accurate variational description of the toric code in the presence of a magnetic field [59].

In this work, we combine the DM approach of Ref. 12 with neural network quantum states with the goal of capturing topological order in an unsupervised way in interacting quantum many-body systems. We use a local network ansatz, with parameters Λ, as a variational description for the wave functions |Ψ(Λ)⟩ of the low-energy subspace of a system with Hamiltonian Ĥ. While we also briefly mention other possible ways of generating ensembles of states, we primarily focus on an energetic principle: we sample wave functions such that the probability of |Ψ(Λ)⟩ is proportional to exp(−⟨Ĥ⟩_Λ/T), where ⟨Ĥ⟩_Λ = ⟨Ψ(Λ)|Ĥ|Ψ(Λ)⟩. As illustrated in Fig. 1(a), the presence of superselection sectors in the low-energy spectrum of Ĥ implies that the ensemble of states decays into disconnected subsets of states for sufficiently small T (at least at fixed finite system size); these can be extracted, without need of prior labels, with dimensional reduction via DM (and subsequent k-means clustering), and thus allow us to identify topological order. For sufficiently large T, more and more high-energy states are included and all sectors are connected, see Fig. 1(b), as can also be readily revealed via the DM-based embedding of the states.
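The energetic sampling principle can be sketched as a Metropolis walk in parameter space: propose a random local perturbation Λ → Λ′ and accept it with probability min(1, exp(−(⟨Ĥ⟩_Λ′ − ⟨Ĥ⟩_Λ)/T)). As a stand-in for the variational energy ⟨Ĥ⟩_Λ (which for a real network must be estimated by importance sampling), the toy example below uses a double-well function whose two minima play the role of two superselection sectors; the specific energy function, step size, and temperatures are illustrative assumptions.

```python
import numpy as np

def sample_ensemble(energy, lam0, T, n_steps, step=0.1, seed=0):
    """Metropolis walk over network parameters: visited states are
    sampled with probability ~ exp(-energy(lam) / T)."""
    rng = np.random.default_rng(seed)
    lam = np.array(lam0, dtype=float)
    e = energy(lam)
    samples = [lam.copy()]
    for _ in range(n_steps):
        prop = lam.copy()
        prop[rng.integers(lam.size)] += rng.normal(0.0, step)  # local move
        e_prop = energy(prop)
        if rng.random() < np.exp(min(0.0, -(e_prop - e) / T)):
            lam, e = prop, e_prop                              # accept
        samples.append(lam.copy())
    return np.array(samples)

# double-well stand-in for <H>_Lambda: minima at lam = +-1 ("sectors")
energy = lambda lam: float((lam[0] ** 2 - 1.0) ** 2)
cold = sample_ensemble(energy, [1.0], T=0.02, n_steps=2000)  # stays in one well
hot = sample_ensemble(energy, [1.0], T=3.0, n_steps=4000)    # wells connect
```

At small T the walk never crosses the barrier, so the two seeds' datasets stay disconnected; at large T the same algorithm freely connects them, mirroring Fig. 1(a) versus 1(b).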
Importantly, DM is a kernel technique in the sense
|
| 101 |
+
that the input data xl (in our case the states |Ψ(Λl)⟩)
|
| 102 |
+
arXiv:2301.02683v1 [quant-ph] 6 Jan 2023
|
| 103 |
+
|
| 104 |
+
2
|
| 105 |
+
Figure 1. (a) An illustration of a “low-energy” ensemble. Two (or more) initial states, |Ψ(Λ0)⟩ and |Ψ(Λ1)⟩, from two distinct
|
| 106 |
+
topological sectors are chosen as “seeds” (green dots). The dots denote the dataset (later fed into the DM), which are a set
|
| 107 |
+
of quantum states labeled by network parameters Λ. This dataset is generated using the procedure outlined in Sec. II A and
|
| 108 |
+
Algorithm. 1, where the next state Λ′ (blue dots at each arrow) is proposed by a random local perturbation and accepted
|
| 109 |
+
with probability based on the energy expectation ⟨H⟩Λ′. In the small-T regime, the full dataset is not inter-connected by such
|
| 110 |
+
local perturbations and cluster among each topological sectors (at left and right valley). (b) An illustration of a “high-energy”
|
| 111 |
+
ensemble. The states are generated using the same algorithm as before, however with a large hyperparameter T (compared to
|
| 112 |
+
the energy gap ∆). In this regime, the dataset include some of the low-energy states (blue dots), but also some high-energy
|
| 113 |
+
states (red dots). Because the high-energy states are agnostic of the low-energy topological sectors, there exist paths (denoted
|
| 114 |
+
by arrows among dots in the elliptical blob) such that the two initial seeds from distinct topological sectors effectively “diffuse”
|
| 115 |
+
and form one connected cluster.
|
| 116 |
+
does not directly enter as a high-dimensional vector but only via a similarity measure S(x_l, x_l′), comparing how "similar" two samples l and l′ are. In the context of applying DM to the problem of topological classification, it defines what a smooth deformation ("homotopy") of samples is. We discuss two possible such measures. The first one is just the quantum mechanical overlap, Sq(Λl, Λl′) = |⟨Ψ(Λl)|Ψ(Λl′)⟩|², of the wave functions. Although conceptually straightforward, its evaluation is computationally costly on a classical computer, as it requires importance sampling. The local nature of our network ansatz allows us to also construct an alternative similarity measure that is expressed as a simple function of the network parameters Λl and Λl′ describing the two states to be compared. This can, however, lead to subtleties associated with the fact that two states with different Λ can correspond to the same wave function (modulo a global phase). We discuss how these "gauge redundancies" can be efficiently circumvented for generic states.
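For small systems, the overlap similarity Sq can be evaluated exactly by enumerating the computational basis, which is a useful check on any sampled estimator. The sketch below does this for a toy RBM-type amplitude; the specific real-weight RBM form and the parameter values are illustrative assumptions, and for large systems the 2^N enumeration must of course be replaced by importance sampling.

```python
import itertools
import numpy as np

def rbm_amplitude(sigma, W, b, c):
    """Unnormalized RBM amplitude after tracing out the hidden units:
    psi(sigma) = exp(b . sigma) * prod_j 2 cosh(c_j + sum_i sigma_i W_ij)."""
    return np.exp(b @ sigma) * np.prod(2.0 * np.cosh(c + sigma @ W))

def overlap(params1, params2, n):
    """S_q = |<Psi(L1)|Psi(L2)>|^2 for normalized states,
    by exact enumeration of all 2^n spin configurations."""
    basis = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]
    psi1 = np.array([rbm_amplitude(s, *params1) for s in basis])
    psi2 = np.array([rbm_amplitude(s, *params2) for s in basis])
    psi1 /= np.linalg.norm(psi1)
    psi2 /= np.linalg.norm(psi2)
    return float(abs(psi1 @ psi2) ** 2)

n, m = 4, 2  # 4 visible spins, 2 hidden units
rng = np.random.default_rng(1)
p1 = (rng.normal(0, 0.3, (n, m)), rng.normal(0, 0.3, n), rng.normal(0, 0.3, m))
p2 = (rng.normal(0, 0.3, (n, m)), rng.normal(0, 0.3, n), rng.normal(0, 0.3, m))
s_same, s_diff = overlap(p1, p1, n), overlap(p1, p2, n)
```

Note that two distinct parameter sets related by a "gauge redundancy" would still give Sq = 1 here; it is the cheaper parameter-based measure that must be explicitly guarded against such redundancies.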
We illustrate these aspects and explicitly demonstrate the success of this approach using the toric code [57], a prototype model for topological order which has also been previously studied with other ML techniques with a different focus [15–18, 58–60]. We show that the DM algorithm learns the underlying loop operators wrapping around the torus without prior knowledge; at low T, this leads to four clusters corresponding to the four ground states. At larger T, these clusters start to merge, as expected. Interestingly, the DM still uncovers the underlying structure of the dataset related to the expectation value of the loop operators. Finally, we also show that applying a magnetic field leads to the disappearance of clusters in the DM, capturing the transition from topological order to the confined phase.

The remainder of the paper is organized as follows. In Sec. II, we describe our ML approach in general terms, including the local network quantum state description we use, the ensemble generation, a brief review of the DM scheme of Ref. 12, and the similarity measure in terms of neural network parameters. Using the toric code model as an example, all of these general aspects are then discussed in detail and illustrated in Sec. III. Finally, explicit numerical results can be found in Sec. IV and a conclusion is provided in Sec. V.
II.
|
| 164 |
+
GENERAL ALGORITHM
|
| 165 |
+
Here, we first present and discuss our algorithm [see
|
| 166 |
+
Fig. 2(a)] in general terms before illustrating it using
|
| 167 |
+
the toric code as an example in the subsequent sections.
|
| 168 |
+
Consider a system of N qubits or spins, with associated
|
| 169 |
+
operators {ˆs} = {ˆsi, i = 1, · · · , N}, ˆsi = (ˆsx
|
| 170 |
+
i , ˆsy
|
| 171 |
+
i , ˆsz
|
| 172 |
+
i ),
|
| 173 |
+
and interactions governed by a local, gapped Hamilto-
|
| 174 |
+
nian ˆH = H({ˆs}). We represent the states |Ψ(Λ)⟩ of this
|
| 175 |
+
system using neural network quantum states [46],
|
| 176 |
+
|Ψ(Λ)⟩ =
|
| 177 |
+
�
|
| 178 |
+
σ
|
| 179 |
+
ψ(σ; Λ) |σ⟩ ,
|
| 180 |
+
(1)
|
| 181 |
+
where σ = {σ1, σ2, ..., σN | σi = ±1} enumerates configurations of the physical spin variables in a local computational basis (e.g., the sz basis) and Λ is the set of parameters that the network ψ depends on to output the wavefunction amplitude ψ(σ; Λ) = ⟨σ|Ψ(Λ)⟩ for configuration |σ⟩. Because the physical Hilbert space scales exponentially with the system size, there is a trade-off between expressivity and efficiency when choosing a network architecture (or ansatz) ψ, so that the weights Λ can approximate the state |Ψ(Λ)⟩ to a reasonable degree and can at the same time be an efficient representation (with a minimal number of parameters Λ that scales polynomially in N). To reach the ground state or, more generally, the relevant low-energy sector of the Hamiltonian ˆH for the low-temperature physics, we minimize the energy in the variational subspace defined by Eq. (1) using gradient descent with a learning rate λ,

Λ → Λ − λ ∂Λ ⟨ ˆH⟩Λ ,    ⟨ ˆH⟩Λ = ⟨Ψ(Λ)| ˆH |Ψ(Λ)⟩ .    (2)
Here, the quantum mechanical expectation value ⟨ ˆH⟩Λ is evaluated using importance sampling (see Appendix B).
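As an illustration (not part of the original work), the importance-sampling estimate of ⟨ ˆH⟩Λ can be sketched as follows. The toy 4-site transverse-field Ising chain and the single-parameter product ansatz below are hypothetical stand-ins for the Hamiltonian and the network of the text; configurations are drawn from |ψ(σ)|² with a Metropolis chain and the local energy is averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SPINS, J, H_FIELD = 4, 1.0, 0.5  # toy stand-in for the local, gapped H

def log_psi(sigma, a):
    # toy product ansatz: psi(sigma; a) = exp(a * sum_i sigma_i)
    return a * sigma.sum()

def local_energy(sigma, a):
    # E_loc(sigma) = sum_{sigma'} H_{sigma,sigma'} psi(sigma') / psi(sigma)
    e = -J * np.sum(sigma * np.roll(sigma, -1))   # diagonal zz bonds
    for i in range(N_SPINS):                      # off-diagonal x terms
        flipped = sigma.copy()
        flipped[i] = -flipped[i]
        e += -H_FIELD * np.exp(log_psi(flipped, a) - log_psi(sigma, a))
    return e

def estimate_energy(a, n_samples=20000, n_burn=1000):
    # Metropolis sampling of |psi(sigma)|^2, then average E_loc over the chain
    sigma = rng.choice([-1, 1], size=N_SPINS)
    acc = 0.0
    for step in range(n_burn + n_samples):
        i = rng.integers(N_SPINS)
        prop = sigma.copy()
        prop[i] = -prop[i]
        if rng.random() < np.exp(2 * (log_psi(prop, a) - log_psi(sigma, a))):
            sigma = prop
        if step >= n_burn:
            acc += local_energy(sigma, a)
    return acc / n_samples
```

For such a small system the estimate can be checked against exact enumeration of all 2^N configurations; the gradient step of Eq. (2) would then use the standard log-derivative estimator on the same samples.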
While there are exponentially many states in the Hilbert space, the low-energy sector of a local Hamiltonian is expected to occupy a small subspace where states obey area-law entanglement [61, 62], whereas a typical state obeys a volume law [63, 64]. Motivated by these considerations, we consider a class of networks that naturally describe quantum states that obey area-law entanglement. Pictorially, in such networks, the connections from the hidden neurons (representing the weights Λ) to the physical spins are quasi-local [51, 53–55].
In that case, it holds that

ψ(σ; Λ) = φ1(σȷ1, Λȷ1) × φ2(σȷ2, Λȷ2) × · · · ,    (3)

where σȷ = {σk}k∈ȷ denote (overlapping) subsets of neighboring spins with ∪ȷ σȷ = σ and Λȷ are the subsets of the network parameters (weights and biases) that are connected to the physical spins in ȷ.
Algorithm 1 Ensemble generation
procedure ({Λ}N_n=1)
    init: optimized parameters Λ
    for k independent times do:
        for n sampling steps do:
            Propose new parameters Λp = f(Λt)
            Accept with probability determined by the energy ⟨ ˆH⟩Λ and parameter T:
                Λt+1 = Paccept(Λ′|Λ; T)
    return the last m states for each k: {Λi | i = n − m, ..., n}k
A. Dataset: network parameter ensembles
The dataset we use for unsupervised detection of topological order consists of an ensemble of wavefunctions {|Ψ(Λ)⟩}l, parameterized by the set of network parameters {Λ}l. While, depending on the precise application, other choices are conceivable, we generate this ensemble such that the relative occurrence of a state |Ψ(Λ)⟩ is given by ρT(Λ) = exp(− ⟨ ˆH⟩Λ /T)/Z, with an appropriate normalization factor Z. As such, a small value of the “temperature-like” hyperparameter T corresponds to a “low-energy” ensemble, while large T parametrizes “high-energy” ensembles.
In practice, to generate this ensemble, we here first optimize the parameters Λ via Eq. (2) to obtain wavefunctions with the lowest energy expectation values. As Eq. (1) does not contain all possible states, this will, in general, only yield approximations to the exact low-energy eigenstates of ˆH. However, as long as it is able to capture all superselection sectors of the system as well as (a subset of) higher-energy states connecting these sectors, Eq. (1) will be sufficient for our purpose of detecting topological order or the absence thereof. We perform this optimization several times, Λ → Λ0_l, with different initial conditions, to obtain several “seeds”, Λ0_l; this is done to make sure we have a low-energy representative of all superselection sectors. Ideally, the dataset is sampled directly from the target probability distribution ρT if, for instance, one has access to an experimental system at finite temperature. Here, we adopt a Markov-chain-inspired procedure for generating the ensemble based on ρT for each of these seeds. Specifically, starting from a state Λ, we propose updates on a randomly chosen local block of parameters connected to the spins at sites ȷ,

Λ → Λ′ = {Λ1, Λ2, · · · , u(Λȷ), · · · , ΛN},    (4)

where the update u only depends on Λȷ. The proposed parameters Λ′ given the current parameters Λ are accepted with probability

Paccept(Λ′|Λ; T) = min{ 1, exp[ −(⟨ ˆH⟩Λ′ − ⟨ ˆH⟩Λ)/T ] } .    (5)

This means that if the proposed state Ψ(Λ′) has a lower energy expectation value than Ψ(Λ), then the proposal will be accepted; otherwise, it will be accepted with a probability determined by the Boltzmann factor. The entire ensemble generation procedure is summarized in Algorithm 1.
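The steps of Algorithm 1 and the acceptance rule of Eq. (5) can be sketched as follows; this is not the paper's code, and the quadratic "energy landscape" over eight parameters is a hypothetical stand-in for ⟨ ˆH⟩Λ.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate_ensemble(energy, seed_params, T, n_steps=5000, m_keep=100, block=2):
    # Markov chain over network parameters with the acceptance rule of Eq. (5)
    lam = seed_params.copy()
    e_cur = energy(lam)
    chain = []
    for _ in range(n_steps):
        prop = lam.copy()
        j = rng.integers(len(lam) - block + 1)
        prop[j:j + block] += rng.normal(scale=0.2, size=block)  # local update u
        e_prop = energy(prop)
        if rng.random() < min(1.0, np.exp(-(e_prop - e_cur) / T)):
            lam, e_cur = prop, e_prop
        chain.append(lam.copy())
    return chain[-m_keep:]   # keep the last m states, as in Algorithm 1

# toy stand-in for <H>_Λ: a smooth quadratic landscape over 8 parameters
toy_energy = lambda lam: float(np.sum(lam ** 2))
ensemble = generate_ensemble(toy_energy, np.zeros(8), T=0.1)
```

In the actual scheme, each update would only touch a local block Λȷ of the network parameters and the chain would be repeated for k independent seeds.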
B. Diffusion map
As proposed in Ref. 12, DM is ideally suited as an unsupervised ML algorithm to identify the presence and number of superselection sectors in a collection of states, such as {|Ψ(Λ)⟩}l defined above. To briefly review the key idea of the DM algorithm [32–35] and introduce notation, assume we are given a dataset X = {xl | l = 1, 2, ..., M}, consisting of M samples xl. Below we will consider the cases xl = Λl and xl = |Ψ(Λl)⟩; in the first case, the samples are the network parameters parametrizing the wavefunction and, in the second, the samples are the wavefunctions themselves.
To understand DM intuitively, let us define a diffusion process among the states xl ∈ X. The probability of state xl transitioning to xl′ is defined by the Markov transition matrix element pl,l′. To construct pl,l′, we introduce a symmetric and positive-definite kernel kϵ(xl, xl′) between states xl and xl′. Then the transition probability matrix
Figure 2. (a) Overview of the ML algorithm applied in this work: the “seeds” {Λ0} are computed using variational Monte Carlo (see Appendix B), a Markov-chain algorithm is used to generate the network parameter ensemble dataset (Sec. II A), then a similarity metric is used for the definition of kernels in the DM method (Sec. II B and Sec. II C), and finally k-means is applied to the low-dimensional embedding in the subspace provided by the dominant DM eigenvector components. (b) The square-lattice geometry for the toric code model, where the qubits ˆsi are defined on the links of the lattice (grey dots). The Hamiltonian [given in Eq. (16)] is written in terms of the operators ˆPP (supported by spins on plaquette P, denoted by the red square) and star ˆSS (supported by spins on star S, denoted by the blue links). The two blue lines along the x (y) directions denote the Wilson loop operators ˆW1,¯x (ˆW2,¯y) along the straight paths ¯x (¯y). (c) An illustration of the quasi-local ansatz in Eq. (17). The ansatz is a product over local functions φ of spins in a plaquette (or star), which depends on parameters {wXj, bX} for X = P (S) being a plaquette (or star).
pl,l′ is defined as

pl,l′ = kϵ(xl, xl′)/zl ,    zl = Σl′ kϵ(xl, xl′),    (6)
where the factor zl ensures probability conservation, Σl′ pl,l′ = 1 ∀ l. Spectral analysis of the transition probability matrix then leads to information on the global connectivity of the dataset X, which, in our context of X containing low-energy states, allows us to identify superselection sectors and, thus, topological order [12]. To quantify how strongly two samples xl and xl′ are connected, one introduces the 2t-step diffusion distance [32–35],

D2t(l, l′) = Σl′′ (1/zl′′) [(p^t)l,l′′ − (p^t)l′,l′′]² ,    (7)
where p^t denotes the t-th matrix power of the transition probability matrix p. It was shown that D2t can be computed from the eigenvalues λn and right eigenvectors ψn of the transition matrix p: with Σl′ pl,l′ (ψn)l′ = λn (ψn)l, and in descending order λn > λn+1, it follows that

D2t(l, l′) = Σ_{n=1}^{M−1} λn^{2t} [(ψn)l − (ψn)l′]²    (8)
after straightforward algebra [35]. Geometrically, this means that the diffusion distance is represented as a Euclidean distance (weighted with λn) if we perform the non-linear coordinate transformation xl → {(ψn)l, n = 0, . . . , M − 1}. Furthermore, as the global connectivity is seen from the long-time limit, t → ∞, of the diffusion distance, the largest eigenvalues are the most important for describing the connectivity. To be more precise, let us choose a kernel kϵ of the form
kϵ(xl, xl′) = exp( −[1 − S(xl, xl′)]/ϵ ) ,    (9)
where S is a local similarity measure which obeys S ∈ [0, 1], S(xl, xl′) = S(xl′, xl), and S(x, x) = 1. Here “local” means that S(xl, xl′) = Σi Si(xl, xl′), where Si(xl, xl′) only depends on the configuration of xl and
xl′ in the vicinity of site i. While we will discuss possible explicit forms of S for our quantum mechanical N-spin/qubit system in Sec. II C below, a natural choice for a classical system of N spins, xl = {Sl_i, (Sl_i)² = 1, i = 1, 2, . . . , N}, is Scl(xl, xl′) = Σi Sl_i · Sl′_i /N. In Eq. (9), ϵ plays the role of a “coarse-graining” parameter that is necessary as we only deal with finite datasets X: for given X, we generically expect kϵ(xl, xl′) = pl,l′ = δl,l′ as ϵ → 0, i.e., all samples are dissimilar if ϵ is sufficiently small and all eigenvalues λn approach 1. In turn, for ϵ → ∞ the coarse-graining parameter is so large that all samples become connected, kϵ(xl, xl′) → 1; as pl,l′ → 1/M, we will have λn>0 → 0, while the largest eigenvalue λ0 is always 1 (as a consequence of probability conservation). For values of ϵ in between these extreme limits, the DM spectrum contains information about X, including its topological structure: as shown in Ref. 12, the presence of k ∈ N distinct topological equivalence classes in X is manifested by a range of ϵ where λ1, . . . , λk−1 are all exponentially close (in ϵ) to 1, with a clear gap to λn≥k. Furthermore, the different samples l will cluster (with respect to the normal Euclidean measure, e.g., as can be captured with k-means) according to their topological equivalence class when plotted in the mapped (k − 1)-dimensional space {(ψ1)l, (ψ2)l, . . . , (ψk−1)l}. In the following, we will use this procedure to identify the superselection sectors in the ensemble of wave functions defined in Sec. II A. To this end, however, we first need to introduce a suitable similarity measure S, to be discussed next.
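The DM construction of Eqs. (6)-(9) is compact enough to sketch directly; the following is an illustrative implementation (not the authors' code), applied to a hypothetical similarity matrix with two disconnected "sectors" so that λ1 stays exponentially close to 1 while λ2 is gapped.

```python
import numpy as np

def diffusion_map(S, eps, n_components=3):
    # kernel of Eq. (9) from a similarity matrix S with entries in [0, 1]
    K = np.exp(-(1.0 - S) / eps)
    z = K.sum(axis=1)
    # transition matrix p = K / z (Eq. 6); its spectrum follows from the
    # symmetric matrix A = D^{-1/2} K D^{-1/2}, which shares eigenvalues with p
    A = K / np.sqrt(np.outer(z, z))
    lam, V = np.linalg.eigh(A)
    order = np.argsort(lam)[::-1]           # descending: lam_0 = 1 first
    lam, V = lam[order], V[:, order]
    psi = V / np.sqrt(z)[:, None]           # right eigenvectors of p
    return lam, psi[:, 1:n_components + 1]  # drop the constant psi_0

# toy dataset with two sectors: S close to 1 within a sector, small across
M = 40
labels = np.repeat([0, 1], M // 2)
S = np.where(labels[:, None] == labels[None, :], 0.9, 0.1)
np.fill_diagonal(S, 1.0)
lam, coords = diffusion_map(S, eps=0.1)
```

For this input, lam[0] = 1 by probability conservation, lam[1] is close to 1 because the two blocks are nearly disconnected, and the sign of the first non-trivial coordinate separates the two sectors, which is what a subsequent k-means step would pick up.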
C. Local similarity measure
A natural generalization of the abovementioned classical similarity measure Scl = Σi Sl_i · Sl′_i /N, which can be thought of as the (Euclidean) inner product in the classical configuration space, is to take the inner product in the Hilbert space of the quantum system,

Sq(Λl, Λl′) = |⟨Ψ(Λl)|Ψ(Λl′)⟩|² .    (10)
While this or other related fidelity measures for low-rank quantum states could be estimated efficiently with quantum simulation and computing setups [65–68], estimating Sq is generally a computationally expensive task on a classical computer, as it requires sampling over spin configurations in our variational procedure. To make the evaluation of the similarity measure more efficient, we here propose an alternative route that takes advantage of the fact that we use a local ansatz for ψ(σ; Λ), see Eq. (3). Our goal is to express the similarity measure directly as

Sn(Λl, Λl′) = (1/Nȷ) Σȷ f((Λl)ȷ, (Λl′)ȷ),    (11)
where f only compares a local block of parameters denoted by ȷ and is a function that can be quickly evaluated, without having to sample spin configurations. Furthermore, S(xl, xl′) = S(xl′, xl) can be ensured by choosing a function f that is symmetric in its arguments, and S ∈ [0, 1] is also readily implemented by setting Nȷ = Σȷ 1 (the number of blocks) and appropriately rescaling f such that f ∈ [0, 1]. The most subtle condition is

Sn(Λl, Λl′) = 1  ⇐⇒  |Ψ(Λl)⟩ ∝ |Ψ(Λl′)⟩ ,    (12)
since, depending on the precise network architecture used for ψ(σ; Λ), there are “gauge transformations” g ∈ G of the weights, Λl → g[Λl], with

|Ψ(Λl)⟩ = e^{iϑg} |Ψ(g[Λl])⟩    (13)

for some global phase ϑg. We want to ensure that

Sn(Λl, Λl′) = Sn(Λl, g[Λl′]) = Sn(g[Λl], Λl′)    (14)

for all such gauge transformations g ∈ G. A general way to guarantee Eq. (14) proceeds by replacing

Sn(Λl, Λl′) −→ max_{g,g′∈G} Sn(g[Λl], g′[Λl′]) .    (15)

However, in practice, it might not be required to iterate over all possible gauge transformations in G due to the locality of the similarity measure. In the following, we will use the toric code and a specific RBM variational ansatz as an example to illustrate these gauge transformations and how an appropriate function f in Eq. (11) and gauge invariance (14) can be implemented efficiently.
Finally, note that, while we focus on applying DM in this work, a similarity measure in terms of neural network parameters can also be used for other kernel techniques such as kernel PCA. Depending on the structure of the underlying dataset, DM has a clear advantage over kernel PCA: the former captures the global connectivity of the dataset rather than the subspace with the most variance that is extracted by the latter. This is why kernel PCA fails to identify, e.g., winding numbers in general datasets where DM still works well [12]. Specifically for our case study of the toric code below, we find that kernel PCA can also identify topological sectors for small T and without a magnetic field, h = 0, as a result of the simple data structure; however, only DM works well when h is turned on, as we discuss below.
III. EXAMPLE: TORIC CODE
Now we illustrate our DM-based ML algorithm using the toric code model [57], defined on an Lx × Ly square lattice with spin-1/2 operators or qubits on every bond, see Fig. 2(b), leading to a total of N = 2LxLy spins; throughout this work, we will assume periodic boundary conditions. Referring to all four spins on the edges of an elementary square (vertex) of the lattice as plaquette P (star S), the plaquette and star operators are defined
Figure 3. Gauge freedom of the RBM ansatz in Eq. (17). The following transformations only lead to a global phase: (a) Multiplying all the parameters of a plaquette (or star, not shown) by a minus sign, see Eq. (18a); (b) A π shift of a single parameter, see Eqs. (18b) and (18c); (c) A π/2 shift to the weights crossed by a string ¯ℓ, defined by g¯ℓ in Eq. (18e). The straight pink line represents the transformation on a non-contractible loop denoted by gy; (d) Same as (c) but for loops on the direct lattice and gℓ and g¯y, cf. Eq. (18d).
as ˆPP = Πi∈P ˆsz_i and ˆSS = Πi∈S ˆsx_i, respectively. The toric code Hamiltonian then reads as

ˆHtc = −JP ΣP ˆPP − JS ΣS ˆSS,    (16)
where the sums are over all plaquettes and stars of the lattice. All “stabilizers” ˆPP, ˆSS commute among each other and with the Hamiltonian. Focusing on JP, JS > 0, the ground states are obtained as the eigenstates with eigenvalue +1 under all stabilizers. A counting argument, taking into account the constraint ΠS ˆSS = ΠP ˆPP = 1, reveals that there are four exactly degenerate ground states for periodic boundary conditions.
To describe the ground states and low-energy subspace of the toric code model (16) variationally, we parameterize ψ(σ; Λ) in Eq. (1) using the ansatz

ψrbm(σ; Λ) = ΠP cos(bP + Σj∈P wPj σj) × ΠS cos(bS + Σj∈S wSj σj),    (17)

proposed in Ref. 58, where every plaquette P (star S) is associated with a “bias” bP (bS) and four weights wP,j (wS,j), all of which are chosen to be real here, i.e., Λ = {bP, bS, wP,j, wS,j}. This ansatz can be thought of as an RBM [46] (see Appendix A), as illustrated in Fig. 2(c), with the same geometric properties as the underlying toric code model. It is clear that Eq. (17) defines a quasi-local ansatz as it is of the form of Eq. (3), with ȷ enumerating all plaquettes and stars (and thus Nȷ = 2N). For this specific ansatz, the gauge transformations g ∈ G, as introduced in Sec. II C above, are generated by the following set of operations on the parameters bP, bS, wP,j, and wS,j:
1. For X being any plaquette or star, multiplying all biases and weights of that plaquette or star by −1 [see Fig. 3(a)],

gX,− : bX → −bX, wXj → −wXj,    (18a)

leaves the wave function invariant [ϑg = 0 in Eq. (13)].

2. Adding π to either the bias or any of the weights associated with the plaquette or star X [see Fig. 3(b)],

gX,π,b : bX → bX + π,    (18b)
gX,π,j : wXj → wXj + π,  j ∈ X,    (18c)

leads to an overall minus sign [ϑg = π in Eq. (13)].
3. For any closed loop ℓ (or ¯ℓ) on the direct (or dual) lattice, adding π/2 to all weights of the stars (plaquettes) that are connected to the spins crossed by the string [see Fig. 3(c-d)],

gℓ : wSj → wSj + π/2 ,  Sj ∈ ℓ,    (18d)
g¯ℓ : wPj → wPj + π/2 ,  Pj ∈ ¯ℓ,    (18e)

leads to ϑg = 0 or π in Eq. (13), depending on the length of the string. Note that any loop configuration L, which can contain an arbitrary number of loops, can be generated by the set {gS, gP, gx,y, g¯x,¯y}, where gS (gP) creates an elementary loop on the dual (direct) lattice encircling the star S (plaquette P), see Fig. 3(c,d), and gx,y (g¯x,¯y) creates a non-contractible loop on the direct (dual) lattice along the x or y direction. Since the length of any contractible loop is even, ϑg = 0 for any string transformation generated by gS and gP. Meanwhile, on an odd lattice, the gauge transformations gx,y (g¯x,¯y) involve an odd number of sites and thus lead to ϑg = π.
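As a numerical sanity check (not part of the paper), the ansatz of Eq. (17) can be evaluated on a small toy geometry to verify that the transformations (18a) and (18b) change the wavefunction by at most a global sign; the two overlapping 4-site blocks below are a hypothetical stand-in for the actual plaquette/star geometry.

```python
import itertools
import numpy as np

def psi_rbm(sigma, b, w, blocks):
    # Eq. (17): product over blocks X of cos(b_X + sum_{j in X} w_Xj sigma_j)
    amp = 1.0
    for X, sites in enumerate(blocks):
        amp *= np.cos(b[X] + np.dot(w[X], sigma[sites]))
    return amp

# hypothetical toy geometry: 4 spins shared by two 4-site blocks
blocks = [np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3])]
rng = np.random.default_rng(5)
b, w = rng.normal(size=2), rng.normal(size=(2, 4))

configs = [np.array(s) for s in itertools.product([-1, 1], repeat=4)]
psi0 = np.array([psi_rbm(s, b, w, blocks) for s in configs])

# Eq. (18a): flip all parameters of block 0 -> identical wavefunction
b_a, w_a = b.copy(), w.copy()
b_a[0], w_a[0] = -b_a[0], -w_a[0]
psi_a = np.array([psi_rbm(s, b_a, w_a, blocks) for s in configs])

# Eq. (18b): shift the bias of block 0 by pi -> overall minus sign
b_b = b.copy()
b_b[0] += np.pi
psi_b = np.array([psi_rbm(s, b_b, w, blocks) for s in configs])
```

Both checks rest only on cos(−x) = cos(x) and cos(x + π) = −cos(x), which is exactly the structure behind Eqs. (18a)-(18c).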
A highly inefficient way of dealing with this gauge redundancy would be to use a choice of Sn in Eq. (11) which is not invariant under any of the transformations in Eq. (18); this would, for instance, be the case by just taking the Euclidean distance of the weights,

Seu(Λl, Λl′) ∝ ||Λl − Λl′||² = ΣX [ (bl_X − bl′_X)² + Σj∈X (wl_Xj − wl′_Xj)² ] ,
where the sum over X involves all plaquettes and stars. Naively going through all possible gauge transformations to find the maximum in Eq. (15) would in principle rectify the lack of gauge invariance. However, since the number of gauge transformations scales exponentially with the system size N (this holds for each of the three classes, 1.-3., of transformations defined above), such an approach would become very expensive for large N. Luckily, the locality of the ansatz and of the similarity measure allows us to construct similarity measures that can be evaluated much faster: as an example, consider
Sn(Λl, Λl′) = 1/2 + (1/10N) ΣX max_{τX=±} [ Σj∈X cos 2(τX wl_Xj − wl′_Xj) + cos 2(τX bl_X − bl′_X) ] ,    (19)
which clearly obeys Sn(Λl, Λl′) = Sn(Λl′, Λl), Sn(Λl, Λl′) ∈ [0, 1], and locality [it is of the form of Eq. (11) with ȷ enumerating all X]. Concerning gauge invariance, first note that the choice of cos(·) immediately leads to invariance under Eq. (18a). Second, for each X we only have to maximize over two values (τX) to enforce invariance under Eqs. (18b) and (18c), i.e., the maximization only doubles the computational cost.
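Eq. (19) can be sketched directly in code; the version below is illustrative (not the authors' implementation) and, for concreteness, normalizes by the number of blocks X, each carrying one bias and four weights, so that Sn lies in [0, 1].

```python
import numpy as np

def similarity_sn(b1, w1, b2, w2):
    # Eq. (19); b*: biases, shape (n_blocks,); w*: weights, shape (n_blocks, 4)
    n_blocks = b1.shape[0]
    per_tau = []
    for tau in (1.0, -1.0):
        per_tau.append(np.cos(2 * (tau * w1 - w2)).sum(axis=1)
                       + np.cos(2 * (tau * b1 - b2)))
    best = np.maximum(per_tau[0], per_tau[1])  # max over tau_X = +/- per block
    return 0.5 + best.sum() / (10 * n_blocks)

rng = np.random.default_rng(4)
b, w = rng.normal(size=6), rng.normal(size=(6, 4))
```

By construction, Sn equals 1 for identical parameters and stays 1 under the gauge moves (18a)-(18c): cos 2(·) is even, is 2π-periodic in the weights, and the τX maximization absorbs block-wise sign flips.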
The “string” redundancy, see Eqs. (18d) and (18e), however, is not yet taken into account in Eq. (19). It can be formally taken care of by maximizing over all possible loop configurations, denoted by L,

Sstr(Λl, Λl′) = 1/2 + (1/10N) max_L { ΣX max_{τX=±} [ Σj∈X µL_Xj cos 2(τX wl_Xj − wl′_Xj) + cos 2(τX bl_X − bl′_X) ] } ,    (20)

where µL_Xj = −1 if Xj lives on a loop contained in L and µL_Xj = 1 otherwise. While there is an exponential number of such strings, Ref. 12 has proposed an algorithm to efficiently find an approximate maximum value.
In our case, this algorithm amounts to randomly choosing a plaquette P, a star S, or a direction d = x, y and then applying gS, gP, or gd=x,y to Λl in Eq. (19). If this does not decrease the similarity, keep that transformation; if it decreases the similarity, discard the gauge transformation. Repeat this procedure Ng times. In Ref. 12, Ng between 10³ and 10⁴ was found to be enough for a large system consisting of 18 × 18 square-lattice sites (a total of N = 2 × 18² qubits). On top of this, gS and gP are local and, hence, evaluating the change of the similarity under a gauge transformation only requires O(N⁰) work.
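The greedy stochastic search just described can be sketched generically; the single-site sign flips and the overlap with a fixed reference configuration below are hypothetical stand-ins for the actual gauge moves gS, gP, gd and the similarity of Eq. (19).

```python
import numpy as np

rng = np.random.default_rng(2)

def greedy_gauge_max(sim, params, gauge_moves, n_g=1000):
    # apply random gauge moves; keep each move whenever the similarity
    # does not decrease (the greedy scheme of Ref. 12)
    best = sim(params)
    for _ in range(n_g):
        move = gauge_moves[rng.integers(len(gauge_moves))]
        trial = move(params)
        s = sim(trial)
        if s >= best:
            params, best = trial, s
    return params, best

# toy illustration: moves are single-site sign flips, similarity is the
# fraction of sites matching a fixed reference configuration
ref = np.ones(10)

def flip(p, i):
    q = p.copy()
    q[i] = -q[i]
    return q

moves = [lambda p, i=i: flip(p, i) for i in range(10)]
sim = lambda p: float(np.mean(p == ref))
start = rng.choice([-1.0, 1.0], size=10)
fixed, best = greedy_gauge_max(sim, start, moves)
```

Because each accepted move is local, each similarity update in the real application costs only O(N⁰), as stated in the text.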
In the numerical simulations below, using Eq. (19) without sampling over loop configurations L turned out to be sufficient. The reason is that, for our Markov-chain-inspired sampling procedure for Λl (see Appendix C), updates that correspond to these loop transformations happen very infrequently. Furthermore, even if a few pairs of samples are incorrectly classified as distinct due to the string redundancy, the DM will still correctly capture the global connectivity and, hence, the absence or presence of topological sectors.
Figure 4. (a) DM spectrum for the topological phase at h = 0 and T = 0.1 using the neural network similarity measure in Eq. (19). Inset left: associated leading DM components; the color represents the loop-observable expectation values defined in (c-d). Inset right: DM spectrum in descending order at ϵ = 0.01, indicated by the dashed line. (b) Same as (a), but using exact overlaps Sq in Eq. (10) as the metric. (c) Color map for the non-local loop values ⟨W1⟩, ⟨W2⟩ in the left insets of (a) and (b). (d) Different straight Wilson loops ˆW1,¯xi (ˆW2,¯yi) along the x (y) direction, denoted by blue (red) lines. The loop values in the color map in (c) are spatial averages over all straight-loop expectation values (as in the equations for ⟨W1⟩, ⟨W2⟩).
IV. NUMERICAL RESULTS
We next demonstrate explicitly how the general procedure outlined above can be used to probe and analyze topological order in the toric code. We start from the pure toric code Hamiltonian defined in Eq. (16) using the variational RBM ansatz in Eq. (17). An ensemble of network parameters is generated by applying the procedure of Sec. II A (see also Algorithm 1) for a system size of N = 18 spins; the hyperparameters for ensemble generation and more details, including the form of u in Eq. (4), are given in Appendix C. From now on, we measure all energies in units of JP and set JS = JP = 1.
Let us first focus on the low-energy ensemble and choose T = 0.1 in Eq. (5). For the simple similarity measure in Eq. (19), which can be evaluated exactly in a time linear in the system size N, we find the DM spectrum shown in Fig. 4(a) as a function of ϵ in Eq. (9). We observe the hallmark feature of four superselection sectors [12]: there is a finite range of ϵ where there are four eigenvalues exponentially close to 1. The association of samples (in our case states) with these four sectors is thus expected to be visible in a scatter plot of the projected subspace spanned by the first three non-trivial eigenvectors ψ1,2,3 [12]; note that the zeroth eigenvector (ψ0)l = C is always constant, with eigenvalue λ = 1 from probability conservation. In fact, we can see these clusters already in the first two components, see the left inset in Fig. 4(a). A standard k-means algorithm is then applied to this projected subspace to identify the cluster number for each data point. To verify that the ML algorithm has correctly clustered the states according to the four physical sectors, we compute, for each state, the expectation values of the string operators,

ˆW1,¯x = Πi∈¯x ˆsx_i ,    ˆW2,¯y = Πi∈¯y ˆsx_i ,    (21)
where ¯x(¯y) are loops defined on the dual lattice winding
|
| 921 |
+
along the x(y) direction, shown as blue lines in Fig. 2(b).
|
| 922 |
+
We quantify the association of a state to physical sec-
|
| 923 |
+
tors by the average of a set of straight loops X(Y) wind-
|
| 924 |
+
ing around the x(y) direction, shown as blue (red) lines
|
| 925 |
+
in Fig. 4(d). Indicating this averaged expectation value
|
| 926 |
+
⟨W 1⟩, ⟨W 2⟩ in the inset of Fig. 4(a) using the color code
|
| 927 |
+
defined in Fig. 4(c), we indeed see that the clustering is
|
| 928 |
+
done correctly.
|
| 929 |
+
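The clustering pipeline described above (build a kernel from the pairwise similarities at scale ϵ, diagonalize the row-normalized diffusion-map transition matrix, and run k-means in the subspace of the leading non-trivial eigenvectors) can be sketched as follows. This is an illustrative sketch only: the kernel form exp[−(1 − S)/ϵ] and the toy two-cluster similarity matrix are assumptions, not the paper's exact Eq. (9) or data.

```python
import numpy as np

def diffusion_map(S, eps, n_components=3):
    """Diffusion map from a pairwise similarity matrix S (entries in [0, 1]).

    Assumed kernel: K = exp(-(1 - S)/eps). The transition matrix P is row
    stochastic, so the zeroth eigenvector is constant with eigenvalue 1.
    """
    K = np.exp(-(1.0 - S) / eps)
    P = K / K.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)           # lambda_0 = 1 comes first
    evals = evals.real[order]
    evecs = evecs.real[:, order]
    # Drop the trivial constant eigenvector psi_0 and keep psi_1, psi_2, ...
    return evals, evecs[:, 1:1 + n_components]

def kmeans(X, k, n_iter=50):
    """Minimal k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels
```

For a similarity matrix with two nearly disconnected blocks, two eigenvalues sit exponentially close to 1 and k-means on ψ1 recovers the two sectors, mirroring the four-sector structure seen in Fig. 4(a).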
To demonstrate that this is not a special feature of the similarity measure in Eq. (19), we have performed the same analysis, with the result shown in Fig. 4(b), using the full quantum mechanical overlap measure in Eq. (10). Quantitative details change but, as expected, four superselection sectors are clearly identified and the clustering is done correctly. We reiterate that the evaluation of the neural-network similarity measure in Eq. (19) [exact evaluation O(N)] is much faster than that of Eq. (10) [exact evaluation O(2^N), though it can be computed approximately with importance sampling] on a classical computer. Note, however, that once Sn is computed for all samples, the
Figure 5. (a) DM spectrum for the high-energy ensemble at h = 0 and T = 1. The inset is the spectrum at ϵ = 0.03, indicated by the dashed line in the main panel. (b) Spatially averaged straight Wilson loops ⟨W̄1(2)⟩ [see Fig. 4(c-d)] along the two directions for the states in (a), where the color encodes the energy density ⟨H⟩/N. (c) Leading DM components, where the color of the dots encodes ⟨W̄1(2)⟩ using the color map in Fig. 4(d). (d) DM spectrum for the trivial phase at h = 1.0 and T = 0.1 using the quantum metric Sq.
actual DM-based clustering takes the same amount of computational time for both approaches. Consequently, if there is a quantum simulator that can efficiently measure the quantum overlap in Eq. (10), or any other viable similarity measure for that matter, then we can equivalently use the "measured" similarity for an efficient clustering of the superselection sectors via the DM scheme.

As a next step, we demonstrate that the superselection sectors are eventually connected if we take into account states with sufficiently high energy. To this end, we repeat the same analysis but for an ensemble with T = 1. As can be seen in the resulting DM spectrum in Fig. 5(a), there is no value of ϵ for which more than one eigenvalue is (exponentially) close to 1 and separated from the rest of the spectrum by a clear gap. Here we again used the simplified measure in Eq. (19), but have checked that nothing changes qualitatively when using the overlap measure. To verify that this is the correct answer for the given dataset, we again computed the expectation value of the loop operators in Eq. (21) for each state in the ensemble. This is shown in Fig. 5(b), where we also use color to indicate the energy expectation value for each state. We can clearly see that the four low-energy (blue) sectors (with |W̄1,2| ≃ 1) are connected via high-energy (red) states (with |W̄1,2| ≪ 1). This agrees with the DM result that
all states are connected within the ensemble (topological order is lost). We can nonetheless investigate the clustering in the leading three non-trivial DM components ψ1,2,3. Focusing on a 2D projection in Fig. 5(c) for simplicity of presentation, we can see that the DM reveals very interesting structure in the data: the four lobes roughly correspond to the four colors blue, red, orange, and green associated with the four superselection sectors, and the states closer to |W̄1,2| = 1 (darker color) appear closer to the tips. Finally, note that the colors are arranged such that the red and green [orange and blue] lobes are on opposite ends, as expected, since they correspond to (W1, W2) ≃ (1, −1) and (−1, 1) [(−1, −1) and (1, 1)].

Another route to destroying topological order proceeds via the application of a magnetic field. To study this, we extend the toric code Hamiltonian according to

  Ĥ′_tc = Ĥ_tc − h ∑_i ŝ^z_i .   (22)

Clearly, in the limit h → ∞, the ground state is simply the state with all spins polarized along ŝ^z, and topological order is lost. Starting from the pure toric code model (h = 0) and turning on h reduces the gap of the "charge excitations", defined by flipping Ŝ_S from +1 in the toric code ground state to −1. Their condensation leads to a second-order quantum phase transition [69–72].

Before addressing the transition, let us study the large-h limit. We first note that our ansatz in Eq. (17) does not need to be changed, as it can capture the polarized phase as well. For instance, denoting the "northmost" (and "southmost") spin of the plaquette P (and star S) by j0(P) (and j0(S)), respectively, the spin-polarized state is realized for [see also Fig. 8(a) in the Appendix]

  b_P = b_S = −π/4 ,   w_Xj = { π/4 if j = j0(X), 0 otherwise }.   (23)
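The mechanism behind Eq. (23) can be checked directly: with bias −π/4 and weight π/4 on its single assigned spin j0(X), each cos factor equals 1 if that spin points up and 0 if it points down, so the product wavefunction is nonzero only for the fully polarized configuration. A minimal sketch, with the lattice structure abstracted away (the "covering" is simply an assumed assignment of one spin per factor, as in the text):

```python
import itertools
import numpy as np

def psi_covering(sigma, covering, b=-np.pi / 4, w=np.pi / 4):
    """Product ansatz in the spirit of Eqs. (17) and (23): one cos factor per
    plaquette/star X, with weight w only on its assigned spin j0(X), bias b."""
    return float(np.prod([np.cos(b + w * sigma[j0]) for j0 in covering]))

# Toy covering: 4 spins, 4 factors, each factor assigned a distinct spin.
covering = [0, 1, 2, 3]
amplitudes = {s: psi_covering(np.array(s), covering)
              for s in itertools.product([1, -1], repeat=4)}
# Each factor gives cos(0) = 1 for an up spin and cos(-pi/2) = 0 for a down
# spin, so only the all-up configuration has a nonzero amplitude.
```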
In fact, the spin-polarized state has many representations within our RBM ansatz in Eq. (17), including representations that are not simply related by the gauge transformations in Eq. (18). For instance, the association j → j0(X) of a spin with a plaquette or star can be changed, e.g., by using the "easternmost" spin. As discussed in more detail in Appendix A 2, this redundancy is a consequence of the product form of ψrbm(σ) in Eq. (17) and the fact that ψrbm(σ) is exactly zero if there is a single j with σj = −1; consequently, it is a special feature of the simple product nature of the spin-polarized ground state. While in general there can still be additional redundancies besides the aforementioned gauge transformations, we do not expect such a structured set of redundancies to hold for generic states. There are various ways of resolving this issue. The most straightforward one is to replace the simple overlap measure Sn in Eq. (11) by the direct overlap Sq in Eq. (10) for a certain fraction of pairs of samples l and l′. If this fraction is large enough, the DM algorithm will be able to recognize that clusters of network
Figure 6. DM spectra for low-energy ensembles with T = 0.3 at finite field h. (a) First 10 eigenvalues for the field values h = 0.475, 0.55, 0.575, 0.6, 0.7 at ϵ = 0.05. The dot markers (h = 0.475) show that the eigenvalue spectrum has a four-fold degeneracy, a signature of topological order. In comparison, for the spectra marked by triangular markers (h ≥ 0.55), such degeneracy is absent. A transition field value ht ≃ 0.55 is identified by observing that a gap opens in the degenerate eigenvalue spectra. This is consistent with what we have observed in the fidelity using the same dataset [see Appendix B 1]. (b) Eigenvectors projected onto the first two components for h = 0.475. The color encodes ⟨W̄1(2)⟩ with the color scheme of Fig. 4(c). The black crosses mark the k-means centers. (c) Same as (b) for h = 0.7. (d) Expectation values of the averaged straight Wilson loops ⟨W̄1(2)⟩ along the two directions for the states in (b). The color encodes the clustering results from k-means in the projected subspace of the eigenvectors shown in (b). (e) Same as (d) for the ensemble shown in (c).
parameters that might be distinct according to Sn actually correspond to identical wave functions. We refer to Appendix A 3, where this is explicitly demonstrated. We note, however, that kernel PCA will no longer work in this case; it will incorrectly classify connected samples as distinct, as it is based on the variance of the data rather than connectivity. For simplicity of presentation, we use Sq for all states in the main text and focus on DM.
The DM spectrum for a large magnetic field, h = 1, and low temperature, T = 0.1, is shown in Fig. 5(d). Clearly, there is no value of ϵ for which more than one eigenvalue is close to 1 while exhibiting a gap to the rest of the spectrum. This shows that, as expected, the magnetic field h has led to the loss of topological order.

To study the associated field-induced phase transition with our DM algorithm, we repeat the same procedure for various values of h. The resulting spectra for selected h are shown in Fig. 6(a). We see that there are still four sectors in the data for h = 0.55 that are absent for h = 0.575 and larger values. While the associated critical value of h is larger than expected [69–71], this is not a shortcoming of the DM algorithm but rather a consequence of our simple local variational ansatz in Eq. (17). By computing the fidelity as well as loop-operator expectation values, we can see that a critical value around h = 0.55 is the expected answer for our dataset (see Appendix B 1). More sophisticated ansätze for the wavefunction are expected to yield better values, but this is not the main focus of this work. More importantly, we see in Fig. 6(b) that the DM clustering of the states correctly reproduces the clustering according to the averaged loop-operator expectation values ⟨W̄j⟩ (again indicated with color). Alternatively, this can be seen in Fig. 6(d), where ⟨W̄j⟩ is indicated for the individual samples. Using four different colors for the four different clusters identified by the DM, we see that all states are clustered correctly. As expected based on the eigenvalues, there are no clear clusters anymore for larger h, Fig. 6(c); nonetheless, naively applying k-means clustering in ψ1,2,3 manages to discover some residual structure of the wavefunctions related to ⟨W̄j⟩, as demonstrated in Fig. 6(e).
V. SUMMARY AND DISCUSSION
In this work, we have described an unsupervised ML algorithm for quantum phases with topological order. We use neural-network parameters to efficiently represent an ensemble of quantum states, which are sampled according to their energy expectation values. To uncover the structure of the superselection sectors in the quantum states, we used the dimensional-reduction technique of diffusion maps and provided a kernel defined in terms of network parameters. As opposed to a kernel based on the overlap of wavefunctions (or other quantum mechanical similarity measures of states for that matter), this metric can be evaluated efficiently (within polynomial time) on a classical computer.

We illustrated our general algorithm using a quasi-local restricted Boltzmann machine (RBM) and the toric code model in an external field; the choice of network ansatz was inspired by previous works [58, 59] showing the existence of efficient representations of the low-energy spectrum in terms of RBMs. Allowing for spatially inhomogeneous RBM networks, we identified the "gauge symmetries" of the ansatz, i.e., the set of changes in the network parameters that do not change the wavefunction, apart from trivial global phase factors. We carefully designed a similarity measure that is gauge invariant, a key property as, otherwise, identical wavefunctions represented in different gauges would be falsely identified as distinct. We showed that the resulting unsupervised diffusion-map-based embedding of the wavefunctions is consistent with the expectation values of loop operators; it correctly captures the presence of superselection sectors and topological order at low energies and fields, as well as the lack thereof when higher-energy states are involved and/or the magnetic field is increased. We also verified our results using the full quantum mechanical overlap of wavefunctions as similarity measure.

On a more general level, our analysis highlights the importance of the following two key properties of diffusion maps: first, in the presence of different topological sectors, the leading eigenvectors of diffusion maps capture the connectivity of the data rather than, e.g., the variance, as is the case for PCA. For this reason, the clustering is still done correctly even if a fraction of pairs of wavefunctions is incorrectly classified as distinct due to the usage of an approximate similarity measure. This is why complementing the neural-network similarity measure, which has additional, state-specific redundancies in the large-field limit, by direct quantum mechanical overlaps for a certain fraction of pairs of states is sufficient to yield the correct classification. The second key property is that the diffusion map is a kernel technique. This means that the actual machine-learning procedure does not require the full wavefunctions as input; instead, only (some measure of) the kernel of all pairs of wavefunctions in the dataset is required. We have used this to effectively remove the gauge redundancy in the RBM parametrization of the states by a proper definition of the network similarity measure in Eq. (20). Since the evaluation of full quantum mechanical similarity measures, like the wavefunction overlap, is very expensive on classical computers, an interesting future direction would be to use the emerging quantum-computing resources to evaluate a similarity measure quantum mechanically. This could then be used as input for a diffusion-map-based clustering.

We finally point out that the ensemble of states we used in this work, which was based on sampling states according to their energy with respect to a Hamiltonian, is only one of many possibilities. The proposed technique of applying diffusion-map clustering using a gauge-invariant kernel in terms of the network parameters of a variational description of quantum many-body wavefunctions can, in principle, be applied to any ensemble of interest. For instance, to consider arbitrary local perturbations, one could generate an ensemble using finite-depth local unitary circuits. Alternatively, one could generate an ensemble based on (Lindbladian) time evolution to probe the stability of topological order against time-dependent perturbations or the coupling to a bath. We leave the investigation of such possibilities for future work.
VI. CODE AND DATA AVAILABILITY
The Monte Carlo simulations in this work were implemented in JAX [73]. Python code and data will be available at https://github.com/teng10/ml_toric_code/.
ACKNOWLEDGEMENTS

Y.T. acknowledges useful discussions with Dmitrii Kochkov, Juan Carrasquilla, Khadijeh Sona Najafi, Maine Christos, and Rhine Samajdar. Y.T. and S.S. acknowledge funding by the U.S. Department of Energy under Grant DE-SC0019030. M.S.S. thanks Joaquin F. Rodriguez-Nieva for a previous collaboration on DM [12]. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University.
Appendix A: Variational Ansatz: Restricted Boltzmann Machine
The variational ansatz in Eq. (17) is a further-restricted restricted Boltzmann machine (RBM), first introduced by Ref. 58. An RBM is a restricted class of Boltzmann machine with an "energy" function E_RBM(σ, h; Λ) depending on the network parameters Λ, where σ are physical spins and h = {h1, h2, · · · , hN | hi = ±1} are hidden spins (or hidden neurons), which are Ising variables. The parameters Λ define the coupling strengths among the physical and hidden spins. The restriction in an RBM is that couplings exist only between a physical spin σi and a hidden spin hj, with strength −wij, so that the "energy" function takes the form E_RBM(σ, h; Λ) = −∑_i a_i σ_i − ∑_i b_i h_i − ∑_{ij} w_{ij} σ_i h_j. It is a generative neural network that aims to model a probability distribution P based on the Boltzmann factor,

  P(σ; Λ) = (1/Z) ∑_h e^{−E_RBM(σ,h;Λ)} ,   (A1a)

with normalization

  Z = ∑_{σ,h} e^{−E_RBM(σ,h;Λ)} .   (A1b)

For the task of modeling a quantum wavefunction amplitude ψ(σ; Λ), RBMs can be used as a variational ansatz by extending the parameters Λ to complex numbers.

Further restricting the interlayer connections to the plaquette and star geometry of the toric code model [cf. Fig. 2(c)] and taking all parameters Λ to be purely imaginary, we recover the ansatz in Eq. (17) (up to the normalization factor Z̃),

  ψ(σ; Λ) = (1/Z̃) ∏_{X=P,S} ∑_{h_X=±1} e^{−i(∑_{j∈X} w_{Xj} σ_j + b_X) h_X}
          = (1/Z̃) ∏_{X=P,S} cos(∑_{j∈X} w_{Xj} σ_j + b_X) .   (A2)
Figure 7. RBM representations of the four toric code ground states in the eigenbasis [Eq. (A4)] of the loop operators Ŵ1, Ŵ2 in Eq. (A3a).
The cos(·) factors come from summing over the hidden neurons, and the ansatz factorizes into a product of individual plaquette (star) terms because of the restricted connections. The estimation of physical observables of a wavefunction based on the RBM ansatz requires a Monte Carlo sampling procedure, which we discuss in Appendix B.
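The hidden-spin sum that produces the cos(·) factors in Eq. (A2) is just the identity ∑_{h=±1} e^{−iah} = 2 cos(a), applied factor by factor. A small numerical check of this factorization, using toy plaquette index sets and arbitrary weights (not the paper's lattice; the factor of 2 per hidden spin is the part absorbed into the normalization Z̃):

```python
import itertools
import numpy as np

def psi_hidden_sum(sigma, factors):
    """Explicit sum over hidden spins h_X = +-1, one per factor X.
    `factors` is a list of (sites, weights, bias) tuples."""
    amp = 1.0 + 0.0j
    for sites, weights, bias in factors:
        a = sum(w * sigma[j] for j, w in zip(sites, weights)) + bias
        amp *= sum(np.exp(-1j * a * h) for h in (+1, -1))
    return amp

def psi_closed_form(sigma, factors):
    """Closed form after tracing out the hidden spins: 2 cos(a) per factor."""
    amp = 1.0
    for sites, weights, bias in factors:
        a = sum(w * sigma[j] for j, w in zip(sites, weights)) + bias
        amp *= 2.0 * np.cos(a)
    return amp

# Two overlapping toy "plaquettes" on three physical spins.
factors = [((0, 1), (0.3, -0.7), 0.1), ((1, 2), (0.4, 0.2), -0.5)]
```

The two evaluations agree for every spin configuration, which is precisely why the imaginary-parameter RBM reduces to a product of cosines.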
1. Ground state representations in different topological sectors
Placing the toric code model in Eq. (16) on the torus geometry, it is useful to define the loop operators,

  Ŵ1 = ∏_{i∈l̄x} ŝ^x_i ,   Ŵ2 = ∏_{i∈l̄y} ŝ^x_i ,   (A3a)
  V̂1 = ∏_{i∈lx} ŝ^z_i ,   V̂2 = ∏_{i∈ly} ŝ^z_i ,   (A3b)

where lx,y is a non-contractible loop along the x, y direction, and l̄x,y is the analogous loop on the dual lattice. Note that the loop operators along the two directions do not commute with each other, [Ŵ1, V̂2] ≠ 0 and [Ŵ2, V̂1] ≠ 0. However, since the Hamiltonian commutes with these loop operators, [Ŵ1,2, Ĥtc] = [V̂1,2, Ĥtc] = 0, it follows that the ground-state subspace is four-fold degenerate and spanned by the eigenvectors of the loop operators.
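The noncommutation can be verified on a minimal example: two loops that cross at exactly one site anticommute, because ŝ^x and ŝ^z anticommute on the shared site while the operators commute everywhere else. A two-site toy check (an illustrative sketch, not the full torus geometry):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

# A "W"-type loop acting with s^x on both sites, and a "V"-type loop acting
# with s^z only on site 0: the two loops share exactly one site.
W = np.kron(X, X)
V = np.kron(Z, I2)
# Sharing a single site implies anticommutation, W V = -V W, so the
# commutator [W, V] is nonzero, as claimed for (W_1, V_2) and (W_2, V_1).
```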
Suppose we work in the eigenbasis of Ŵ1,2; we define the four orthogonal ground states |ψi⟩ (i = 0, 1, 2, 3) that
Figure 8. (a-b) Two RBM representations [Eq. (A8)] of the polarized state. (c) A path that connects the representations for two spins in (a-b), which is shown explicitly in Table I.
span L as,

  Ŵ1 |ψ0⟩ = |ψ0⟩ ,   Ŵ2 |ψ0⟩ = |ψ0⟩ ,   (A4a)
  Ŵ1 |ψ1⟩ = |ψ1⟩ ,   Ŵ2 |ψ1⟩ = −|ψ1⟩ ,   (A4b)
  Ŵ1 |ψ2⟩ = −|ψ2⟩ ,   Ŵ2 |ψ2⟩ = |ψ2⟩ ,   (A4c)
  Ŵ1 |ψ3⟩ = −|ψ3⟩ ,   Ŵ2 |ψ3⟩ = −|ψ3⟩ .   (A4d)

The RBM ansatz in Eq. (A2) can represent eigenstates of Ŵ1,2 with eigenvalues (W1, W2) = (±1, ±1). Ref. [58] gave a representation of |ψ3⟩ with the parameters

  wPj = π/4 ,   bP = 0 ,   wSj = π/2 ,   bS = 0 .   (A5a)

On a system with an odd number of sites along the x and y directions, the other three degenerate states can be realized analogously by fixing the weights associated with the stars to wSj = 0, bS = 0. The four states can then be chosen by changing wPj and bP as shown in Fig. 7.
2. Network parameter redundancies in the polarized phase
In Sec. III, we identified a set of gauge transformations, Eq. (18), that leave a generic wavefunction parameterized by the RBM ansatz in Eq. (17) invariant up to a global phase [Eq. (13)]. Such gauge transformations should be taken into consideration when evaluating the similarity measure Sn. Moreover, we have numerically verified that, for states generated close to the exact toric code wavefunctions, Sn is a good proxy for the quantum measure Sq after the explicit removal of such redundancies via Sn in Eq. (19). However, as alluded to in the discussion of the large-h limit, there are state-specific redundancies that are generally not related by the gauge transformations in Eq. (18).

Let us illustrate such redundancies here for the polarized state |Ψ⟩ = |1, · · · , 1⟩z, which has all spins pointing up in the z-basis. Notice that the number of cos(·) factors in the wavefunction ansatz equals the number of spins. As a result, we can define a "covering" by assigning each individual spin to a single factor and choosing the weights such that all spins point up. Any such "covering" is a valid representation of the polarized state. For example, one representation is given by

  bP = bS = −π/4 ,   wSj = { π/4 if j = js(S), 0 otherwise } ,   and   wPj = { π/4 if j = jn(P), 0 otherwise } ,   (A6)

where js(S) denotes the "southmost" spin in the star S and jn(P) denotes the "northmost" spin in the plaquette P [see Fig. 8(a)]. Any such covering of the spins corresponds to a polarized state; for example, performing a "rotation" leads to the different covering in Fig. 8(b). In fact, because most amplitudes in the local z-basis are 0, there are very few constraints on the wavefunction amplitudes, and a continuous set of weights exists that represents the polarized state; the completely polarized state therefore has an infinite number of redundant representations.
To illustrate this, let us consider the simplest example of just two spins [the boxed region in Fig. 8(c)] with the same RBM ansatz; the example is easily generalized to more spins. For two spins, the ansatz is given by

  ψΛ(σA, σB) = cos(bS + wSA σA + wSB σB) cos(bP + wPA σA + wPB σB) ,   (A7)

where the weights Λ = {ΛS = {bS, wSA, wSB}, ΛP = {bP, wPA, wPB}} with ΛXj ∈ [0, π) for X = S or P fully determine the two-qubit physical state. For example, the following two choices of weights [Λ1 and Λ2, pictorially in
Path 1 (Λ1 → Λ3): ΛS varied along wSB = bS + wSA − π/2, ΛP fixed; product ψ = ψS × ψP.
Varied: wSA : [0, π/4), wSB : [π/4, −π/4), bS : [−π/4, 0). Fixed: wPA = π/4, wPB = 0, bP = −π/4.
  cos(bX + wXA + wXB):  ψS ≠ 0 if bS + wSA ≠ nπ/2, n ∈ Z;  ψP = 1;  ψ: 1 → 0 → 1
  cos(bX + wXA − wXB):  0;  0 ✓
  cos(bX − wXA + wXB):  cos(2bS − π/2) → 0;  0;  0 ✓
  cos(bX − wXA − wXB):  0;  0 ✓

Path 2 (Λ3 → Λ4): ΛP varied along wPB = bP − wPA + π/2, ΛS fixed.
Fixed: wSA = π/4, wSB = −π/4, bS = 0. Varied: wPA : [π/4, 0], wPB : [0, π/4], bP = −π/4.
  cos(bX + wXA + wXB):  1;  1;  1
  cos(bX + wXA − wXB):  0;  cos(2wPA − π/2) → 0;  0 ✓
  cos(bX − wXA + wXB):  0;  0 ✓
  cos(bX − wXA − wXB):  0;  0 ✓

Path 3 (Λ4 → Λ2): ΛS varied along wSB = −bS + wSA + π/2, ΛP fixed.
Varied: wSA = π/4, wSB : (−π/4, 0], bS : (0, −π/4]. Fixed: wPA = 0, wPB = π/4, bP = −π/4.
  cos(bX + wXA + wXB):  1;  1;  1
  cos(bX + wXA − wXB):  0;  0 ✓
  cos(bX − wXA + wXB):  0;  0 ✓
  cos(bX − wXA − wXB):  0;  0 ✓

Table I. A path going from Λ1 to Λ2 is composed of three steps. Path 1 (Λ1 → Λ3) is smooth except at the point wSA = π/4, wSB = −π/4, bS = 0, where the wavefunction vanishes. This is denoted by the arrows first decreasing to 0 before increasing to 1 in the first row. Paths 2 and 3 are both smooth. The last entry of each row illustrates that the wavefunction ψ remains in the polarized state along the path.
Fig. 8(c)] both parametrize the polarized state:

Λ1 = {bS = −π/4, wSA = 0, wSB = π/4, bP = −π/4, wPA = π/4, wPB = 0},    (A8a)
Λ2 = {bS = −π/4, wSA = π/4, wSB = 0, bP = −π/4, wPA = 0, wPB = π/4},    (A8b)
ψΛ1,2 = { 1,  σA = σB = 1,
        { 0,  otherwise.    (A8c)

Now, to illustrate the continuous redundancies, we construct a path in the parameter space going from Λ1 to Λ2. The path is composed of three steps [Fig. 8(c)],

Λ1 −(path 1)→ Λ3 −(path 2)→ Λ4 −(path 3)→ Λ2,    (A9)

where the intermediate parameters are given by

Λ3 = {bS = 0, wSA = π/4, wSB = −π/4, bP = −π/4, wPA = π/4, wPB = 0},    (A10)
Λ4 = {bS = 0, wSA = π/4, wSB = −π/4, bP = −π/4, wPA = 0, wPB = π/4}.    (A11)

Along each path component, referred to as paths 1 through 3 in Table I, the parameters of S (or P) are varied while the others are held fixed, all while remaining in the exactly polarized state. The path is continuous except at a singular point on path 1, where the wave function vanishes at Λsingular = {bS = 0, wSA = π/4, wSB = −π/4, bP = −π/4, wPA = π/4, wPB = 0}.
3. Resolving the special redundancies

In Appendix A 2, we explicitly showed that there can be a large set of redundancies for a given polarized state. Hence, for simplicity, in the main text we have used the direct overlap Sq in Eq. (10) as the relevant measure at finite field values. As discussed in the main text, a straightforward way to alleviate the redundancies in the similarity measure Sn of the network parameters in Eq. (19) is to complement it with the direct overlap. By using a combination of both measures, we reduce the computational cost relative to the direct overlap alone, since the similarity is cheap to compute. More specifically, we define a mixed measure Sm by replacing a random fraction (given by f) of the similarity-measure pairs {l, l′} by a rescaled overlap measure S̃q such that,
Sm(l, l′) = { S̃q(l, l′)  with probability f,
            { Sn(l, l′)  with probability 1 − f.    (A12)

The rescaling of the overlap measure Sq is necessary because we want to include the two measures on an equal footing; it is given by

S̃q = (Sq − nq)/(mq − nq) · (mn − nn) + nn,    (A13a)
mq = max(Sq),  nq = min(Sq),    (A13b)
mn = max(Sn),  nn = min(Sn).    (A13c)

For example, the minimum of the rescaled overlap coincides with the minimum of the similarity, min(S̃q) = min(Sn).
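As a concrete illustration, the rescaling of Eq. (A13) and the random replacement of Eq. (A12) can be sketched in a few lines of numpy. The function names and the storage of the pairwise measures as symmetric L×L matrices are our own assumptions for the sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rescale_overlap(S_q, S_n):
    """Eq. (A13): map the overlap measure S_q onto the range of the
    similarity measure S_n, so min(rescaled S_q) = min(S_n)."""
    m_q, n_q = S_q.max(), S_q.min()
    m_n, n_n = S_n.max(), S_n.min()
    return (S_q - n_q) / (m_q - n_q) * (m_n - n_n) + n_n

def mixed_measure(S_q, S_n, f=0.4):
    """Eq. (A12): replace a random fraction f of the similarity
    entries by the rescaled overlap, applied symmetrically in {l, l'}."""
    S_q_tilde = rescale_overlap(S_q, S_n)
    L = S_n.shape[0]
    mask = rng.random((L, L)) < f
    mask = np.triu(mask, 1)      # draw each pair {l, l'} once
    mask = mask | mask.T         # keep the measure matrix symmetric
    return np.where(mask, S_q_tilde, S_n)
```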
In Fig. 9, we demonstrate that, by using a mixed measure with a replacement fraction f = 0.4, our algorithm with DM is able to identify the presence (indicated by the shaded blue region for the smaller field values h = 0.475 and h = 0.55) and absence (h = 0.7) of superselection sectors across various field values, consistent with the predictions of the algorithm using the direct overlap (shown in Fig. 6). We note that with a mixed measure, DM is a natural technique since the algorithm looks for connectivity, whereas kernel PCA would fail to identify such a transition (since a fraction of the pairs of wave functions are incorrectly considered dissimilar by Sn, the leading kernel PCA components still show four separated clusters up to the largest magnetic field, h = 1).
Appendix B: Optimization with Variational Monte Carlo

To find the ground state |Ψ(Λ0)⟩ ∝ Σσ ψ(σ; Λ0) |σ⟩, we wish to minimize the energy expectation ⟨E⟩ = ⟨Ψ|Ĥ|Ψ⟩ / ⟨Ψ|Ψ⟩ (omitting the variational parameters Λ0 in this section), which is bounded from below by the ground-state energy by the variational principle. An exact computation of ⟨E⟩ is costly, as the summation enumerates exponentially many spin configurations σ as the system size increases. Here we use the variational Monte Carlo (VMC) importance-sampling algorithm to estimate such expectation values. The idea is to compute relative probabilities between different configurations and sample from the true wavefunction probability density |ψ(σ)|², without having to compute |ψ(σ)|² for all σ. To perform this algorithm, we initialize M random configurations {σi}, i = 1, …, M, and continue each with a random walk based on the previous configurations, hence forming M Markov chains. In particular, the Metropolis–Rosenbluth algorithm [74] is used to propose a next configuration σ′i from the current σi.
Figure 9. DM spectra for different field values h = 0.475, 0.55, 0.7 at T = 0.3, using a mixed similarity measure Sm with a fraction f = 0.4 in Eq. (A12). The blue shaded regions highlight the existence of a range of ϵ with a spectral gap between the degenerate eigenvalues and the decaying eigenvalues, indicating underlying superselection sectors. As the field value approaches the transition field hc, this region shrinks, and it disappears at high field h = 0.7, indicating the absence of sectors.
The proposed configuration σ′i is locally connected to σi according to the proposal function g(σ′|σ). For the toric code model, we use two types of proposals: spin flips and vertex flips. Here, we assume a probability p for proposing spin flips and, analogously, 1 − p for vertex flips, with all sites equally likely:

g(σ′|σ) = { p/ns,        for spin flips,
          { (1 − p)/nv,  for vertex flips,    (B1)

where ns and nv are the numbers of all possible spin and vertex flips. The acceptance of σ′ is determined by the probability

Paccept(σ → σ′) = min(|ψ(σ′)/ψ(σ)|², 1).    (B2)
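The sampling loop of Eqs. (B1) and (B2) can be sketched as follows. This is a minimal single-chain illustration assuming spins are stored as ±1 numpy arrays and a log-amplitude function is supplied; only the spin-flip move is spelled out (a vertex flip would act analogously on the four spins around a vertex), and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_chain(log_psi, propose, sigma0, n_steps, n_burn):
    """Sample configurations from |psi(sigma)|^2.  `log_psi` returns
    log psi(sigma) (its real part is all the ratio needs), `propose`
    implements the symmetric proposal g(sigma'|sigma) of Eq. (B1)."""
    sigma = sigma0.copy()
    samples = []
    for step in range(n_steps):
        sigma_new = propose(sigma)
        # acceptance probability min(|psi(sigma')/psi(sigma)|^2, 1), Eq. (B2)
        log_ratio = 2.0 * (log_psi(sigma_new) - log_psi(sigma))
        if np.log(rng.random()) < min(log_ratio, 0.0):
            sigma = sigma_new
        if step >= n_burn:          # discard b burn-in configurations
            samples.append(sigma.copy())
    return samples

def spin_flip(sigma):
    """Single-spin-flip proposal, one of the two move types."""
    s = sigma.copy()
    i = rng.integers(len(s))
    s[i] *= -1
    return s
```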
The random walks are repeated long enough that the final configurations at the tails of the chains, ΣMC = {σf}, approximate samples drawn from the probability distribution |ψ(σ)|². A certain number b of configurations at the start of each chain are discarded to reduce the bias from the initialization of the chains. The expectation of an observable Ô is then given by

⟨Ô⟩MC = Σσ ψ(σ)* ⟨σ|Ô|Ψ⟩ / Σσ |ψ(σ)|²,    (B3a)
      = Σσ |ψ(σ)|² [⟨σ|Ô|Ψ⟩/ψ(σ)] / Σσ |ψ(σ)|²,    (B3b)
      = (1/M) Σσ∈ΣMC ⟨σ|Ô|Ψ⟩/ψ(σ).    (B3c)

Defining a local value of the operator Ô as

Oloc = ⟨σ|Ô|Ψ⟩ / ψ(σ),    (B4)

the Monte Carlo estimate is the average of the local values over the Markov chains: ⟨Ô⟩MC = (1/M) Σσ∈ΣMC Oloc.
with respect to the weights Λ0 in terms of the local energy
|
| 1903 |
+
Eloc and wavefunction amplitude derivative Di:
|
| 1904 |
+
∂Λi⟨E⟩ = ⟨ElocDi⟩ − ⟨Eloc⟩⟨Di⟩
|
| 1905 |
+
(B5a)
|
| 1906 |
+
Eloc = ⟨σ| H |Ψ⟩
|
| 1907 |
+
ψ(σ)
|
| 1908 |
+
,
|
| 1909 |
+
Di = ∂Λiψ(σ)
|
| 1910 |
+
ψ(σ)
|
| 1911 |
+
(B5b)
|
| 1912 |
+
Finally, we use gradient descent with learning rate λ,
|
| 1913 |
+
Λi → Λi − λ∂Λi⟨E⟩,
|
| 1914 |
+
(B6)
|
| 1915 |
+
to minimize the energy expectation value. The gradient
|
| 1916 |
+
descent is performed by using an adaptive Adam opti-
|
| 1917 |
+
mizer [75]. We repeat this training step until empirical
|
| 1918 |
+
convergence.
|
| 1919 |
+
Note that the RBM ansatz can get stuck in local min-
|
| 1920 |
+
ima. To find the toric code ground state, we initialize
|
| 1921 |
+
the network parameters close to the analytic solutions in
|
| 1922 |
+
Eq. (A5).
|
| 1923 |
+
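A minimal sketch of the gradient estimator in Eqs. (B5) and (B6), assuming the local energies and log-derivatives have already been collected over M Monte Carlo samples. Plain gradient descent is shown for simplicity; the paper uses the Adam optimizer instead, and the function names are ours:

```python
import numpy as np

def vmc_gradient(E_loc, D):
    """Stochastic estimate of Eq. (B5a),
    d<E>/dLambda_i = <E_loc D_i> - <E_loc><D_i>.
    E_loc: shape (M,) local energies over the samples;
    D: shape (M, P) log-derivatives D_i = d_i psi(sigma)/psi(sigma)."""
    M = E_loc.shape[0]
    return (E_loc @ D) / M - E_loc.mean() * D.mean(axis=0)

def sgd_step(params, grad, lr=1e-2):
    """Gradient-descent update of Eq. (B6)."""
    return params - lr * grad
```

Note that the covariance form of Eq. (B5a) makes the gradient vanish when Eloc is constant over the samples, i.e. when the state is an eigenstate.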
1. Fidelity

To find the approximate ground states at finite field values h with step size ∆h, we initialize the weights to those optimized at the previous field value h − ∆h, and then use the current optimized weights as the initialization for the next step h + ∆h. A good indicator of a quantum phase transition is the fidelity F(h), defined as

F(h) = |⟨ψ(h)|ψ(h + ∆h)⟩|².    (B7)
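For normalized state vectors, Eq. (B7) is a one-liner. This sketch assumes dense state vectors, which is only feasible for small systems; in the VMC setting the overlap would itself be estimated by sampling:

```python
import numpy as np

def fidelity(psi_a, psi_b):
    """Eq. (B7): F = |<psi(h)|psi(h + dh)>|^2 for state vectors,
    normalizing the inputs first."""
    psi_a = psi_a / np.linalg.norm(psi_a)
    psi_b = psi_b / np.linalg.norm(psi_b)
    return abs(np.vdot(psi_a, psi_b)) ** 2
```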
Figure 10. Fidelity F as a function of field h. The red dashed line is drawn to guide the eye; the dip in fidelity indicates the critical field value hc ≃ 0.57.

The critical field hc is identified as a dip in the fidelity, indicating an abrupt change in the ground-state wavefunction. A field value of hc ≃ 0.57 (at the dashed line in Fig. 10) is found for the RBM ansatz. Note that one can obtain a more accurate field value by including loop expectations in the ansatz, as done in Ref. 59.
Appendix C: Ensemble generation

Using the algorithm outlined in Sec. 1, we can generate ensembles that deviate from the initial optimized parameters by setting the hyper-parameter T = 0.1, 0.3, 1. The other hyper-parameter choices for the ensembles are the number of independent chains k = 2, the length of each chain n = 250, and the number of samples kept m = n. The parameter proposal function we use either, with probability pm, randomly applies a minus sign, or otherwise randomly adds local noise, at a single spin site ȷ. More precisely,

f(Λ, ξ) = { f−,ȷ,      with probability pm,
          { flocal,ȷ,  with probability 1 − pm,    (C1a)

f−,ȷ = { −(Λ)i,  i ∈ ȷ,
       { (Λ)i,   i ∉ ȷ,    (C1b)

flocal,ȷ = { uniform(0, ξ) + (Λ)i,  i ∈ ȷ,
           { (Λ)i,                 i ∉ ȷ.    (C1c)
In the exact toric code state, f−,ȷ corresponds to acting with the σx operator at site ȷ to create a pair of m-particles. In the trivial phase, depending on the parametrization of the state, f−,ȷ can correspond to a single spin flip at site ȷ. The hyperparameters are chosen to be pm = 0.3 and ξ = 0.2. In Fig. 11, we visualize the ensembles by computing their loop expectations ⟨Wj⟩ at different field values.
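The proposal of Eq. (C1) can be sketched as follows, under our own illustrative assumption that the network parameters are grouped per spin site in a dictionary (the actual grouping follows the ansatz of Eq. (A5)):

```python
import numpy as np

rng = np.random.default_rng(2)

def propose_parameters(Lam, xi=0.2, p_m=0.3, site=None):
    """Eq. (C1): with probability p_m flip the sign of the parameters
    attached to one spin site (f_minus), otherwise add uniform noise
    in [0, xi) to them (f_local).  All other sites are left unchanged."""
    Lam_new = {j: v.copy() for j, v in Lam.items()}
    j = site if site is not None else rng.choice(list(Lam))
    if rng.random() < p_m:
        Lam_new[j] = -Lam_new[j]                                   # f_{-,j}, Eq. (C1b)
    else:
        noise = rng.uniform(0.0, xi, size=Lam_new[j].shape)
        Lam_new[j] = Lam_new[j] + noise                            # f_{local,j}, Eq. (C1c)
    return Lam_new
```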
Figure 11. Illustration of the diffusion processes for different parameters T and fields h at N = 18 spins. The loop expectation values ⟨W1,2⟩ form four distinct clusters in the two-dimensional plane for small T and h. For large T = 1 at all fields, and for intermediate T = 0.3 at higher fields h > 0.57, the clusters "diffuse" and topological order is lost. This "diffusion" process can be visualized by color coding the energy expectation ⟨H⟩.
[1] Pankaj Mehta, Marin Bukov, Ching-Hao Wang, Alexandre G. R. Day, Clint Richardson, Charles K. Fisher, and David J. Schwab, "A high-bias, low-variance introduction to Machine Learning for physicists," Physics Reports 810, 1–124 (2019).
[2] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborová, "Machine learning and the physical sciences," Rev. Mod. Phys. 91, 045002 (2019).
[3] Sankar Das Sarma, Dong-Ling Deng, and Lu-Ming Duan, "Machine learning meets quantum physics," Physics Today 72, 48–54 (2019), arXiv:1903.03516 [physics.pop-ph].
[4] Roger G. Melko, Giuseppe Carleo, Juan Carrasquilla, and J. Ignacio Cirac, "Restricted Boltzmann machines in quantum physics," Nature Physics 15, 887–892 (2019).
[5] Juan Carrasquilla, "Machine learning for quantum matter," Advances in Physics: X 5, 1797528 (2020).
[6] Juan Carrasquilla and Giacomo Torlai, "How To Use Neural Networks To Investigate Quantum Many-Body Physics," PRX Quantum 2, 040201 (2021).
[7] Anna Dawid, Julian Arnold, Borja Requena, Alexander Gresch, Marcin Płodzień, Kaelan Donatella, Kim A. Nicoli, Paolo Stornati, Rouven Koch, Miriam Büttner, Robert Okuła, Gorka Muñoz-Gil, Rodrigo A. Vargas-Hernández, Alba Cervera-Lierta, Juan Carrasquilla, Vedran Dunjko, Marylou Gabrié, Patrick Huembeli, Evert van Nieuwenburg, Filippo Vicentini, Lei Wang, Sebastian J. Wetzel, Giuseppe Carleo, Eliška Greplová, Roman Krems, Florian Marquardt, Michał Tomza, Maciej Lewenstein, and Alexandre Dauphin, "Modern applications of machine learning in quantum sciences," (2022), arXiv:2204.04198 [cond-mat, physics:quant-ph].
[8] Juan Carrasquilla and Roger G. Melko, "Machine learning phases of matter," Nature Physics 13, 431–434 (2017).
[9] Pengfei Zhang, Huitao Shen, and Hui Zhai, "Machine learning topological invariants with neural networks," Phys. Rev. Lett. 120, 066401 (2018).
[10] Yi Zhang and Eun-Ah Kim, "Quantum Loop Topography for Machine Learning," Phys. Rev. Lett. 118, 216401 (2017).
[11] Matthew J. S. Beach, Anna Golubeva, and Roger G. Melko, "Machine learning vortices at the Kosterlitz-Thouless transition," Phys. Rev. B 97, 045207 (2018).
[12] Joaquin F. Rodriguez-Nieva and Mathias S. Scheurer, "Identifying topological order through unsupervised machine learning," Nature Physics 15, 790–795 (2019).
[13] Japneet Singh, Mathias S. Scheurer, and Vipul Arora, "Conditional generative models for sampling and phase transition indication in spin systems," SciPost Phys. 11, 043 (2021).
[14] Y.-H. Tseng and F.-J. Jiang, "Berezinskii–Kosterlitz–Thouless transition – a universal neural network study with benchmarks," Results in Physics 33, 105134 (2022).
[15] Eliska Greplova, Agnes Valenti, Gregor Boschung, Frank Schäfer, Niels Lörch, and Sebastian D. Huber, "Unsupervised identification of topological phase transitions using predictive models," New J. Phys. 22, 045003 (2020).
[16] Yi Zhang, Roger G. Melko, and Eun-Ah Kim, "Machine learning Z2 quantum spin liquids with quasiparticle statistics," Phys. Rev. B 96, 245119 (2017).
[17] Hsin-Yuan Huang, Richard Kueng, Giacomo Torlai, Victor V. Albert, and John Preskill, "Provably efficient machine learning for quantum many-body problems," Science 377, eabk3333 (2022).
[18] Nicolas Sadoune, Giuliano Giudici, Ke Liu, and Lode Pollet, "Unsupervised Interpretable Learning of Phases From Many-Qubit Systems," (2022), arXiv:2208.08850 [cond-mat, physics:quant-ph].
[19] Alex Cole, Gregory J. Loges, and Gary Shiu, "Interpretable Phase Detection and Classification with Persistent Homology," arXiv e-prints, arXiv:2012.00783 (2020) [cond-mat.stat-mech].
[20] Dan Sehayek and Roger G. Melko, "Persistent Homology of Z2 Gauge Theories," Phys. Rev. B 106, 085111 (2022), arXiv:2201.09856 [cond-mat, physics:hep-th].
[21] Niklas Käming, Anna Dawid, Korbinian Kottmann, Maciej Lewenstein, Klaus Sengstock, Alexandre Dauphin, and Christof Weitenberg, "Unsupervised machine learning of topological phase transitions from experimental data," Machine Learning: Science and Technology 2, 035037 (2021).
[22] Chi-Ting Ho and Daw-Wei Wang, "Robust identification of topological phase transition by self-supervised machine learning approach," New Journal of Physics 23, 083021 (2021).
[23] Min-Ruei Lin, Wan-Ju Li, and Shin-Ming Huang, "Quaternion-based machine learning on topological quantum systems," arXiv e-prints (2022), arXiv:2209.14551 [quant-ph].
[24] Gilad Margalit, Omri Lesser, T. Pereg-Barnea, and Yuval Oreg, "Renormalization-group-inspired neural networks for computing topological invariants," Phys. Rev. B 105, 205139 (2022).
[25] Sungjoon Park, Yoonseok Hwang, and Bohm-Jung Yang, "Unsupervised learning of topological phase diagram using topological data analysis," Phys. Rev. B 105, 195115 (2022).
[26] Ming-Chiang Chung, Tsung-Pao Cheng, Guang-Yu Huang, and Yuan-Hong Tsai, "Deep learning of topological phase transitions from the point of view of entanglement for two-dimensional chiral p-wave superconductors," Phys. Rev. B 104, 024506 (2021).
[27] Yuan-Hong Tsai, Kuo-Feng Chiu, Yong-Cheng Lai, Kuan-Jung Su, Tzu-Pei Yang, Tsung-Pao Cheng, Guang-Yu Huang, and Ming-Chiang Chung, "Deep learning of topological phase transitions from entanglement aspects: An unsupervised way," Phys. Rev. B 104, 165108 (2021).
[28] Alejandro José Uría-Álvarez, Daniel Molpeceres-Mingo, and Juan José Palacios, "Deep learning for disordered topological insulators through entanglement spectrum," arXiv e-prints, arXiv:2201.13306 (2022) [cond-mat.dis-nn].
[29] Paolo Molignini, Antonio Zegarra, Evert van Nieuwenburg, R. Chitra, and Wei Chen, "A supervised learning algorithm for interacting topological insulators based on local curvature," SciPost Phys. 11, 073 (2021).
[30] Simone Tibaldi, Giuseppe Magnifico, Davide Vodola, and Elisa Ercolessi, "Unsupervised and supervised learning of interacting topological phases from single-particle correlation functions," arXiv e-prints (2022), arXiv:2202.09281 [cond-mat.supr-con].
[31] Andrea Tirelli and Natanael C. Costa, "Learning quantum phase transitions through topological data analysis," Phys. Rev. B 104, 235146 (2021).
[32] R. R. Coifman, S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker, "Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps," Proceedings of the National Academy of Sciences 102, 7426–7431 (2005).
[33] Boaz Nadler, Stephane Lafon, Ioannis Kevrekidis, and Ronald Coifman, "Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators," in Advances in Neural Information Processing Systems, Vol. 18 (MIT Press, 2005).
[34] Boaz Nadler, Stéphane Lafon, Ronald R. Coifman, and Ioannis G. Kevrekidis, "Diffusion maps, spectral clustering and reaction coordinates of dynamical systems," Applied and Computational Harmonic Analysis 21, 113–127 (2006).
[35] Ronald R. Coifman and Stéphane Lafon, "Diffusion maps," Applied and Computational Harmonic Analysis 21, 5–30 (2006).
[36] Mathias S. Scheurer and Robert-Jan Slager, "Unsupervised Machine Learning and Band Topology," Phys. Rev. Lett. 124, 226401 (2020).
[37] Yang Long, Jie Ren, and Hong Chen, "Unsupervised Manifold Clustering of Topological Phononics," Phys. Rev. Lett. 124, 185501 (2020).
[38] Li-Wei Yu and Dong-Ling Deng, "Unsupervised learning of non-Hermitian topological phases," Phys. Rev. Lett. 126, 240402 (2021).
[39] Yefei Yu, Li-Wei Yu, Wengang Zhang, Huili Zhang, Xiaolong Ouyang, Yanqing Liu, Dong-Ling Deng, and L. M. Duan, "Experimental unsupervised learning of non-Hermitian knotted phases with solid-state spins," npj Quantum Information 8, 116 (2022), arXiv:2112.13785 [quant-ph].
[40] Yanming Che, Clemens Gneiting, Tao Liu, and Franco Nori, "Topological quantum phase transitions retrieved through unsupervised machine learning," Phys. Rev. B 102, 134213 (2020).
[41] En-Jui Kuo and Hossein Dehghani, "Unsupervised Learning of Symmetry Protected Topological Phase Transitions," (2021), arXiv:2111.08747 [cond-mat, physics:quant-ph].
[42] Eran Lustig, Or Yair, Ronen Talmon, and Mordechai Segev, "Identifying Topological Phase Transitions in Experiments Using Manifold Learning," Phys. Rev. Lett. 125, 127401 (2020).
[43] Alexander Lidiak and Zhexuan Gong, "Unsupervised Machine Learning of Quantum Phase Transitions Using Diffusion Maps," Phys. Rev. Lett. 125, 225701 (2020).
[44] Gaurav Gyawali, Mabrur Ahmed, Eric Aspling, Luke Ellert-Beck, and Michael J. Lawler, "Revealing microcanonical phase diagrams of strongly correlated systems via time-averaged classical shadows," (2022), arXiv:2211.01259 [cond-mat, physics:quant-ph].
[45] Apimuk Sornsaeng, Ninnat Dangniam, Pantita Palittapongarnpim, and Thiparat Chotibut, "Quantum diffusion map for nonlinear dimensionality reduction," Phys. Rev. A 104, 052410 (2021).
[46] Giuseppe Carleo and Matthias Troyer, "Solving the quantum many-body problem with artificial neural networks," Science 355, 602–606 (2017).
[47] Xun Gao and Lu-Ming Duan, "Efficient representation of quantum many-body states with deep neural networks," Nature Communications 8, 662 (2017).
[48] Giuseppe Carleo, Yusuke Nomura, and Masatoshi Imada, "Constructing exact representations of quantum many-body systems with deep neural networks," Nat. Commun. 9, 5322 (2018).
[49] Sirui Lu, Xun Gao, and L.-M. Duan, "Efficient representation of topologically ordered states with restricted Boltzmann machines," Phys. Rev. B 99, 155136 (2019).
[50] Or Sharir, Amnon Shashua, and Giuseppe Carleo, "Neural tensor contractions and the expressive power of deep neural quantum states," (2021), 10.48550/arXiv.2103.10293.
[51] Jing Chen, Song Cheng, Haidong Xie, Lei Wang, and Tao Xiang, "Equivalence of restricted Boltzmann machines and tensor network states," Phys. Rev. B 97, 085104 (2018).
[52] Yusuke Nomura, "Investigating Network Parameters in Neural-Network Quantum States," (2022), arXiv:2202.01704 [cond-mat, physics:physics, physics:quant-ph].
[53] Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma, "Quantum Entanglement in Neural Network States," (2017).
[54] Zhih-Ahn Jia, Lu Wei, Yu-Chun Wu, Guang-Can Guo, and Guo-Ping Guo, "Entanglement area law for shallow and deep quantum neural network states," New J. Phys. 22, 053022 (2020).
[55] Song Cheng, Jing Chen, and Lei Wang, "Information Perspective to Probabilistic Modeling: Boltzmann Machines versus Born Machines," Tech. Rep. (2017), arXiv:1712.04144 [cond-mat, physics:physics, physics:quant-ph, stat].
[56] Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo, "Neural-network quantum state tomography," Nature Physics 14, 447–450 (2018).
[57] A. Yu. Kitaev, "Fault-tolerant quantum computation by anyons," Annals of Physics 303, 2–30 (2003), arXiv:quant-ph/9707021.
[58] Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma, "Machine learning topological states," Phys. Rev. B 96, 195145 (2017).
[59] Agnes Valenti, Eliska Greplova, Netanel H. Lindner, and Sebastian D. Huber, "Correlation-Enhanced Neural Networks as Interpretable Variational Quantum States," (2021), arXiv:2103.05017 [cond-mat, physics:quant-ph].
[60] Ze-Pei Cian, Mohammad Hafezi, and Maissam Barkeshli, "Extracting Wilson loop operators and fractional statistics from a single bulk ground state," arXiv e-prints (2022), arXiv:2209.14302 [cond-mat.str-el].
[61] M. B. Hastings, "An area law for one-dimensional quantum systems," J. Stat. Mech. 2007, P08024 (2007).
[62] F. Verstraete, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, "Criticality, the Area Law, and the Computational Power of Projected Entangled Pair States," Phys. Rev. Lett. 96, 220601 (2006).
[63] Michael M. Wolf, Frank Verstraete, Matthew B. Hastings, and J. Ignacio Cirac, "Area Laws in Quantum Systems: Mutual Information and Correlations," Phys. Rev. Lett. 100, 070502 (2008).
[64] J. Eisert, M. Cramer, and M. B. Plenio, "Colloquium: Area laws for the entanglement entropy," Rev. Mod. Phys. 82, 277–306 (2010).
[65] Jing-Ling Chen, Libin Fu, Abraham A. Ungar, and Xian-Geng Zhao, "Alternative fidelity measure between two states of an N-state quantum system," Phys. Rev. A 65, 054304 (2002).
[66] Paulo E. M. F. Mendonça, Reginaldo d. J. Napolitano, Marcelo A. Marchiolli, Christopher J. Foster, and Yeong-Cherng Liang, "Alternative fidelity measure between quantum states," Phys. Rev. A 78, 052330 (2008).
[67] Zbigniew Puchała and Jarosław Adam Miszczak, "Bound on trace distance based on superfidelity," Phys. Rev. A 79, 024302 (2009).
[68] J. A. Miszczak, Z. Puchała, P. Horodecki, A. Uhlmann, and K. Zyczkowski, "Sub- and super-fidelity as bounds for quantum fidelity," (2008), arXiv:0805.2037 [quant-ph].
[69] I. S. Tupitsyn, A. Kitaev, N. V. Prokof'ev, and P. C. E. Stamp, "Topological multicritical point in the phase diagram of the toric code model and three-dimensional lattice gauge Higgs model," Phys. Rev. B 82, 085114 (2010).
[70] Simon Trebst, Philipp Werner, Matthias Troyer, Kirill Shtengel, and Chetan Nayak, "Breakdown of a Topological Phase: Quantum Phase Transition in a Loop Gas Model with Tension," Phys. Rev. Lett. 98, 070602 (2007).
[71] Fengcheng Wu, Youjin Deng, and Nikolay Prokof'ev, "Phase diagram of the toric code model in a parallel magnetic field," Phys. Rev. B 85, 195104 (2012).
[72] Michael Schuler, Seth Whitsitt, Louis-Paul Henry, Subir Sachdev, and Andreas M. Läuchli, "Universal Signatures of Quantum Critical Points from Finite-Size Torus Spectra: A Window into the Operator Content of Higher-Dimensional Conformal Field Theories," Phys. Rev. Lett.
|
| 2481 |
+
117, 210401 (2016), arXiv:1603.03042 [cond-mat.str-el].
|
| 2482 |
+
[73] James
|
| 2483 |
+
Bradbury,
|
| 2484 |
+
Roy
|
| 2485 |
+
Frostig,
|
| 2486 |
+
Peter
|
| 2487 |
+
Hawkins,
|
| 2488 |
+
Matthew James Johnson, Chris Leary, Dougal Maclau-
|
| 2489 |
+
rin, George Necula, Adam Paszke, Jake VanderPlas,
|
| 2490 |
+
Skye Wanderman-Milne, and Qiao Zhang, “JAX: com-
|
| 2491 |
+
posable transformations of Python+NumPy programs,”
|
| 2492 |
+
(2018).
|
| 2493 |
+
[74] Nicholas Metropolis,
|
| 2494 |
+
Arianna W. Rosenbluth,
|
| 2495 |
+
Mar-
|
| 2496 |
+
shall N. Rosenbluth, Augusta H. Teller,
|
| 2497 |
+
and Edward
|
| 2498 |
+
Teller, “Equation of State Calculations by Fast Comput-
|
| 2499 |
+
ing Machines,” J. Chem. Phys. 21, 1087–1092 (1953).
|
| 2500 |
+
[75] Diederik
|
| 2501 |
+
P
|
| 2502 |
+
Kingma
|
| 2503 |
+
and
|
| 2504 |
+
Jimmy
|
| 2505 |
+
Ba,
|
| 2506 |
+
“Adam:
|
| 2507 |
+
A
|
| 2508 |
+
method for stochastic optimization,” arXiv preprint
|
| 2509 |
+
arXiv:1412.6980 (2014).
|
| 2510 |
+
|
M9E0T4oBgHgl3EQf0QJr/content/tmp_files/load_file.txt ADDED
    The diff for this file is too large to render. See raw diff.

MtE1T4oBgHgl3EQfZQTY/vector_store/index.pkl ADDED (+3 -0)
    version https://git-lfs.github.com/spec/v1
    oid sha256:98dcb0f0f13047a09ba4c2792b709389f2f4144946722fc922a8c2c60e199505
    size 125656

NNAyT4oBgHgl3EQfs_m_/content/2301.00588v1.pdf ADDED (+3 -0)
    version https://git-lfs.github.com/spec/v1
    oid sha256:611fc60be4b080da36a0737b07381d89a22a09812d7de212214a318da2150c21
    size 518657

NNAyT4oBgHgl3EQfs_m_/vector_store/index.faiss ADDED (+3 -0)
    version https://git-lfs.github.com/spec/v1
    oid sha256:93f18c36a8324f70c699985fa0ec64c251007aecb18c07148ee7212e0b65d4e7
    size 4390957

NNAyT4oBgHgl3EQfs_m_/vector_store/index.pkl ADDED (+3 -0)
    version https://git-lfs.github.com/spec/v1
    oid sha256:61232da48886596ad2f1b5d88ec86dd7dc43e6cd29b07d571061f3b9f87b6e67
    size 135819

NNFQT4oBgHgl3EQfWDbE/content/tmp_files/2301.13303v1.pdf.txt ADDED (+1774 lines)
Variational sparse inverse Cholesky approximation for latent Gaussian processes via double Kullback-Leibler minimization

Jian Cao*1, Myeongjong Kang*2, Felix Jimenez2, Huiyan Sang2, Florian Schafer3, Matthias Katzfuss1

Abstract

To achieve scalable and accurate inference for latent Gaussian processes, we propose a variational approximation based on a family of Gaussian distributions whose covariance matrices have sparse inverse Cholesky (SIC) factors. We combine this variational approximation of the posterior with a similar and efficient SIC-restricted Kullback-Leibler-optimal approximation of the prior. We then focus on a particular SIC ordering and nearest-neighbor-based sparsity pattern resulting in highly accurate prior and posterior approximations. For this setting, our variational approximation can be computed via stochastic gradient descent in polylogarithmic time per iteration. We provide numerical comparisons showing that the proposed double-Kullback-Leibler-optimal Gaussian-process approximation (DKLGP) can sometimes be vastly more accurate than alternative approaches such as inducing-point and mean-field approximations at similar computational complexity.
1. Introduction

Gaussian process (GP) priors are popular models for unknown functions in a variety of settings, including geostatistics (e.g., Stein, 1999; Banerjee et al., 2004; Cressie & Wikle, 2011), computer model emulation (e.g., Sacks et al., 1989; Kennedy & O’Hagan, 2001; Gramacy, 2020), and machine learning (e.g., Rasmussen & Williams, 2006; Deisenroth, 2010). Latent GP (LGP) models, such as generalized GPs, assume a Gaussian or non-Gaussian distribution for the data conditional on a GP (e.g., Diggle et al., 1998; Chan & Dong, 2011). LGPs extend GPs to a large class of settings, including noisy, categorical, and count data. However, LGP inference is generally analytically intractable and hence requires approximations. In addition, direct GP inference is prohibitive for large datasets due to cubic scaling in the data size. There are two main challenges for (L)GPs in many applications: One is to specify or learn a suitable kernel for the GP, and the other is carrying out fast inference for a given kernel. In this paper, we make no contributions to the former and instead focus on the latter challenge: We assume that a parametric kernel form is given and propose an efficient approximation method for LGP inference via structured variational learning.

Many approaches to scaling GPs to large datasets were reviewed in Heaton et al. (2019) and Liu et al. (2020), including low-rank approaches with a small number of pseudo points that are popular in machine learning. Such low-rank GP approximations have been combined with variational inference for GPs (e.g., Titsias, 2009; Hensman et al., 2013) and LGPs (e.g., Hensman et al., 2015; Leibfried et al., 2020). A highly promising approach to achieve GP scalability is given by nearest-neighbor Vecchia approximations from spatial statistics (e.g., Vecchia, 1988; Stein et al., 2004; Datta et al., 2016; Katzfuss & Guinness, 2021), which are optimal with respect to forward Kullback-Leibler (KL) divergence under the restriction of sparse inverse Cholesky (SIC) factors of the covariance matrix (Schäfer et al., 2021a). Such SIC approximations have several attractive properties (e.g., as reviewed by Katzfuss et al., 2022). They result in a valid joint density function given by the product of univariate conditional Gaussians, each of which can be independently computed in cubic complexity in the number of neighbors. This allows straightforward mini-batch subsampling with unbiased gradient estimators (Cao et al., 2022). For the ordering and sparsity pattern used here, the number of neighbors needs to grow only polylogarithmically with the data size to achieve ϵ-accurate approximations for Matérn-type kernels up to boundary effects (Schäfer et al., 2021a) due to the screening effect (Stein, 2011a). Many existing GP approximations, including low-rank and partially-independent conditional approaches, can be viewed as special cases of SIC corresponding to particular orderings and sparsity patterns (Katzfuss & Guinness, 2021). SIC using our ordering and sparsity does not exhibit the same limitations as low-rank approximations (Stein, 2014) and can hence be significantly more accurate for non-latent (i.e., directly observed) GPs (Cao et al., 2022).

*Equal contribution. 1Department of Statistics and Institute of Data Science, Texas A&M University, College Station, Texas, USA. 2Department of Statistics, Texas A&M University, College Station, Texas, USA. 3School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA. Correspondence to: Matthias Katzfuss <katzfuss@tamu.edu>.

arXiv:2301.13303v1 [stat.ML] 30 Jan 2023

Approximating latent GPs via double SIC-KL-minimization

Figure 1. Double KL minimization for approximating a latent Gaussian f given data y: Based on a forward-KL-optimal SIC approximation p̂(f) of the prior p(f), we obtain an SIC-restricted reverse-KL-optimal variational approximation q̂(f) to the posterior p(f|y).

SIC approximations of LGPs are more challenging. For LGPs with Gaussian noise, applying SIC approximations to the noisy responses reduces accuracy, and SIC approximations of the latent field may not be scalable (e.g., Katzfuss & Guinness, 2021). Existing approaches addressing this challenge (Datta et al., 2016; Katzfuss & Guinness, 2021; Schäfer et al., 2021a; Geoga & Stein, 2022) do not consider estimation using stochastic gradient descent (SGD). For non-Gaussian LGPs, Laplace SIC approximations (Zilber & Katzfuss, 2021) are straightforward but can be inaccurate. Liu & Liu (2019) combined an SIC-type approximation to the prior with variational inference based on a variational family of Gaussians with a sparse Cholesky factor of the covariance matrix, but we are not aware of results guaranteeing that the covariance-Cholesky factor exhibits (approximate) sparsity under random ordering. Wu et al. (2022) combined SIC-type approximations of LGPs with mean-field variational inference, but the latter may be inaccurate when there are strong correlations in the GP posterior (MacKay, 1992).

To achieve scalable and accurate inference for LGPs, we propose a variational family of SIC Gaussian distributions and combine it with a SIC approximation to the GP prior (see Figure 1). Our approach is double-KL-optimal in the sense that the variational approximation is reverse-KL-optimal for a given log normalizer (i.e., evidence) and our prior SIC approximation, which is available in closed form, is forward-KL-optimal for a given sparsity pattern (Schäfer et al., 2021a). Within our double-Kullback-Leibler-optimal Gaussian-process framework (DKLGP), we then focus on a particular ordering and nearest-neighbor-based sparsity pattern resulting in highly accurate prior and posterior approximations. We adopt a novel computational trick based on the concept of reduced ancestor sets for achieving efficient and scalable LGP inference. For this setting, our variational approximation can be computed via stochastic gradient descent in polylogarithmic time per iteration. While inducing-point methods assume that unobserved points depend on data only through inducing points (e.g., Frigola et al., 2014; Hensman et al., 2015), our method allows fast and accurate KL-optimal prediction based on the screening effect. Our numerical comparisons show that DKLGP can be vastly more accurate than state-of-the-art alternatives such as inducing-point and mean-field approximations at a similar computational complexity.
2. Methodology

2.1. Model

Assume we have a vector y = (y_1, ..., y_n)^⊤ of noisy observations of a latent GP f(·) ∼ GP(µ, K) at inputs x_1, ..., x_n ∈ R^d, such that p(y|f) = ∏_{i=1}^n p(y_i|f_i), where

    f = (f_1, ..., f_n)^⊤ ∼ N_n(µ, K)    (1)

with µ_i = µ(x_i) and K_ij = K(x_i, x_j). Throughout, we view the inputs x_i as fixed (i.e., non-random) and hence do not explicitly condition on them.

Unless y|f follows a Gaussian distribution, inference (such as computing the posterior p(f|y)) generally cannot be carried out in closed form. In addition, even for Gaussian likelihoods, direct inference scales as O(n^3) and is thus computationally infeasible for large n. To address these challenges, we propose an approximation based on double KL minimization.
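As a concrete illustration of the data model in (1), the sketch below simulates f ∼ N_n(µ, K) and conditionally independent non-Gaussian observations. The squared-exponential kernel and the Bernoulli-logistic likelihood are our own illustrative choices; the paper's theory targets Matérn-type kernels and a generic likelihood p(y_i|f_i).

```python
import numpy as np

rng = np.random.default_rng(0)

def k_se(X, Y, length=0.3):
    # Squared-exponential kernel as a stand-in for the generic kernel K in (1).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

n, d = 200, 2
X = rng.uniform(size=(n, d))            # fixed inputs x_1, ..., x_n
mu = np.zeros(n)                        # prior mean mu_i = mu(x_i)
K = k_se(X, X) + 1e-6 * np.eye(n)       # K_ij = K(x_i, x_j), plus jitter
f = mu + np.linalg.cholesky(K) @ rng.standard_normal(n)  # f ~ N_n(mu, K)

# Conditionally independent observations, p(y|f) = prod_i p(y_i|f_i);
# a Bernoulli-logistic likelihood is one non-Gaussian example.
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-f)))
```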
2.2. Variational sparse inverse Cholesky approximation

Consider a lower-triangular sparsity set S^q ⊂ {1, ..., n}^2, with {(i, i) : i = 1, ..., n} ⊂ S^q and such that i ≥ j for all (i, j) ∈ S^q. Our preferred choice of S^q will be discussed in Section 2.5, but typically we will have (i, j) ∈ S^q if x_i and x_j are “close.” Corresponding to S^q, define the family of distributions Q = {N_n(ν, (VV^⊤)^{-1}) : ν ∈ R^n, V ∈ R^{n×n}, V ∈ S^q}, where we write V ∈ S^q if (i, j) ∈ S^q for all V_ij ≠ 0. It is straightforward to show that any q ∈ Q can be represented in ordered conditional form as q(f) = ∏_{i=1}^n q(f_i | f_{s^q_i}), where s^q_i = {j > i : (j, i) ∈ S^q} for i = 1, ..., n−1 and s^q_n = ∅.

We approximate the posterior p(f|y) by the closest distribution in Q in terms of reverse KL divergence:

    q̂(f) = argmin_{q ∈ Q} KL( q(f) ‖ p(f|y) ).

We have KL(q(f) ‖ p(f|y)) = log p(y) − ELBO(q), where p(y) does not depend on q, and so q̂ satisfies

    q̂(f) = argmax_{q ∈ Q} ELBO(q).    (2)

Proposition 2.1. The ELBO in (2) can be written up to an additive constant of n/2 as

    ELBO(q) = ∑_{i=1}^n [ E_q log p(y_i|f_i) − ((ν − µ)^⊤ L_{:,i})^2 / 2 + log(V_{ii}^{-1} L_{ii}) − ‖V^{-1} L_{:,i}‖^2 / 2 ],    (3)

where L is the inverse Cholesky factor of K such that K^{-1} = LL^⊤, and L_{:,i} denotes its ith column.

All proofs can be found in Appendix C.
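To make the structure of the ELBO in (3) concrete, here is a dense small-n sketch. It is illustrative only: it ignores the sparsity of V and L that makes the method scalable, and the Gauss-Hermite quadrature used for E_q log p(y_i|f_i) (and its order) is our own choice, not a detail fixed by the paper.

```python
import numpy as np
from scipy.linalg import solve_triangular

def elbo(L, V, nu, mu, y, loglik, n_gh=20):
    # Dense evaluation of (3), up to the additive constant n/2:
    # sum_i [ E_q log p(y_i|f_i) - ((nu-mu)^T L[:,i])^2 / 2
    #         + log(L[i,i] / V[i,i]) - ||V^{-1} L[:,i]||^2 / 2 ],
    # with K^{-1} = L L^T and q = N(nu, (V V^T)^{-1}); L, V lower triangular.
    n = len(nu)
    Vinv = solve_triangular(V, np.eye(n), lower=True)
    sd = np.sqrt((Vinv**2).sum(axis=0))       # marginal std of f_i under q
    z, w = np.polynomial.hermite_e.hermegauss(n_gh)
    w = w / w.sum()                           # quadrature weights for N(0, 1)
    total = 0.0
    for i in range(n):
        e_ll = np.sum(w * loglik(y[i], nu[i] + sd[i] * z))  # E_q log p(y_i|f_i)
        r = (nu - mu) @ L[:, i]
        u = Vinv @ L[:, i]
        total += e_ll - r**2 / 2 + np.log(L[i, i] / V[i, i]) - u @ u / 2
    return total
```

A quick sanity check: with L = V = I, ν = µ = y = 0 and the Gaussian log-likelihood −(y_i − f_i)²/2, each summand equals −1/2 − 1/2, so the total is −n.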
2.3. Approximating the prior via a second KL minimization

Even for a sparse V, computing the ELBO in (3) is prohibitively expensive for large n, because computing L (or any of its columns) from K generally requires O(n^3) time. To avoid this, we replace the prior p(f) defined in (1) by a Gaussian distribution that minimizes a second KL divergence under an SIC constraint.

Specifically, consider a second lower-triangular sparsity set S^p ⊂ {1, ..., n}^2, which may be the same as S^q. We define the corresponding set of distributions P = {N_n(µ̃, (L̃L̃^⊤)^{-1}) : µ̃ ∈ R^n, L̃ ∈ R^{n×n}, L̃ ∈ S^p}. We approximate the prior p(f) by the closest approximation in P in terms of forward KL divergence:

    p̂(f) = argmin_{p̃ ∈ P} KL( p(f) ‖ p̃(f) ).    (4)

By a slight extension of Schäfer et al. (2021a, Thm. 2.1), we can show that this optimization problem has an efficient closed-form solution.

Proposition 2.2. The solution to (4) is p̂(f) = N_n(f | µ, (L̂L̂^⊤)^{-1}), where the nonzero entries of the ith column of L̂ can be computed in O(|S^p_i|^3) time as

    L̂_{S^p_i, i} = b_i (b_{i,1})^{-1/2},  with  b_i = K^{-1}_{S^p_i, S^p_i} e_1,    (5)

and S^p_i = {j : (j, i) ∈ S^p} is an ordered set with elements in increasing order (i.e., the first element is i).

Throughout, we index matrices before inverting, so that K^{-1}_{S^p_i, S^p_i} := (K_{S^p_i, S^p_i})^{-1}.

The approximation in Proposition 2.2 is equivalent to an ordered conditional approximation (Vecchia, 1988) of the prior density p(f) = ∏_{i=1}^n p(f_i | f_{(i+1):n}) by

    p̂(f) = ∏_{i=1}^n p(f_i | f_{s^p_i}) = ∏_{i=1}^n N(f_i | η_i, σ_i^2),

where η_i = µ_i − L̂^⊤_{s^p_i, i}(f_{s^p_i} − µ_{s^p_i}) / L̂_{i,i} and σ_i^2 = L̂_{i,i}^{-2}, with s^p_i = S^p_i \ {i}.
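The closed form (5) translates directly into code. The following is a minimal sketch (our own helper, not the authors' implementation) that uses dense kernel indexing for clarity; a scalable version would evaluate only the required kernel entries.

```python
import numpy as np

def sic_prior_column(K, Sp_i):
    # Nonzero entries of the i-th column of the forward-KL-optimal SIC
    # factor L-hat, per (5): b_i = (K_{Sp_i,Sp_i})^{-1} e_1, rescaled by
    # b_{i,1}^{-1/2}.  Sp_i must be increasingly ordered, so Sp_i[0] == i.
    Ksub = K[np.ix_(Sp_i, Sp_i)]
    e1 = np.zeros(len(Sp_i))
    e1[0] = 1.0
    b = np.linalg.solve(Ksub, e1)    # O(|Sp_i|^3)
    return b / np.sqrt(b[0])
```

With the full pattern S^p_i = {i, ..., n}, the columns assemble the exact inverse Cholesky factor, i.e., L̂L̂^⊤ = K^{-1}; with nearest-neighbor conditioning sets, the same formula yields the Vecchia/SIC approximation.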
2.4. Computing the ELBO based on ancestor sets

Plugging p̂(f) into (2), the ELBO in (3) becomes

    ELBO(q) = ∑_{i=1}^n [ E_q log p(y_i|f_i) − ((ν − µ)^⊤ L̂_{:,i})^2 / 2 + log(V_{ii}^{-1} L̂_{ii}) − ‖V^{-1} L̂_{:,i}‖^2 / 2 ],    (6)

with the ith summand depending on L̂ only via its ith column L̂_{:,i}, whose nonzero entries can be computed in O(|S^p_i|^3) time using (5).

We need to compute V^{-1} L̂_{:,i} and V^{-1} e_i, which appears in E_q log p(y_i|f_i) (see Section 2.6) and where e_i is a vector whose ith entry is one and all others are zero. The nonzero entry of e_i is a subset of the nonzero entries of L̂_{:,i}, and hence we focus our discussion on computing V^{-1} L̂_{:,i}. Solving this sparse triangular system in principle requires O(|S^q|) time.

However, it is possible to speed up computation by omitting rows and columns of V that do not correspond to the ancestor set of S^p_i with respect to S^q, which is defined as A_i = { j : ∃ L = {(j, l_1), (l_1, l_2), ..., (l_{a−1}, l_a), (l_a, l)} s.t. L ⊂ S^q, l ∈ S^p_i }. Ancestor sets are properties of the directed acyclic graphs that can be used to represent our triangular sparsity structures, as illustrated in Appendix B.

Proposition 2.3. (V^{-1} L̂_{:,i})_j = 0 for all j ∉ A_i.

Thus, we have

    ‖V^{-1} L̂_{:,i}‖ = ‖V^{-1}_{A_i, A_i} L̂_{A_i, i}‖,    (7)

where V^{-1}_{A_i, A_i} L̂_{A_i, i} can be computed in O(|A_i| |S^q_i|) time.
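The ancestor set A_i is ordinary reachability in the directed acyclic graph encoded by S^q. A sketch of this computation (our own helper, for illustration; 0-based indices, with S^q given as a list of (row, column) pairs):

```python
from collections import defaultdict, deque

def ancestor_set(Sq, Sp_i):
    # A_i: all rows j from which some l in Sp_i is reachable through chains
    # of nonzeros (j, l1), (l1, l2), ..., (la, l) in the lower-triangular
    # pattern Sq, interpreted as edges of a DAG.
    children = defaultdict(list)          # column l -> rows j with (j, l) in Sq
    for j, l in Sq:
        children[l].append(j)
    seen, queue = set(Sp_i), deque(Sp_i)
    while queue:
        l = queue.popleft()
        for j in children[l]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return sorted(seen)
```

Proposition 2.3 then justifies restricting the sparse triangular solve for V^{-1}L̂_{:,i} to the rows and columns indexed by A_i.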
| 340 |
+
2.5. Maximin ordering and nearest-neighbor sparsity

Schäfer et al. (2021a) proposed a sparsity pattern $S$ based on reverse-maximum-minimum-distance (r-maximin) ordering (see Figure 2 for an illustration). R-maximin ordering picks the last index $i_n$ arbitrarily (often in the center of the input domain), and then the previous indices are sequentially selected for $j = n-1, n-2, \ldots, 1$ as $i_j = \arg\max_{i \notin I_j} \min_{l \in I_j} \mathrm{dist}(x_l, x_i)$, where $I_j = \{i_{j+1}, \ldots, i_n\}$. Define $\ell_{i_j} = \min_{l \in I_j} \mathrm{dist}(x_l, x_{i_j})$. For notational simplicity, we assume throughout that our indexing follows r-maximin ordering (e.g., $f_j = f_{i_j}$ and $\ell_j = \ell_{i_j}$). We can then define the sparsity pattern $S_i = \{j \geq i : \mathrm{dist}(x_j, x_i) \leq \rho\ell_i\}$ for some fixed $\rho \geq 1$. We can compute $\mathrm{dist}(x_j, x_i)$ as the Euclidean distance between the inputs, potentially in a transformed input space (see Section 2.6 for more details). The conditioning sets are all of similar size $|S_i| = O(\rho^d) \approx m = |S|/n$ under mild assumptions on the regularity of the input locations. Schäfer et al. (2021a) proved that a highly accurate approximation of the prior can be obtained using $S^p = S$ with $\rho = O(\log n)$ for kernels $K$ that are Green's functions of elliptic boundary-value problems (similar to Matérn kernels up to boundary effects) and demonstrated high numerical accuracy of the posterior using $S^q = S$ for Gaussian likelihoods. For non-Gaussian likelihoods, this implies highly accurate approximations to the posterior when a second-order Taylor expansion can adequately approximate the posterior.

Approximating latent GPs via double SIC-KL-minimization

Figure 2. Reverse maximin ordering on a grid (small gray dots) of size n = 60 × 60 = 3,600 on a square. For three different indices i ((a) i = n − 12, (b) i = n − 100, (c) i = n − 289), we show the i-th ordered input (▲), the subsequently ordered n − i inputs (•), the distance ℓi to the nearest neighbor (−), the neighboring subsequent inputs Si (■) within a (yellow) circle of radius ρℓi (here, ρ = 2), the reduced ancestors Ãi (+), and the ancestors Ai (×).
While this means that our DKLGP can achieve high accuracy by choosing $S^p = S^q = S$, the resulting ancestor sets can grow roughly linearly with $n$ (e.g., see Figure 3a). Hence, evaluating the ELBO would often be prohibitively expensive for large $n$. However, it is possible to ignore most ancestors in (7) and only incur a small approximation error. Specifically, consider reduced ancestor sets $\tilde A_i = \{j \geq i : \mathrm{dist}(x_j, x_i) \leq \rho\ell_j\}$, where the last subscript is now a $j$, not an $i$. As illustrated in Figure 2, we have $S_i \subset \tilde A_i$ (because $\ell_j \geq \ell_i$ for $j \geq i$) and approximately $\tilde A_i \subset A_i$. The reduced ancestor sets are of size $|\tilde A_i| = O(\rho^d \log n) = O(m \log n)$ and can all be computed together in $O(nm\log^2 n)$ time (Schäfer et al., 2021b). Hence, reduced ancestor sets can be orders of magnitude smaller than full ancestor sets (see Figures 3a and 6).
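The ordering, the sparsity sets $S_i$, and the reduced ancestor sets $\tilde A_i$ above can be sketched as follows (a plain $O(n^2)$ illustration for clarity; the near-linear algorithms of Schäfer et al. (2021a;b) are what one would use in practice, and all names below are ours):

```python
import numpy as np

def r_maximin_order(X):
    """Greedy reverse-maximin ordering: the last index is picked
    arbitrarily, then indices are selected last-to-first, each time
    taking the point farthest from those already selected."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    picked = [0]                       # i_n, picked arbitrarily
    ell = {0: np.inf}
    for _ in range(n - 1):
        dmin = D[:, picked].min(axis=1)
        dmin[picked] = -np.inf
        i = int(dmin.argmax())         # farthest from the selected set
        ell[i] = dmin[i]
        picked.append(i)
    order = picked[::-1]               # positions 1..n now hold i_1..i_n
    return np.array(order), np.array([ell[i] for i in order])

rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 2))
order, ell = r_maximin_order(X)
Xo = X[order]                          # re-indexed to follow the ordering
n = len(Xo)
D = np.sqrt(((Xo[:, None, :] - Xo[None, :, :]) ** 2).sum(-1))
rho = 2.0
# nearest-neighbor sparsity sets and reduced ancestor sets
S = [{j for j in range(i, n) if D[j, i] <= rho * ell[i]} for i in range(n)]
A_red = [{j for j in range(i, n) if D[j, i] <= rho * ell[j]} for i in range(n)]
```

Because $\ell_j \geq \ell_i$ for $j \geq i$ under this ordering, $S_i \subseteq \tilde A_i$ holds by construction, as the paper notes.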
Claim 2.4. For Matérn-type LGPs with exponential-family likelihoods, $(V^{-1}\hat L_{:,i})_j \approx 0$ for all $j \notin \tilde A_i$, where $V$ maximizes the ELBO in (6), under mild conditions.

We provide a non-rigorous justification for this claim in Appendix C. Together, Proposition 2.3 and Claim 2.4 imply that $\|V^{-1}\hat L_{:,i}\| \approx \|V^{-1}_{\tilde A_i,\tilde A_i}\hat L_{\tilde A_i,i}\|$ (as illustrated in Figure 3b), and so replacing the former by the latter in the ELBO causes negligible error (Figure 3c).
2.6. Optimization of the ELBO

The class of distributions $Q = \{N_n(\nu, (VV^\top)^{-1}) : \nu \in \mathbb{R}^n, V \in \mathbb{R}^{n\times n}, V \in S^q\}$ has $n$ parameters in $\nu$ and $|S|$ parameters in $V$. We propose to find the optimal $\hat q \in Q$ by minimizing our approximation of $-\mathrm{ELBO}(q)$ with respect to these $O(nm)$ unknown parameters via minibatch stochastic gradient descent. For each minibatch $B$, this requires computing the gradient of
$$\sum_{i \in B} \Big( \mathbb{E}_q \log p(y_i|f_i) - \big((\nu - \mu)^\top \hat L_{:,i}\big)^2/2 + \log\big(V^{-1}_{ii}\hat L_{ii}\big) - \|V^{-1}_{\tilde A_i,\tilde A_i}\hat L_{\tilde A_i,i}\|^2/2 \Big) \qquad (8)$$
using automatic differentiation.
For Gaussian observations with $y_i|f_i \sim N(f_i, \tau_i^2)$, we have $-2\,\mathbb{E}_q \log p(y_i|f_i) = \big((y_i - \nu_i)^2 + \|V^{-1}e_i\|^2\big)/\tau_i^2 + \log\tau_i^2 + \log 2\pi$. For more general distributions $p(y_i|f_i)$, we can use the Monte Carlo gradient estimator (Kingma & Welling, 2014) and approximate $\mathbb{E}_q \log p(y_i|f_i) \approx (1/L)\sum_{l=1}^{L} \log p(y_i|f_i^{(l)})$, where $f_i^{(l)} = \nu_i + (V^{-1}e_i)^\top z^{(l)}$ and $z^{(l)} \overset{iid}{\sim} N_n(0, I_n)$.
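For instance, for the Bernoulli-logit likelihood used later in Section 3, the reparameterized Monte Carlo estimate can be sketched as follows (our own toy parameter values; `sd` stands in for $\|V^{-1}e_i\|$):

```python
import numpy as np

def mc_expected_loglik(y, nu, sd, L=1000, rng=None):
    """Reparameterized Monte Carlo estimate of E_q log p(y|f) for the
    Bernoulli-logit likelihood, where f ~ N(nu, sd**2) under q; sd plays
    the role of ||V^{-1} e_i|| in the paper's notation."""
    rng = np.random.default_rng(0) if rng is None else rng
    f = nu + sd * rng.standard_normal(L)   # f^(l) = nu_i + sd * z^(l)
    # log p(y|f) for y in {0, 1}, computed stably via log-sum-exp
    loglik = -np.logaddexp(0.0, np.where(y == 1, -f, f))
    return float(loglik.mean())

exact_at_mean = -np.logaddexp(0.0, -1.0)   # log sigmoid(1), the sd -> 0 limit
est = mc_expected_loglik(y=1, nu=1.0, sd=0.5, L=200_000)
```

Since $\log\sigma(\cdot)$ is concave, the estimate lies slightly below $\log\sigma(\nu_i)$, consistent with Jensen's inequality; in an autodiff framework the same expression is differentiated through $\nu$ and $V$.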
Evaluating each summand in (8) requires $O(|S_i|^3) = O(m^3)$ time for obtaining $\hat L_{:,i}$ and $O(m^2\log n)$ time for solving $V^{-1}_{\tilde A_i,\tilde A_i}\hat L_{\tilde A_i,i}$, because $|\tilde A_i| = O(m\log n)$. The $O(m^3)$ cost dominates, as we typically need $m = O(\log^d n)$ for accurate approximations (Schäfer et al., 2021a); for example, in Figure 3a, $|\tilde A_i||S_i|$ is smaller than $|S_i|^3$. Also, $\hat L$ does not need to be pre-computed and stored, as each column $\hat L_{:,i}$ can be computed on the fly; this is especially useful for hyperparameter estimation, for which $p(f)$ and hence $\hat L$ change with the hyperparameters at each gradient-descent iteration.
Figure 3. Reduced ancestor sets (a) are much smaller than full ancestor sets and hence greatly reduce computational cost, but (b)–(c) result in negligible approximation error in the ELBO. (a) Average size of the sparsity sets $S_i$, reduced ancestor sets $\tilde A_i$, and full ancestor sets $A_i$ as a function of $n$ with $d = 5$; for $n = 32{,}000$, we have $|S_i| = 30$, $|\tilde A_i| = 293$, and $|A_i| = 8{,}693$. (b) $\|V^{-1}_{\tilde A_i,\tilde A_i}\hat L_{\tilde A_i,i}\|$ with reduced ancestor sets versus $\|V^{-1}\hat L_{:,i}\|$ for $i = 1, \ldots, n$, with $n = 500$ and $d = 2$. (c) ELBO curves based on full (6) and reduced (8) ancestor sets, as a function of the range parameter with true value 0.1, for $n = 500$ and $d = 2$. In all plots, we set $\rho = 2$ and the $n$ inputs are sampled uniformly on $[0, 1]^d$.
We initialize the optimization using an estimate of $\nu$ and $V$ based on a Vecchia-Laplace approximation of $p(f|y)$ (Zilber & Katzfuss, 2021) combined with an efficient incomplete Cholesky (IC0) approximation of the posterior SIC (Schäfer et al., 2021a). While this initialization itself provides a reasonable approximation to the posterior, hyperparameter estimation for this approach is more difficult, and it is less accurate than DKLGP even for known hyperparameters, as shown in Appendix A.
The ordering and sparsity pattern in Section 2.5 depend on a distance metric, $\mathrm{dist}(x_j, x_i)$, between inputs. We have found that the accuracy of the resulting approximation can be improved substantially by computing the Euclidean distance between inputs in a transformed input space in which the GP kernel is isotropic, as suggested by Katzfuss et al. (2022) and Kang & Katzfuss (2021). For example, consider an automatic relevance determination (ARD) kernel of the form $K(x_i, x_j) = \tilde K(q(x_i, x_j))$, where $\tilde K$ is an isotropic kernel (Matérn 1.5 is used throughout this paper) and $q(x_i, x_j) = \|\tilde x_i - \tilde x_j\|$ is a Euclidean distance based on scaled inputs $\tilde x = (x_1/\lambda_1, \ldots, x_d/\lambda_d)$ with individual ranges or length-scales $\lambda = (\lambda_1, \ldots, \lambda_d)$ for the $d$ input dimensions. In this example, we take $\mathrm{dist}(x_j, x_i) = q(x_i, x_j)$ when computing the sparsity pattern. When the scaled distance and hence the sparsity pattern depend on unknown hyperparameters (e.g., $\lambda$ in the ARD case), we carry out a two-step optimization procedure: first, we run our ELBO optimization for a few epochs based on the sparsity pattern obtained using an initial guess of $\lambda$ to obtain a rough estimate of $\lambda$, which we then use to obtain the final ordering and sparsity pattern and to warm-start our ELBO optimization.
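A minimal sketch of this scaled distance (with illustrative length-scales of our own choosing):

```python
import numpy as np

def scaled_dist(xi, xj, lam):
    """dist(x_j, x_i): Euclidean distance between inputs after scaling each
    dimension by its ARD length-scale, so that the kernel is isotropic in
    the transformed space."""
    lam = np.asarray(lam, dtype=float)
    return float(np.linalg.norm((np.asarray(xi) - np.asarray(xj)) / lam))

# toy length-scales: dimension 1 varies quickly, dimension 2 slowly
lam = np.array([0.25, 1.25])
d_fast = scaled_dist([0.0, 0.0], [0.2, 0.0], lam)   # 0.2 / 0.25
d_slow = scaled_dist([0.0, 0.0], [0.0, 0.5], lam)   # 0.5 / 1.25
```

A small offset along a quickly varying dimension thus yields a larger effective distance than a bigger offset along a slowly varying one, which is exactly what the ordering and sparsity pattern should reflect.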
2.7. Prediction

An important task for (generalized) GPs is prediction at unobserved inputs, meaning that we want to obtain the distribution of $f^*$ at inputs $x^*_1, \ldots, x^*_{n^*}$ given the data $y$. To do so, we consider the joint posterior distribution of $\tilde f = (f^*, f)$, from which any desired marginal distribution can be computed. Since working with the joint covariance matrix $\tilde K$ is again computationally prohibitive, we make a joint SIC assumption on the posterior distribution of $\tilde f$ (with the prediction variables ordered first) that naturally extends the SIC assumption for $f$ in $q(f)$. For the exact posterior, we have $p(\tilde f|y) = p(f^*|f, y)\,p(f|y) = p(f^*|f)\,p(f|y)$. Similarly, we assume $q(\tilde f) = q(f^*|f)\,q(f)$, where $q(f) = N_n(f|\nu, (VV^\top)^{-1})$ was obtained as described in the previous sections, and $q(f^*|f)$ is a sparse approximation of $p(f^*|f)$. For $i = 1, \ldots, n^*$, let $S^*_i \subset \{i, i+1, \ldots, n^*+n\}$ denote the $i$th sparsity set relative to the joint posterior.

We define the approximation to the joint posterior as the minimizer of the expected forward-KL divergence between $p(f^*|f)$ and $q(f^*|f)$ for given $\nu$ and $V$, that is,
$$\hat q(\tilde f) = \arg\min_{q(\tilde f) \in \tilde Q(\nu, V)} \mathbb{E}_p\Big[\mathrm{KL}\big(p(f^*|f)\,\big\|\,q(f^*|f)\big)\Big],$$
where $\tilde Q(\nu, V) = \{N_{n^*+n}\big((\nu^{*\top}, \nu^\top)^\top, (V^*, (0, V^\top)^\top)\big) : \nu^* \in \mathbb{R}^{n^*}, V^* \in \mathbb{R}^{(n^*+n)\times n^*}, V^* \in S^*\}$ and $S^* = \bigcup_{i=1}^{n^*}\{(j, i) : j \in S^*_i\}$. Then the resulting approximation can be obtained in the following manner.

Proposition 2.5. For given $\nu$, $V$, and $S^*$, $\hat q(\tilde f) = N_{n^*+n}(\tilde f\,|\,\tilde\nu, (\tilde V\tilde V^\top)^{-1})$, where $\tilde\nu = (\hat\nu^{*\top}, \nu^\top)^\top$, $\tilde V = (\hat V^*, (0, V^\top)^\top)$, $\hat V^* = (\hat V^{**\top}, \hat V^{o*\top})^\top$,
$$\hat V^*_{S^*_i, i} = c_i\,(c_{i,1})^{-1/2}, \quad \text{with } c_i = K(S^*_i, S^*_i)^{-1}e_1,$$
$$\hat\nu^* = \mu^* - (\hat V^{**})^{-\top}\hat V^{o*\top}(\nu - \mu),$$
and $\mu^* = (\mu(x^*_1), \ldots, \mu(x^*_{n^*}))^\top$.
The posterior distribution of a desired summary, say $a^\top\tilde f$, can then be computed as $q(a^\top\tilde f) = N(a^\top\tilde\nu, \|\tilde V^{-1}a\|^2)$. In particular, the marginal posterior of $f^*_i$ can be obtained using $a = e_i$ as $q(e_i^\top\tilde f) = N(\nu^*_i, \|\tilde V^{-1}e_i\|^2)$.
We again consider an r-maximin ordering and NN sparsity pattern similar to above, but now conditioned on the prediction points being ordered first and the training points ordered after (in the same ordering as before). Once the prediction points are in this conditional r-maximin ordering, we can define $\ell^*_i = \min_{j \geq 1}\mathrm{dist}(x_j, x^*_i) \wedge \min_{j > i}\mathrm{dist}(x^*_j, x^*_i)$ and $S^*_i = \{j + n^* : \mathrm{dist}(x_j, x^*_i) \leq \rho\ell^*_i\} \cup \{j \geq i : \mathrm{dist}(x^*_j, x^*_i) \leq \rho\ell^*_i\}$. This ordering and sparsity pattern can be computed rapidly and were shown to lead to highly accurate approximations; more details can be found in Schäfer et al. (2021a, Section 4.2.1). Note that while computing the prediction variances can be expensive, we can again approximate $\|\tilde V^{-1}e_i\| \approx \|\tilde V^{-1}_{\tilde A^*_i,\tilde A^*_i} e_{i;\tilde A^*_i}\|$ using a reduced ancestor set $\tilde A^*_i = \{j + n^* : \mathrm{dist}(x_j, x^*_i) \leq \rho\ell^*_j\} \cup \{j \geq i : \mathrm{dist}(x^*_j, x^*_i) \leq \rho\ell^*_j\}$.
3. Numerical comparisons

3.1. Methods and comparison setup

We compared the following approaches:

DKLGP: Our method with reduced ancestor sets
DKL-G: DKLGP with global $S^p_i = S^q_i = \{1, \ldots, m\}$
DKL-D: Same as DKLGP but with diagonal $S^q_i = \{i\}$
SVIGP: Stochastic variational GP (Hensman et al., 2013)
VNNGP: Variational nearest-neighbor GP (Wu et al., 2022)

SVIGP and VNNGP are two state-of-the-art variational GP methods, while DKL-G and DKL-D are variants of our DKLGP that resemble SVIGP and VNNGP, respectively. SVIGP assumes independence in $f$ conditional on $m$ global inducing variables. VNNGP scales up the number of inducing points to equal the number of observed input locations, ensuring computational feasibility by assuming that each conditions only on $m$ others a priori, combined with a mean-field approximation to the posterior. We used the GPyTorch (Gardner et al., 2018) implementations of SVIGP and VNNGP. For DKL-G and DKL-D, $A_i = S^p_i$, and so reduced ancestor sets are not necessary. For all methods, computing a term in the ELBO requires $O(m^3)$ time per sample. (Reusing Cholesky factors for all samples in a minibatch is straightforward for SVIGP; similar savings may also be possible for the other methods based on the supernode ideas in Schäfer et al., 2021a.) Hence, $m$ can be viewed as a comparable complexity parameter that trades off computational speed (for small $m$) against accuracy (for large $m$). Thus, for all of our comparisons, we aligned the $m$ for all methods with the average size of $S_i$ for a given $\rho$.
+
Throughout, we assumed f(·) ∼ GP(0, K), where K is
|
| 722 |
+
a Mat´ern1.5 ARD kernel whose variance (set to one for
|
| 723 |
+
simulations) and range (i.e., length-scale) parameters λ were
|
| 724 |
+
estimated. We considered three likelihood types p(yi|fi):
|
| 725 |
+
Gaussian: yi|fi ∼ N(fi, σ2
|
| 726 |
+
ϵ )
|
| 727 |
+
Student-t: yi|fi ∼ T2(fi, σ2
|
| 728 |
+
ϵ ) with 2 degrees of freedom
|
| 729 |
+
Bernoulli-logit: yi|fi ∼ B((1 + e−fi)−1)
|
| 730 |
+
The noise variance was estimated from the data; for simula-
|
| 731 |
+
tions, we used σϵ = 0.1 except where specified otherwise.
|
| 732 |
+
For estimation of hyperparameters, the initial values for λ,
|
| 733 |
+
σ2
|
| 734 |
+
ϵ, and the variance in K were all 0.25. DKLGP and its
|
| 735 |
+
variants ran the Adam optimizer for 35 epochs. SVIGP
|
| 736 |
+
and VNNGP used natural gradient descent and Adam, re-
|
| 737 |
+
spectively, as their optimizer for 500 epochs as suggested
|
| 738 |
+
in Wu et al. (2022). The minibatch size was 128. A multi-
|
| 739 |
+
step scheduler with a scaling factor of 0.1 was used for all
|
| 740 |
+
methods.
|
| 741 |
+
3.2. Visual comparison in one dimension
|
| 742 |
+
Figure 4 provides a visual comparison of SVIGP, VNNGP,
|
| 743 |
+
and DKLGP predictions for a toy example in one dimension.
|
| 744 |
+
We also included predictions from the (optimal) exact GP
|
| 745 |
+
(DenseGP). DKLGP approximated the optimal DenseGP
|
| 746 |
+
most closely, especially in terms of the prediction inter-
|
| 747 |
+
vals. SVIGP oversmoothed heavily and produced very wide
|
| 748 |
+
intervals. VNNGP assumes a diagonal covariance in the
|
| 749 |
+
variational distribution q(f), which appears to have caused
|
| 750 |
+
sharply fluctuating predictions and narrow intervals. Fig-
|
| 751 |
+
ure 9 in Appendix A shows similar comparisons for Student-
|
| 752 |
+
t and Bernoulli likelihoods.
|
| 753 |
+
3.3. Simulation study
|
| 754 |
+
Next, we carried out a more comprehensive comparison
|
| 755 |
+
based on 10,000 locations randomly distributed in the
|
| 756 |
+
unit hypercube, [0, 1]5, with true range parameters λ =
|
| 757 |
+
(0.25, 0.50, 0.75, 1.00, 1.25). We used n = 8,000 locations
|
| 758 |
+
for training and 2,000 for testing. Performance was mea-
|
| 759 |
+
sured in terms of the variational inference of the latent field
|
| 760 |
+
f(·) at training and testing inputs. For each scenario, results
|
| 761 |
+
over five replicates were produced and averaged.
|
| 762 |
+
Figure 5 compares root mean squared error (RMSE) and
|
| 763 |
+
negative log-likelihood (NLL) at testing locations. Under
|
| 764 |
+
the Gaussian and Student-t likelihoods, DKLGP produced
|
| 765 |
+
the most accurate predictions, while under the Bernoulli-
|
| 766 |
+
logit likelihood, SVIGP and DKLGP appeared similarly
|
| 767 |
+
accurate. DKLGP, DKL-G, and SVIGP all improved with
|
| 768 |
+
|
| 769 |
+
Approximating latent GPs via double SIC-KL-minimization
|
| 770 |
+
0.0
|
| 771 |
+
0.2
|
| 772 |
+
0.4
|
| 773 |
+
0.6
|
| 774 |
+
0.8
|
| 775 |
+
1.0
|
| 776 |
+
-4
|
| 777 |
+
-3
|
| 778 |
+
-2
|
| 779 |
+
-1
|
| 780 |
+
0
|
| 781 |
+
1
|
| 782 |
+
2
|
| 783 |
+
0.64
|
| 784 |
+
0.66
|
| 785 |
+
0.68
|
| 786 |
+
0.70
|
| 787 |
+
0.72
|
| 788 |
+
0.74
|
| 789 |
+
0.76
|
| 790 |
+
-2.0
|
| 791 |
+
-1.8
|
| 792 |
+
-1.6
|
| 793 |
+
-1.4
|
| 794 |
+
-1.2
|
| 795 |
+
-1.0
|
| 796 |
+
y
|
| 797 |
+
f
|
| 798 |
+
DKL
|
| 799 |
+
SVI
|
| 800 |
+
VNN
|
| 801 |
+
DenseGP
|
| 802 |
+
Figure 4. Comparison of exact GP predictions (DenseGP) to three variational GP approximations for simulated data with Gaussian noise
|
| 803 |
+
at n = 200 randomly sampled training locations on [0, 1] with σϵ = 0.3 and true range λ = 0.1. We show the means (solid lines) and
|
| 804 |
+
95% pointwise intervals of the posterior predictive distribution f ∗|y at 200 regularly spaced testing locations. The right plot zooms into a
|
| 805 |
+
smaller region of the left plot to highlight the differences.
|
| 806 |
+
0.20
|
| 807 |
+
0.40
|
| 808 |
+
0.60
|
| 809 |
+
0.80
|
| 810 |
+
RMSE
|
| 811 |
+
Gaussian
|
| 812 |
+
0.20
|
| 813 |
+
0.40
|
| 814 |
+
0.60
|
| 815 |
+
0.80
|
| 816 |
+
Student-t
|
| 817 |
+
0.50
|
| 818 |
+
0.75
|
| 819 |
+
1.00
|
| 820 |
+
1.25
|
| 821 |
+
1.50
|
| 822 |
+
Bernoulli-logit
|
| 823 |
+
1.0
|
| 824 |
+
1.5
|
| 825 |
+
2.0
|
| 826 |
+
rho
|
| 827 |
+
-2.00
|
| 828 |
+
0.00
|
| 829 |
+
2.00
|
| 830 |
+
4.00
|
| 831 |
+
NLL
|
| 832 |
+
1.0
|
| 833 |
+
1.5
|
| 834 |
+
2.0
|
| 835 |
+
rho
|
| 836 |
+
-2.00
|
| 837 |
+
0.00
|
| 838 |
+
2.00
|
| 839 |
+
4.00
|
| 840 |
+
1.0
|
| 841 |
+
1.5
|
| 842 |
+
2.0
|
| 843 |
+
rho
|
| 844 |
+
0.00
|
| 845 |
+
2.00
|
| 846 |
+
4.00
|
| 847 |
+
DKL-G
|
| 848 |
+
DKL-D
|
| 849 |
+
DKL
|
| 850 |
+
SVI
|
| 851 |
+
VNN
|
| 852 |
+
Figure 5. RMSE (top) and NLL (bottom) for predicting the latent field at testing locations based on simulated data in a five-dimensional
|
| 853 |
+
input domain, as a function of the complexity parameter ρ
|
| 854 |
+
|
| 855 |
+
Approximating latent GPs via double SIC-KL-minimization
|
| 856 |
+
Table 1. RMSE, NLL at held-out test points for several UCI datasets, ordered from low to high dimension d. The Student-t and
|
| 857 |
+
Bernoulli-logit likelihoods were applied to Precip and Covtype, respectively; a Gaussian likelihood was used for all other datasets.
|
| 858 |
+
3DROAD
|
| 859 |
+
PRECIP
|
| 860 |
+
KIN40K
|
| 861 |
+
PROTEIN
|
| 862 |
+
BIKE
|
| 863 |
+
ELEVATORS
|
| 864 |
+
KEGG
|
| 865 |
+
KEGGU
|
| 866 |
+
COVTYPE
|
| 867 |
+
(n, d)
|
| 868 |
+
(65K, 3)
|
| 869 |
+
(85K, 3)
|
| 870 |
+
(40K, 8)
|
| 871 |
+
(44K, 9)
|
| 872 |
+
(17K, 17)
|
| 873 |
+
(17K, 18)
|
| 874 |
+
(16K, 20)
|
| 875 |
+
(18K, 26)
|
| 876 |
+
(100K, 53)
|
| 877 |
+
SVI
|
| 878 |
+
.80
|
| 879 |
+
.28
|
| 880 |
+
.91
|
| 881 |
+
.44
|
| 882 |
+
.62
|
| 883 |
+
.03
|
| 884 |
+
.82
|
| 885 |
+
0.30
|
| 886 |
+
.08
|
| 887 |
+
-1.88
|
| 888 |
+
.39
|
| 889 |
+
-.45
|
| 890 |
+
.07
|
| 891 |
+
-2.14
|
| 892 |
+
.06
|
| 893 |
+
-2.20
|
| 894 |
+
.50
|
| 895 |
+
NA
|
| 896 |
+
VNN
|
| 897 |
+
.28
|
| 898 |
+
2.16
|
| 899 |
+
.49
|
| 900 |
+
4.40
|
| 901 |
+
.57
|
| 902 |
+
25.36
|
| 903 |
+
.70
|
| 904 |
+
5.39
|
| 905 |
+
.49
|
| 906 |
+
7.74
|
| 907 |
+
.67
|
| 908 |
+
1.38
|
| 909 |
+
.12
|
| 910 |
+
0.71
|
| 911 |
+
.15
|
| 912 |
+
5.85
|
| 913 |
+
NA
|
| 914 |
+
NA
|
| 915 |
+
DKL
|
| 916 |
+
.27
|
| 917 |
+
-.83
|
| 918 |
+
.41
|
| 919 |
+
-.42
|
| 920 |
+
.37
|
| 921 |
+
-.53
|
| 922 |
+
.56
|
| 923 |
+
-.18
|
| 924 |
+
.12
|
| 925 |
+
-1.50
|
| 926 |
+
.39
|
| 927 |
+
-.42
|
| 928 |
+
.08
|
| 929 |
+
-1.97
|
| 930 |
+
.14
|
| 931 |
+
-1.95
|
| 932 |
+
.28
|
| 933 |
+
NA
|
| 934 |
+
increasing ρ as expected, but the mean-field approximations
|
| 935 |
+
(VNNGP and DKL-D) generally did not.
|
| 936 |
+
For completeness, we also plotted the RMSE and NLL at training locations in Figure 8 in Appendix A. Consistent with the results from Figure 5, DKLGP had the best score in most scenarios. VNNGP performed similarly to DKLGP under the Gaussian and Student-t likelihoods but underestimated the variance at testing locations, resulting in poor NLL scores. Note that variational methods are generally known to underestimate the variance of the posterior distribution (Blei et al., 2017).
3.4. Real data

For a more comprehensive comparison of SVIGP, VNNGP, and DKLGP, we considered datasets from the UCI data repository commonly used for benchmarking LGP models. For all datasets, covariates were first standardized to [0, 1] and removed if the standard deviation after standardization was smaller than 0.01. Furthermore, locations were filtered to guarantee that the minimum pairwise distance was greater than 0.001, to prevent numerical singularity. Approximately 20% of each dataset was used for testing. We used m ≈ 10 for all methods. As Section 3.3 demonstrated the advantage of DKLGP over DKL-G and DKL-D, we excluded the two DKL variants here for clarity.

Table 1 summarizes the performance of the three methods across nine datasets. DKLGP had better scores than VNNGP for all datasets except Covtype, for which VNNGP ran out of memory on a 64GB node despite the data having been reduced to a subset of size 100K. Relative to SVIGP, DKLGP had substantially better performance for the binary Covtype data and for low-dimensional (d < 10) settings, and roughly similar performance for most higher-dimensional datasets except the KEGGU data, for which SVIGP produced much lower RMSE than DKLGP. However, this does not appear to be due to DKLGP providing a less accurate approximation to the exact GP, but rather due to the exact GP (with its simple ARD kernel) being severely misspecified for the KEGGU data. To explore this further, we fit the exact GP (DenseGP) to the KEGGU data. The DenseGP RMSE was 0.14 (the same as for DKLGP), and the root average squared distance between the DenseGP predictions and the DKLGP and SVIGP predictions was 0.05 and 0.13, respectively, meaning that the DKLGP predictions were a much better approximation of the exact predictions than the SVIGP predictions. VNNGP provided better point predictions than SVIGP for the lower-dimensional datasets, consistent with the results in Wu et al. (2022); however, VNNGP's NLL was high due to its underestimation of posterior variance.
4. Conclusions

We have introduced a variational approach using a variational family and approximate prior based on SIC restrictions. Maximin ordering, a nearest-neighbor sparsity pattern, and reduced ancestor sets together result in efficient and accurate inference and prediction for LGPs. While the time complexity is cubic in the number of neighbors, quadratic complexity for the prior approximation can be achieved by grouping observations and re-using Cholesky factors (Schäfer et al., 2021a); we will investigate an extension of this idea to computing the ELBO in our variational setting. Although we here assume that the input domain is Euclidean, our method can be applied more generally; using a correlation-based distance instead of Euclidean distance (Kang & Katzfuss, 2021), one can use our method to perform LGP inference for large data on complex domains (cf. Tibo & Nielsen, 2022). We will also explore extensions to deep GPs (cf. Sauer et al., 2022). An implementation of our method, along with code to reproduce all results, will be made publicly available on GitHub.
|
| 1004 |
+
Approximating latent GPs via double SIC-KL-minimization
|
| 1005 |
+
References
|
| 1006 |
+
Banerjee, S., Carlin, B. P., and Gelfand, A. E. Hierarchical
|
| 1007 |
+
Modeling and Analysis for Spatial Data. Chapman &
|
| 1008 |
+
Hall, 2004.
|
| 1009 |
+
Bao, J. Y., Ye, F., and Yang, Y. Screening effect in isotropic
|
| 1010 |
+
Gaussian processes. Acta Mathematica Sinica, English
|
| 1011 |
+
Series, 36(5):512–534, 2020. ISSN 14397617. doi: 10.
|
| 1012 |
+
1007/s10114-020-7300-5.
|
| 1013 |
+
Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. Varia-
|
| 1014 |
+
tional inference: a review for statisticians. Journal of
|
| 1015 |
+
the American statistical Association, 112(518):859–877,
|
| 1016 |
+
2017.
|
| 1017 |
+
Cao, J., Guinness, J., Genton, M. G., and Katzfuss, M. Scal-
|
| 1018 |
+
able Gaussian-process regression and variable selection
|
| 1019 |
+
using Vecchia approximations. arXiv:2202.12981, 2022.
|
| 1020 |
+
Chan, A. B. and Dong, D. Generalized gaussian process
|
| 1021 |
+
models. In CVPR, pp. 2681–2688, 2011.
|
| 1022 |
+
Cressie, N. and Wikle, C. K. Statistics for Spatio-Temporal
|
| 1023 |
+
Data. Wiley, Hoboken, NJ, 2011.
|
| 1024 |
+
Datta, A., Banerjee, S., Finley, A. O., and Gelfand, A. E.
|
| 1025 |
+
Hierarchical nearest-neighbor Gaussian process models
|
| 1026 |
+
for large geostatistical datasets. Journal of the American
|
| 1027 |
+
Statistical Association, 111(514):800–812, 2016. ISSN
|
| 1028 |
+
0162-1459. doi: 10.1080/01621459.2015.1044091. URL
|
| 1029 |
+
http://arxiv.org/abs/1406.7343.
|
| 1030 |
+
Deisenroth, M. P. Efficient reinforcement learning using
|
| 1031 |
+
Gaussian processes, volume 9. KIT Scientific Publishing,
|
| 1032 |
+
2010.
|
| 1033 |
+
Diggle, P., Tawn, J., and Moyeed, R. Model-based geostatis-
|
| 1034 |
+
tics. Journal of the Royal Statistical Society, Series C, 47
|
| 1035 |
+
(3):299–350, 1998.
|
| 1036 |
+
Frigola, R., Chen, Y., and Rasmussen, C. E. Variational
|
| 1037 |
+
Gaussian process state-space models. Advances in neural
|
| 1038 |
+
information processing systems, 27, 2014.
|
| 1039 |
+
Gardner, J. R., Pleiss, G., Bindel, D., Weinberger, K. Q., and
|
| 1040 |
+
Wilson, A. G. GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. In Advances in Neural Information Processing Systems, 2018.

Geoga, C. J. and Stein, M. L. A scalable method to exploit screening in Gaussian process models with noise. arXiv:2208.06877, 2022. URL http://arxiv.org/abs/2208.06877.

Gramacy, R. B. Surrogates: Gaussian process modeling, design, and optimization for the applied sciences. Chapman and Hall/CRC, 2020.

Heaton, M. J., Datta, A., Finley, A. O., Furrer, R., Guinness, J., Guhaniyogi, R., Gerber, F., Gramacy, R. B., Hammerling, D. M., Katzfuss, M., Lindgren, F., Nychka, D. W., Sun, F., and Zammit-Mangion, A. A case study competition among methods for analyzing large spatial data. Journal of Agricultural, Biological, and Environmental Statistics, 24(3):398–425, 2019. doi: 10.1007/s13253-018-00348-w. URL http://arxiv.org/abs/1710.05013.

Hensman, J., Fusi, N., and Lawrence, N. D. Gaussian processes for big data. In Uncertainty in Artificial Intelligence - Proceedings of the 29th Conference, UAI 2013, pp. 282–290, 2013.

Hensman, J., Matthews, A. G., and Ghahramani, Z. Scalable variational Gaussian process classification. In Artificial Intelligence and Statistics (AISTATS), volume 38, pp. 351–360, 2015.

Kang, M. and Katzfuss, M. Correlation-based sparse inverse Cholesky factorization for fast Gaussian-process inference. arXiv:2112.14591, 2021.

Katzfuss, M. and Guinness, J. A general framework for Vecchia approximations of Gaussian processes. Statistical Science, 36(1):124–141, 2021. doi: 10.1214/19-STS755. URL http://arxiv.org/abs/1708.06302.

Katzfuss, M., Guinness, J., and Lawrence, E. Scaled Vecchia approximation for fast computer-model emulation. SIAM/ASA Journal on Uncertainty Quantification, 10(2):537–554, 2022. doi: 10.1137/20M1352156. URL http://arxiv.org/abs/2005.00386.

Kennedy, M. C. and O'Hagan, A. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B, 63(3):425–464, 2001. ISSN 1369-7412. doi: 10.1111/1467-9868.00294. URL http://doi.wiley.com/10.1111/1467-9868.00294.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings, 2014.

Leibfried, F., Dutordoir, V., John, S., and Durrande, N. A tutorial on sparse Gaussian processes and variational inference. arXiv preprint arXiv:2012.13962, 2020.

Liu, H., Ong, Y.-S., Shen, X., and Cai, J. When Gaussian process meets big data: A review of scalable GPs. IEEE Transactions on Neural Networks and Learning Systems, 2020. doi: 10.1109/TNNLS.2019.2957109. URL http://arxiv.org/abs/1807.01065.
Approximating latent GPs via double SIC-KL-minimization
Liu, L. and Liu, L. Amortized variational inference with graph convolutional networks for Gaussian processes. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 2291–2300. PMLR, 2019.

MacKay, D. J. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.

Nickisch, H. and Rasmussen, C. E. Approximations for binary Gaussian process classification. Journal of Machine Learning Research, 9:2035–2078, 2008. ISSN 1532-4435. URL http://www.jmlr.org/papers/volume9/nickisch08a/nickisch08a.pdf.

Rasmussen, C. E. and Williams, C. K. I. Gaussian Processes for Machine Learning. MIT Press, 2006. ISBN 026218253X. URL http://www.gaussianprocess.org/gpml/chapters/RW.pdf.

Sacks, J., Welch, W., Mitchell, T., and Wynn, H. Design and analysis of computer experiments. Statistical Science, 4(4):409–435, 1989. ISSN 2168-8745. doi: 10.2307/2246134.

Sauer, A., Cooper, A., and Gramacy, R. B. Vecchia-approximated deep Gaussian processes for computer experiments. arXiv:2204.02904, 2022. URL http://arxiv.org/abs/2204.02904.

Schäfer, F., Katzfuss, M., and Owhadi, H. Sparse Cholesky factorization by Kullback-Leibler minimization. SIAM Journal on Scientific Computing, 43(3):A2019–A2046, 2021a. doi: 10.1137/20M1336254.

Schäfer, F., Sullivan, T. J., and Owhadi, H. Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. Multiscale Modeling & Simulation, 19(2):688–730, 2021b. doi: 10.1137/19M129526X.

Stein, M. L. Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York, NY, 1999. ISBN 0387986294.

Stein, M. L. When does the screening effect hold? Annals of Statistics, 39(6):2795–2819, 2011a. ISSN 0090-5364. doi: 10.1214/11-AOS909. URL http://projecteuclid.org/euclid.aos/1327413769.

Stein, M. L. 2010 Rietz lecture: When does the screening effect hold? Annals of Statistics, 39(6):2795–2819, 2011b. ISSN 0090-5364. doi: 10.1214/11-AOS909.

Stein, M. L. Limitations on low rank approximations for covariance matrices of spatial data. Spatial Statistics, 8:1–19, 2014. ISSN 2211-6753. doi: 10.1016/j.spasta.2013.06.003. URL https://linkinghub.elsevier.com/retrieve/pii/S2211675313000390.

Stein, M. L., Chi, Z., and Welty, L. Approximating likelihoods for large spatial data sets. Journal of the Royal Statistical Society: Series B, 66(2):275–296, 2004. URL http://www3.interscience.wiley.com/journal/118808457/abstract.

Tibo, A. and Nielsen, T. D. Inducing Gaussian process networks. arXiv preprint arXiv:2204.09889, 2022.

Titsias, M. K. Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics (AISTATS), volume 5, pp. 567–574, 2009.

Vecchia, A. Estimation and model identification for continuous spatial processes. Journal of the Royal Statistical Society, Series B, 50(2):297–312, 1988. URL http://www.jstor.org/stable/10.2307/2345768.

Wu, L., Pleiss, G., and Cunningham, J. Variational nearest neighbor Gaussian processes. arXiv:2202.01694, 2022. URL http://arxiv.org/abs/2202.01694.

Zilber, D. and Katzfuss, M. Vecchia-Laplace approximations of generalized Gaussian processes for big non-Gaussian spatial data. Computational Statistics & Data Analysis, 153:107081, 2021. doi: 10.1016/j.csda.2020.107081. URL http://arxiv.org/abs/1906.07828.
A. Additional numerical results
Complementing Figure 3a, Figure 6 shows that reduced ancestor sets Ãi are much smaller than full ancestor sets Ai across a range of values for ρ.

[Figure 6: average set size (0 to 3,500) plotted against ρ (1.0 to 3.0) for the sparsity, reduced-ancestor, and full-ancestor sets.]

Figure 6. Average size of the sparsity sets Si, reduced ancestor sets Ãi, and full ancestor sets Ai as a function of ρ, for n = 8,000 inputs sampled uniformly on [0, 1]^5.
Figure 7 suggests that initializing ν using the Vecchia-Laplace approximation and IC0 provides informative starting values, which can be further refined by optimizing the ELBO.

[Figure 7: scatter plots of the initial (left) and estimated (right) posterior means against the true posterior mean, each on the range −2 to 2.]

Figure 7. Posterior mean at initialization (i.e., the IC0 solution) and after optimization for simulated Gaussian data. The setting is the same as in Figure 8.
In the setting of Figure 5, Figure 8 compares scores for the posterior marginals of the entries of f at the training input locations. In contrast to Figure 5, VNNGP performed similarly to DKLGP and outperformed SVIGP for the Gaussian and Student-t likelihoods. Furthermore, the Vecchia-Laplace approximation with IC0 (used as the initialization for DKLGP) is usually the third-best model in the comparison, pointing to an advantage of using the SIC structure for L and V.

Figure 9 shows a visual comparison of predictions in the simulated-data setting of Section 3.2, but with Student-t and Bernoulli-logit likelihoods.
B. Graph representation of sparsity patterns and ancestor sets

We illustrate the sparsity patterns and ancestor sets here using their graph representations. As pointed out by Katzfuss & Guinness (2021), the sparsity patterns can be represented by directed acyclic graphs (DAGs), which also allows straightforward visualization of ancestor sets. Figure 10 presents sparsity patterns and ancestor sets for three selected points
[Figure 8: RMSE (top row) and NLL (bottom row) versus ρ (1.0 to 2.0) for Gaussian, Student-t, and Bernoulli-logit likelihoods; compared methods: DKL-G, DKL-D, DKL, SVI, VNN.]
Figure 8. RMSE and NLL for predicting the latent field at training locations, in the same setting as Figure 5. n = 8,000 training locations were randomly sampled in the unit hypercube, [0, 1]^5, with true range parameters λ = (0.25, 0.50, 0.75, 1.00, 1.25). The green dotted lines are the scores of the initial model using the Vecchia-Laplace approximation and IC0 before optimization.
[Figure 9: two 1D panels on [0, 1] showing the data y, the latent f, and the DKL, SVI, VNN, and DenseGP predictions.]
Figure 9. Comparison of variational GP approximations to the means (solid) and 95% intervals (dashed) of the posterior predictive distribution f*|y, under (left) Student-t and (right) Bernoulli-logit likelihoods in 1D. Here, n = 200 locations were randomly chosen for training and another 200 locations on a grid were used for testing. The 'DenseGP' result is only available for the Gaussian likelihood in Figure 4.
(i = 12, 4, 1) of 16 grid points in the unit square. For example, x1 = (1/3, 1) and x16 = (2/3, 2/3). One can easily see that ℓ16 = ∞, ℓ15 = 2√2/3, ℓ14 = ℓ13 = √((1/3)² + (2/3)²), ℓ12 = ℓ11 = √2/3, and ℓ10 = · · · = ℓ1 = 1/3. The edges of the graphs corresponding to the ancestor sets A12, A4 and A1 are denoted by the black curved arrows. Specifically, the sparsity set S1 = {2, 7, 13}, the reduced ancestor set Ã1 = S1 ∪ {9, 11, 12}, and the (full) ancestor set A1 = Ã1 ∪ {15, 16}. Note that A1 contains Ã1, which is a desirable property for leveraging the screening effect in GPs (Stein, 2011b; Bao et al., 2020). This is not always the case for small-scale problems (n < 10^4), and it depends on the distribution of the points, as shown in Figure 10b. Specifically, A4 = {10, 11, 14, 15, 16}, but Ã4 \ A4 = {13} ≠ ∅. But our numerical studies suggest that the sets Ãi \ Ai are typically empty or very small for large-scale problems, for which computational issues are most severe and hence our method is most likely to be used. For the relatively large index i = 12, S12 = Ã12 = A12 = {16}. As illustrated here, all the reduced ancestor sets include x16, since ℓ16 = ∞. Otherwise, unlike Ã4 and Ã1, Ã12 does not include x15, since dist(x15, x12) = √2 is larger than ρℓ15 ≈ 1.226.
[Figure 10: three panels showing the 16 ordered grid points with the corresponding arrows; (a) i = 12, (b) i = 4, (c) i = 1.]
Figure 10. Reverse maximin ordering on a grid (small gray points) of size n = 4 × 4 = 16 on a unit square, [0, 1]^d with d = 2. Shown are the i-th ordered input (▲), the subsequently ordered n − i inputs (•), the distance ℓi to the nearest neighbor (−), the neighboring subsequent inputs Si (■) within a (yellow) circle of radius ρℓi, with ρ = 1.3, the reduced ancestors Ãi (+), and the ancestors Ai (×). The directed acyclic graphs of the sparsity patterns are denoted by curved arrows.
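For concreteness, the reverse maximin ordering, the length scales ℓi, and the sparsity sets Si described in this appendix can be reproduced with a simple greedy procedure. The following sketch is our own minimal O(n²)-memory implementation with arbitrary tie-breaking (function names are ours; it is not the near-linear construction used in practice):

```python
import numpy as np

def reverse_maximin(X):
    """Greedy reverse maximin ordering of the rows of X.

    Returns `order` (indices into X, first-to-last) and `ell`, where
    ell[i] is the distance from the i-th ordered point to its nearest
    neighbor among the subsequently ordered points (ell[-1] = inf).
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    remaining = list(range(n))
    order = [remaining.pop(0)]   # last point chosen arbitrarily
    ell = [np.inf]
    while remaining:
        # next-earlier point: the remaining point farthest from the
        # points already chosen (which come after it in the ordering)
        d = D[np.ix_(remaining, order)].min(axis=1)
        k = int(np.argmax(d))
        order.append(remaining.pop(k))
        ell.append(d[k])
    return np.array(order[::-1]), np.array(ell[::-1])

def sparsity_sets(X, order, ell, rho):
    """S_i: positions (in the ordering) of subsequent points within rho*ell_i."""
    Xo = X[order]
    S = []
    for i in range(len(order)):
        d = np.linalg.norm(Xo[i + 1:] - Xo[i], axis=1)
        S.append(i + 1 + np.flatnonzero(d <= rho * ell[i]))
    return S

# the 4 x 4 grid from Figure 10
g = np.linspace(0.0, 1.0, 4)
X = np.array([[a, b] for a in g for b in g])
order, ell = reverse_maximin(X)
S = sparsity_sets(X, order, ell, rho=1.3)
```

The exact ordering (and hence the specific sets such as S1 = {2, 7, 13}) depends on tie-breaking, but the qualitative properties hold: ℓ is non-decreasing along the ordering, the smallest ℓi equals the grid spacing 1/3, and ℓn = ∞.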
C. Proofs
Proof of Proposition 2.1. We have

ELBO(q) = E_q log p(y|f) − KL(q(f) ∥ p(f)),

where E_q log p(y|f) = Σ_{i=1}^n E_q log p(y_i|f_i). Using a well-known expression for the KL divergence between two Gaussian distributions, we have

2 KL(q(f) ∥ p(f)) = tr((LL⊤)(VV⊤)⁻¹) + (ν − µ)⊤(LL⊤)(ν − µ) + log |VV⊤| − log |LL⊤| − n,   (9)

where log |VV⊤| = 2 Σ_{i=1}^n log V_{ii}, log |LL⊤| = 2 Σ_{i=1}^n log L_{ii}, (ν − µ)⊤(LL⊤)(ν − µ) = Σ_{i=1}^n ((ν − µ)⊤L_{:,i})², L_{:,i} denotes the i-th column of L, and

tr((LL⊤)(VV⊤)⁻¹) = tr((V⁻¹L)⊤(V⁻¹L)) = Σ_{i=1}^n (V⁻¹L_{:,i})⊤(V⁻¹L_{:,i}) = Σ_{i=1}^n ∥V⁻¹L_{:,i}∥². □
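The identity (9) and the column-wise expressions above can be checked numerically. The sketch below is our own construction: L and V are dense lower-triangular matrices interpreted as inverse Cholesky factors, so p = N(µ, (LL⊤)⁻¹) and q = N(ν, (VV⊤)⁻¹):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_inverse_cholesky(rng, n):
    # lower-triangular with positive diagonal; interpreted as an
    # inverse Cholesky factor, i.e. covariance = (M M^T)^{-1}
    M = np.tril(rng.standard_normal((n, n)), k=-1)
    return M + np.diag(rng.uniform(0.5, 1.5, size=n))

L = random_inverse_cholesky(rng, n)   # prior p = N(mu, (L L^T)^{-1})
V = random_inverse_cholesky(rng, n)   # variational q = N(nu, (V V^T)^{-1})
mu = rng.standard_normal(n)
nu = rng.standard_normal(n)

# right-hand side of (9), using the column-wise identities in the proof
cols = np.linalg.solve(V, L)                  # columns V^{-1} L_{:,i}
two_kl = (np.sum(cols**2)                     # tr((LL^T)(VV^T)^{-1})
          + np.sum(((nu - mu) @ L)**2)        # (nu-mu)^T (LL^T) (nu-mu)
          + 2 * np.sum(np.log(np.diag(V)))    # log|VV^T|
          - 2 * np.sum(np.log(np.diag(L)))    # -log|LL^T|
          - n)

# direct evaluation of 2 KL(q || p) from the standard Gaussian formula
Sq = np.linalg.inv(V @ V.T)                   # covariance of q
P = L @ L.T                                   # precision of p
direct = (np.trace(P @ Sq)
          + (nu - mu) @ P @ (nu - mu)
          - n
          - np.linalg.slogdet(P)[1]           # + log|Sigma_p|
          + np.linalg.slogdet(V @ V.T)[1])    # - log|Sigma_q|
```

The two evaluations agree up to floating-point error.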
Proof of Proposition 2.2. Using a well-known formula for the KL divergence between two Gaussian distributions (e.g., see (9)), we have

KL(p(f) ∥ p̃(f)) = (µ̃ − µ)⊤(L̃L̃⊤)(µ̃ − µ)/2 + KL(N_n(0, K) ∥ N_n(0, (L̃L̃⊤)⁻¹)),

which is minimized with respect to µ̃ by µ̃ = µ, the exact prior mean. Plugging this in, the first summand is zero, and the second summand was shown in Schäfer et al. (2021a, Thm. 2.1) to be minimized by an inverse Cholesky factor L̂ whose i-th column can be computed in parallel for i = 1, ..., n as

L̂_{S^p_i, i} = b_i / √(b_{i,1}),  with  b_i = K⁻¹_{S^p_i, S^p_i} e_1. □
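With the full lower-triangular pattern S^p_i = {i, ..., n}, the KL divergence can be driven to zero, and the column formula above yields the exact inverse Cholesky factor, i.e. L̂L̂⊤ = K⁻¹. The following sketch (our own variable names, with a synthetic SPD covariance K) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # a well-conditioned SPD "covariance"

# column-by-column construction with the full pattern S_i^p = {i, ..., n}
Lhat = np.zeros((n, n))
for i in range(n):
    Kss = K[i:, i:]                                  # K_{S_i^p, S_i^p}
    b = np.linalg.solve(Kss, np.eye(n - i)[:, 0])    # b_i = K_{S,S}^{-1} e_1
    Lhat[i:, i] = b / np.sqrt(b[0])                  # i-th column of Lhat
```

Each column depends only on its own principal submatrix of K, so the loop is embarrassingly parallel, as stated in the proof.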
Proof of Proposition 2.3. We have

V⁻¹L̂_{:,i} = [ V_{1:i−1,1:i−1}  0 ; V_{i:n,1:i−1}  V_{i:n,i:n} ]⁻¹ [ 0 ; L̂_{i:n,i} ] = [ 0 ; V⁻¹_{i:n,i:n} L̂_{i:n,i} ].

Let X be the inverse of V_{i:n,i:n}. Then,

(V⁻¹L̂_{:,i})_j = (1/V_{j,j}) ( L̂_{j,i} − L̂_{j−1,i} Σ_{r=j−1}^{j−1} V_{j,r} X_{r−i+1,j−i} − · · · − L̂_{i,i} Σ_{r=i}^{j−1} V_{j,r} X_{r−i+1,1} ).

Since S^p_i ⊂ A_i, we have L̂_{j,i} = 0 for j ∉ A_i. Also, from the definition of A_i, it can be shown for j ∉ A_i that L̂_{j−1,i} Σ_{r=j−1}^{j−1} V_{j,r} X_{r−i+1,j−i} = · · · = L̂_{i,i} Σ_{r=i}^{j−1} V_{j,r} X_{r−i+1,1} = 0. For instance, suppose j = i + 1 ∉ A_i. Then, (V⁻¹L̂_{:,i})_{i+1} = (1/V_{i+1,i+1}) ( L̂_{i+1,i} − L̂_{i,i} V_{i+1,i} X_{1,1} ) = 0, since L̂_{i+1,i} = V_{i+1,i} = 0. Therefore, (V⁻¹L̂_{:,i})_j = 0 for all j ∉ A_i. □
Justification for Claim 2.4. We now provide theoretical justification for our claim that the entries of the vector V⁻¹L̂_{:,i} are small outside of Ãi, with magnitudes that decay exponentially as a function of ρ, for each i = 1, ..., n. In other words, our claim is that for j ≥ i,

log |(V⁻¹L̂_{:,i})_j| ⪅ log(n) − dist(x_j, x_i)/ℓ_j.

By the results on exponential screening in Schäfer et al. (2021b), the matrix L̂ satisfies the above decay property for covariances that are Green's functions of elliptic PDEs. It even satisfies the stronger property with ℓ_j replaced by ℓ_i.

For a Gaussian likelihood, the matrix V satisfies

VV⊤ = L̂L̂⊤ + R⁻¹ =: Σ⁻¹,   (10)

where R is a diagonal covariance matrix of the likelihood. Interpreted as a PDE, the diagonal matrix R⁻¹ corresponds to a zero-order term. Thus, the associated covariance matrix (L̂L̂⊤)⁻¹ behaves like a discretized elliptic Green's function and is therefore subject to an exponential screening effect (Schäfer et al., 2021a, Section 4.1). Let P↕ denote the permutation matrix that reverses the order of the degrees of freedom. Since P↕V⁻⊤P↕ is lower triangular and

P↕ΣP↕ = P↕V⁻⊤P↕ P↕V⁻¹P↕ = (P↕V⁻⊤P↕)(P↕V⁻⊤P↕)⊤,

the matrix P↕V⁻⊤P↕ is the Cholesky factor of Σ in the maximin (as opposed to the reverse maximin) ordering. In Schäfer et al. (2021b), it is shown that discretized Green's functions of elliptic PDEs have exponentially decaying Cholesky factors in the maximin ordering. In particular, the results of Schäfer et al. (2021b) suggest that

∀ j ≥ i: log |(P↕V⁻⊤P↕)_{ji}| ⪅ log(n) − dist(x_j, x_i)/ℓ_i
⇒ ∀ j ≥ i: log |(V⁻¹)_{ji}| ⪅ log(n) − dist(x_j, x_i)/ℓ_j.

As shown, for instance, in Schäfer et al. (2021b, Lemma 5.19), products of matrices that decay rapidly with respect to a distance function dist(·, ·) on their index set inherit this decay property. To this end, assume that the lower triangular matrices A and B satisfy this property. We then have

log |(AB)_{ji}| = log |Σ_k A_{jk} B_{ki}| ≤ log(n) + log( max_k |A_{jk} B_{ki}| ) ⪅ log(n) − min_k ( dist(x_j, x_k)/ℓ_j + dist(x_k, x_i)/ℓ_k ).

By the triangle inequality, we have dist(x_j, x_k) + dist(x_k, x_i) ≥ dist(x_j, x_i). Since the right-hand side is −∞ unless j > i, and thus ℓ_j ≥ ℓ_i, we thus have

log |(AB)_{ji}| ⪅ log(n) − dist(x_j, x_i)/ℓ_j,

proving the result.
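The decay-inheritance step above (cf. Schäfer et al., 2021b, Lemma 5.19) can be illustrated numerically. The sketch below is a simplified, self-contained analogue of our own design with a uniform decay length, so that |A_{jk}| ≤ e^{−|j−k|} and |B_{ki}| ≤ e^{−|k−i|}; the triangle inequality then gives the entrywise bound |(AB)_{ji}| ≤ n·e^{−|j−i|}:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
idx = np.arange(n)
decay = np.exp(-np.abs(idx[:, None] - idx[None, :]))    # e^{-|j-k|}

# random lower-triangular matrices whose entries obey |A_{jk}| <= e^{-|j-k|}
A = np.tril(rng.uniform(-1.0, 1.0, size=(n, n))) * decay
B = np.tril(rng.uniform(-1.0, 1.0, size=(n, n))) * decay

P = A @ B
# |(AB)_{ji}| <= sum_k e^{-|j-k| - |k-i|} <= n e^{-|j-i|},
# since |j-k| + |k-i| >= |j-i| by the triangle inequality
bound = n * decay
```

The product thus decays at the same exponential rate, losing only the factor n absorbed into the log(n) term of the claim.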
For a general exponential family likelihood, the matrix V does not necessarily satisfy (10). Instead, according to Nickisch & Rasmussen (2008), a quadratic approximation to the log-likelihood under mild conditions implies that

VV⊤ = L̂L̂⊤ + W⁻¹,

where W is the covariance of the effective likelihood obtained by dividing the approximate posterior by the prior. Assuming that W⁻¹ corresponds to a zero-order term in the context of a PDE, one can also obtain the result from the justification for the Gaussian-likelihood case above.
Proof of Proposition 2.5. Note that p(f*|f) = p(f̃)/p(f) = N_{n*}( µ* + K_{*o}K⁻¹(f − µ), K_{*|o} ), where K_{*|o} = K_{**} − K_{*o}K⁻¹K_{o*}, and q(f*|f) = q(f̃)/q(f) = N_{n*}( ν* − (V**)⁻⊤(V^{o*})⊤(f − ν), (V**(V**)⊤)⁻¹ ). Then, since KL(p(f*|f) ∥ q(f*|f)) is a KL divergence between two Gaussian distributions, we have

2 KL(p(f*|f) ∥ q(f*|f)) = (Gf + h)⊤(V**(V**)⊤)(Gf + h) + 2 KL( N_{n*}(0, K_{*|o}) ∥ N_{n*}(0, (V**(V**)⊤)⁻¹) ),

where G = −(V**)⁻⊤(V^{o*})⊤ − K_{*o}K⁻¹ and h = ν* + (V**)⁻⊤(V^{o*})⊤ν − µ* + K_{*o}K⁻¹µ. Using the fact that the first term is a quadratic form in f, one can show that

E_p[ (Gf + h)⊤(V**(V**)⊤)(Gf + h) ] = (Gµ + h)⊤(V**(V**)⊤)(Gµ + h) + tr( (V**(V**)⊤)(GKG⊤) ).

Then, we can see that KL(p(f*|f) ∥ q(f*|f)) is minimized with respect to ν* by Gµ + h = 0. This implies that

ν̂* = µ* − (V**)⁻⊤(V^{o*})⊤(ν − µ).

Plugging this in, we have

argmin_{V*∈S*} E_p[ KL(p(f*|f) ∥ q(f*|f)) ] = argmin_{V*∈S*} { tr((V*)⊤ K̃ V*) − log det(V**(V**)⊤) }
= argmin_{V*∈S*} Σ_{i=1}^{n*} ( (V*_{S*_i,i})⊤ K̃_{S*_i,S*_i} V*_{S*_i,i} − 2 log V*_{i,i} ).

Taking the first derivative of the i-th summand with respect to the column vector V*_{S*_i,i} and setting it to zero, one can show that V̂*_{S*_i,i} = K̃⁻¹_{S*_i,S*_i} e_1 / V*_{i,i}. Since V*_{i,i} is the first entry of V̂*_{S*_i,i}, we obtain V̂*_{S*_i,i} = b̃_i / √(b̃_{i,1}), where b̃_i = K̃⁻¹_{S*_i,S*_i} e_1. □
NNFQT4oBgHgl3EQfWDbE/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

O9AyT4oBgHgl3EQfUffK/content/tmp_files/2301.00128v1.pdf.txt ADDED
arXiv:2301.00128v1 [math.AG] 31 Dec 2022
|
| 2 |
+
CURVE SELECTION LEMMA IN ARC SPACES
|
| 3 |
+
NGUYEN HONG DUC
|
| 4 |
+
Abstract. We first generalize a curve selection lemma for Noetherian schemes and apply
|
| 5 |
+
it to prove a version of Curve Selection Lemma in arc spaces, answering affirmatively a
|
| 6 |
+
question by Reguera. Furthermore, thanks to a structure theorem of Grinberg, Kazhdan
|
| 7 |
+
and Drinfeld, we obtain other versions of Curve Selection Lemma in arc spaces.
|
| 8 |
+
1. Introduction
|
| 9 |
+
Curve Selection Lemma is shown to be a very useful tool in many geometric situations
|
| 10 |
+
in algebraic, analytic and semi-algebraic geometry. The classical version of Curve Selection
|
| 11 |
+
Lemma was achieved by Milnor [15]. Let X be a semi-algebraic subset in Rn and x be a
|
| 12 |
+
point in the closure ¯X of X. Then there exists a Nash curve (analytic and semi-algebraic
|
| 13 |
+
curve)
|
| 14 |
+
φ : [0, ε) → Rn
|
| 15 |
+
such that φ(0) = x and φ(t) ∈ X for all t ∈ (0, ε). In algebraic geometry a version of Curve
|
| 16 |
+
Selection Lemma for varieties, which can be proved by using a cutting method, is stated as
|
| 17 |
+
follows. Let X be a scheme of finite type over a field k. If a non-isolated point x is in the
|
| 18 |
+
Zariski closure ¯A of a constructible subset A, then there is a non-constant morphism
|
| 19 |
+
α: Spec (kx[[t]]) → ¯A
|
| 20 |
+
sending the closed point to x and the generic point to a point in A. If kx = k is equal to C
|
| 21 |
+
or R the parametrization can be chosen convergent, or algebraic.
|
| 22 |
+
We are interested in the study of Curve Selection Lemma in the arc spaces. The difficulty
|
| 23 |
+
is that the arc spaces are of infinite dimension and it is widely known that a plain formulation
|
| 24 |
+
of Curve Selection Lemma in infinite dimensional algebraic geometry as stated above is not
|
| 25 |
+
true in genreral as the following example shows.
|
| 26 |
+
Consider A := V
|
| 27 |
+
�
|
| 28 |
+
{x1 − xn
|
| 29 |
+
n}n∈N
|
| 30 |
+
�
|
| 31 |
+
. Let a be equal to the origin. There is no morphism
|
| 32 |
+
α : Spec(K[[t]]) → A
|
| 33 |
+
such that α(0) is equal to the origin a and such that the image of the generic point is not
|
| 34 |
+
the origin, since otherwise the order of the formal power series x1(α(t)) must be finite and
|
| 35 |
+
would be divisible by n for all positive integers n.
|
| 36 |
+
The first version of the Curve Selection Lemma for arc spaces, due to Reguera in [17, Corollary 4.8], is of the following form. Let X be an algebraic variety and let N and N′ be two irreducible subsets of the arc space X∞ such that ¯N ⊊ N′. Suppose that N is generically stable (e.g. weakly stable in the sense of Denef-Loeser [5], see Definition 3.1) with residue field K. Then there is a finite algebraic extension K ⊂ L and a morphism

α : Spec(L[[t]]) → X∞

whose special point is sent to the generic point of N and such that the image of the generic point Spec(L((t))) falls in N′ \ ¯N.

This version of the Curve Selection Lemma has many applications in the study of arc spaces of algebraic varieties (see for example [3, 4, 12, 13, 14, 16, 17]). In particular, it plays an essential role in the proofs of the Nash problem for surfaces in [4] and for terminal singularities in [2].

In this paper we introduce stronger versions of the Curve Selection Lemma. More concretely, we prove two versions of the Curve Selection Lemma in arc spaces under the assumption that either the closure of the set {x} is generically stable, or x is a non-degenerate k-arc (i.e. the corresponding morphism cannot factor through the singular locus of the considered variety, see Section 3.1) and A is generically stable. The first version (Theorem 3.8) answers affirmatively a question by Reguera in [17]. For the proof of the second version (Theorem 3.11), we need to generalize the structure theorem of Grinberg-Kazhdan and Drinfeld to generically stable subsets. Precisely, we prove that the formal neighbourhood of a generically stable subset of an arc space at a non-degenerate k-arc is isomorphic to the product of a local adic Noetherian formal k-scheme and an infinite-dimensional affine formal disk.
2. Curve selection lemma in Noetherian schemes

Throughout this note, k is a field. If x is a point of a k-scheme, then kx denotes the residue field of x. In this section we prove a strong version of the Curve Selection Lemma for Noetherian schemes which generalizes the version stated in the introduction.

Theorem 2.1. Let X be an irreducible Noetherian k-scheme and let z be its generic point. Then for any point x of X there exist an extension K of kx and an arc γ : Spec(K[[t]]) → X which maps the closed point to x and the generic point to z.

Proof. Consider the blowing-up h : Y → X of X along the closure Z of {x} in X. Let E ⊂ Y be a prime exceptional divisor which dominates Z. Let y be the generic point of E and let OY,y be the localization of OY at y. Since OY is Noetherian and since E is a divisor on Y, OY,y is a Noetherian ring of dimension 1. It follows that the normalization of the completion of OY,y is isomorphic to K[[t]], where K = ky. Let φ be the arc defined by the following composition of injective morphisms

OX → OX,x → OY,y → ˆOY,y → K[[t]],

where the last morphism is the normalization. Then φ(0) = x since OX,x → K[[t]] is a morphism of local rings, and φ(η) = z due to the injectivity of the morphism OX → K[[t]]. □

The following corollary is a direct consequence of the theorem, where we consider the closure of {y} instead of X.

Corollary 2.2. Let X be a Noetherian k-scheme. Let x, y be two points of X such that x is a specialization of y. Then there exist an extension K of kx and an arc γ : Spec(K[[t]]) → X which maps the closed point to x and the generic point to y.

Corollary 2.3. Let X be an irreducible Noetherian k-scheme of positive dimension. Let x be a non-isolated k-point of X and Z a strictly closed subset of X. Then there exists an arc

γ : Spec(k[[t]]) → X

which maps the closed point to x and the generic point outside Z.

Proof. Since Z is a closed subset of X, we can find another closed subset Y of X of dimension 1 containing x such that Y ∩ Z ⊂ {x}. Then, as in the proof of Theorem 2.1, one has a morphism

OY → OY,y → ˆOY,y → k[[t]],

which defines the expected arc. □
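As a sanity check of Theorem 2.1 in the simplest non-trivial case (our own illustration, not part of the original text), take X = A¹_k = Spec k[x] with z its generic point and let x be the origin. Then no blow-up is needed and the arc can be written down directly:

```latex
\[
\gamma^{\#} : k[x] \longrightarrow k[[t]], \qquad x \longmapsto t .
\]
```

The closed point of Spec(k[[t]]) (cut out by (t)) pulls back to the maximal ideal (x), i.e. the origin, while the generic point (the zero ideal of k((t))) pulls back to the zero ideal of k[x], i.e. the generic point z, since γ# is injective.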
3. Curve selection lemma in arc spaces
|
| 96 |
+
3.1. Generically and weakly stable subsets of the space of arcs. Let X be a k-variety.
|
| 97 |
+
For any n in N, denote by Xn (or, JnX) the k-scheme of n-jets of X, which represents the
|
| 98 |
+
functor from the category of k-algebras to the category of sets sending a k-algebra A to
|
| 99 |
+
Mork-schemes(Spec (A[t]/A(tn+1)), X). For m ≥ n, the truncation k[t]/(tm+1) → k[t]/(tn+1)
|
| 100 |
+
induces a morphism of k-schemes
|
| 101 |
+
πm
|
| 102 |
+
n : Xm → Xn.
|
| 103 |
+
We call the projective limit
|
| 104 |
+
X∞ := lim
|
| 105 |
+
←− Xn
|
| 106 |
+
the arc space of X. For any field extension K ⊇ k, the K-points, or K-arcs of X∞ correspond
|
| 107 |
+
one-to-one to the K[[t]]-points of X. A K-arc x of X∞ is said to be non-degenerate if its
|
| 108 |
+
corresponding morphism Spec(K[[t]]) → X∞ can not factor through the singular locus SingX
|
| 109 |
+
of X.
|
| 110 |
+
For each n ∈ N we denote by πn (or, πn,X) the natural morphism X∞ → Xn. Let A be a
|
| 111 |
+
subset of the arc space X∞. The set A is said to be weakly stable at level n, for some n in N,
|
| 112 |
+
if A is a union of fibers of πn : X∞ → Xn; the set A is said to be weakly stable if it is weakly
|
| 113 |
+
stable at some level.
|
| 114 |
+
Definition 3.1. A locally closed subset N of X∞\(SingX)∞ will be called generically stable
|
| 115 |
+
if there exists an open affine subscheme W of X∞, such that N ∩ W is weakly stable.
|
| 116 |
+
Remark 3.2. Our notion of generic stability is slightly different from that of [17, Definition
|
| 117 |
+
3.1]. They coincide if the base field k is perfect by [17, Theorem 4.1].
|
| 118 |
+
Lemma 3.3. [17, Corollary 4.6] Let N be an irreducible generically stable subset of X∞, and
|
| 119 |
+
let z be its generic point. Then:
|
| 120 |
+
(i) the ring �
|
| 121 |
+
OX∞,z is Noetherian;
|
| 122 |
+
(ii) if N′ is an irreducible subset of X∞ such that N′ ⊃ N, N ̸= N′, then �
|
| 123 |
+
ON′,z is a
|
| 124 |
+
Noetherian local ring of dimension ⩾ 1.
|
| 125 |
+
Proof. The proof of [17, Corollary 4.6] works with our definition of generically stable subsets
|
| 126 |
+
of arc spaces.
|
| 127 |
+
□
|
| 128 |
+
Most of the locally closed subsets considered in the literature are generically stable. Ex-
|
| 129 |
+
amples are cylindrical sets, contact loci with ideals and maximal divisorial sets (see [8] and
|
| 130 |
+
[11]). The following lemma gives us several additional classes of generically stable subsets of
|
| 131 |
+
the arc space X∞. cf. [17, Lemma 3.6].
|
| 132 |
+
Lemma 3.4. Let N be an irreducible locally closed subset of X∞ with the generic point z.
|
| 133 |
+
Then N is generically stable if one of the following statements hold:
|
| 134 |
+
(i) N is semi-algebraic1 and the corresponding morphism of z is dominant;
|
| 135 |
+
1see [5, (2.2)], [17, 3.4]
|
| 136 |
+
3
|
| 137 |
+
|
| 138 |
+
(ii) there exists a resolution of singularities h: Y → X and a prime divisor E of Y such
|
| 139 |
+
that h∞ maps the generic point of π−1
|
| 140 |
+
Y (E) to z.
|
| 141 |
+
3.2. Reguera's Curve Selection Lemma.

Theorem 3.5 ([17]). Let N and N′ be irreducible locally closed subsets of X∞ such that ¯N ⊊ N′ and N is generically stable. Let z, z′ be the generic points of N, N′ respectively. Then there exists an arc

φ : Spec K[[t]] → N′,

where K is a finite algebraic extension of kz, such that φ(0) = z and φ(η) ∈ N′ \ N.

Corollary 3.6. Assume char k = 0. Let N be an irreducible subset of X∞ strictly contained in an irreducible component of X^Sing_∞ with generic point z. Then there exists an arc

φ : Spec K[[t]] → N′,

where K is a finite algebraic extension of kz, such that φ(0) = z and φ(η) ∈ X^Sing_∞ \ N.

Question 3.7 ([17, Page 127]). Let N and N′ be irreducible locally closed subsets of X∞ such that ¯N ⊊ N′ and N is generically stable. Let z, z′ be the generic points of N, N′ respectively. Is it true that there exists an arc

φ : Spec K[[t]] → N′,

where K is a finite algebraic extension of kz, such that φ(0) = z and φ(η) = z′?
3.3. Strong versions of the Curve Selection Lemma. In this section we prove several strong versions of the Curve Selection Lemma. The first one answers affirmatively Reguera's question (Question 3.7).

Theorem 3.8. Let N and N′ be irreducible locally closed subsets of X∞ such that ¯N ⊊ N′ and N is generically stable. Let z, z′ be the generic points of N, N′ respectively. Then there exists an arc

φ : Spec K[[t]] → N′,

where K is a finite algebraic extension of kz, such that φ(0) = z and φ(η) = z′.

Proof. Since N is a generically stable subset of X∞, it follows from Lemma 3.3 that the ring O_{N′,z} is Noetherian. Applying the curve selection lemma for Noetherian kz-schemes (Theorem 2.1), we obtain an arc defined by an injective morphism of local kz-algebras

O_{N′,z} → K[[t]],

where K is a finite algebraic extension of kz. Hence the composition

O_{N′} → O_{N′,z} → K[[t]]

defines the expected arc. □

In order to prove other strong versions of the Curve Selection Lemma we need the following structure theorem, which generalizes the Drinfeld-Grinberg-Kazhdan theorem [9, 6, 7]. For its proof we use the proof of the Drinfeld-Grinberg-Kazhdan theorem in [1, Theorems 4.1-4.2].

Lemma 3.9. Let N be an irreducible generically stable subset of X∞. Let γ ∈ N be a non-degenerate k-point of X∞. Then there exist a local adic Noetherian k-algebra A and an isomorphism

ˆO_{N,γ} ≅ k[[N]] ˆ⊗ A,

where k[[N]] stands for k[[x1, x2, . . . , xn, . . .]].

Proof. As in the proofs of the Drinfeld-Grinberg-Kazhdan theorem (see [1, 6, 7]), we may assume that X is a complete intersection, i.e. the subscheme of Spec k[x1, . . . , xd, y1, . . . , yl] defined by the equations f1 = . . . = fl = 0, such that the arc γ0(t) = (x0(t), y0(t)) is not contained in the subscheme of X defined by det(∂f/∂y) = 0. Here ∂f/∂y is the matrix of partial derivatives ∂fi/∂yj. It follows from the proof of [1, Theorem 4.1], [7, Theorem 2.1.1] that there is an isomorphism

θ : ˆX∞,γ → ˆ(A^d_k)∞,0 × ˆY_y,

where y is a k-point of some k-variety Y. Moreover, for each natural number n, there is a morphism

φn : ˆ(A^d_k)n,0 × ˆY_y → ˆX_{n,γn}

such that the following diagram commutes

[commutative diagram: ˆN_γ ⊂ ˆX∞,γ, with θ mapping ˆX∞,γ to ˆ(A^d_k)∞,0 × ˆY_y; the vertical morphisms ˆπn,X and pn, induced by truncation maps, go down to ˆX_{n,γn} and ˆ(A^d_k)n,0 × ˆY_y respectively, and φn maps the latter to ˆX_{n,γn}]

where γn = πn,X(γ). We take n a positive integer such that N ∩ W is weakly stable at level n for some open subset W of X∞. Let Nn be the closure of πn(N) in Xn. Since N is generically stable, it follows that ˆ(π^{-1}_n(Nn))_γ ≅ ˆN_γ. Then the preimage φ^{-1}_n(ˆN_{n,γn}) is an affine formal subscheme of ˆ(A^d_k)n,0 × ˆY_y and therefore

φ^{-1}_n(ˆN_{n,γn}) = Spf A,

for some local adic Noetherian k-algebra A. Since pn is a trivial fibration with fiber Spf(k[[N]]), it follows that

Spf ˆO_{N,γ} ≅ ˆπ^{-1}_{n,X}(φn(Spf A)) = θ^{-1}(p^{-1}_n(Spf A)) ≅ Spf k[[N]] × Spf A

and hence

ˆO_{N,γ} ≅ k[[N]] ˆ⊗ A. □

Remark 3.10. By a more concrete argument we may indeed choose the k-algebra A in the statement of Theorem 3.11 such that Spf A is the completion of a k-variety at a k-point. Nevertheless, we do not need such a strong result in this paper.

Theorem 3.11. Let N be an irreducible generically stable subset of X∞ with generic point z. Let γ ∈ N be a non-degenerate k-point. Then there exist an extension K of k and an arc

φ : Spec K[[t]] → N,

such that φ(0) = γ and φ(η) = z.

Proof. Since γ is a non-degenerate k-arc, it follows from Lemma 3.9 that there is an isomorphism of k-algebras

ˆO_{N,γ} ≅ k[[N]] ˆ⊗ A,

where k[[N]] stands for k[[x1, x2, . . . , xn, . . .]] and A is a local adic Noetherian k-algebra. Applying Theorem 2.1, we get an arc defined by an injective morphism of local k-algebras

A → K1[[t]].

We denote by K2 the quotient field of the integral domain k[[N]] and by K the completed tensor product K1 ˆ⊗ K2. Let φ be the arc defined by the following composition of injective morphisms

O_N → ˆO_{N,γ} ≅ k[[N]] ˆ⊗ A → k[[N]] ˆ⊗ K1[[t]] → K[[t]].

Then φ(0) = γ and φ(η) = z. □
The following example shows that the assumption that N is generically stable in Theorem 3.8 and the assumption that γ is non-degenerate in Theorem 3.11 are necessary.

Example 3.12 (Lejeune-Jalabert and Reguera). Let X be the Whitney umbrella x3² = x1x2² in A³_C. Then SingX is defined by x2 = x3 = 0. Let γ be the point in X∞ determined by any arc x1(t), x2(t), x3(t) such that

ordt x1(t) = 1 and x2(t) = x3(t) = 0.

Let N be the closure of the point γ and let N′ be the set π^{-1}_X(SingX) \ (SingX)∞, the set of arcs centered at some point of SingX. Then N ⊂ N′ ([10, Lemma 2.12]), but there does not exist an arc φ : Spec K[[s]] → N′ which maps the closed point to γ and the generic point to the generic point of N′.

In fact, assume that such an arc exists, i.e. there is a wedge whose coordinates

x1(t, s), x2(t, s), x3(t, s) ∈ K[[t, s]]

satisfy x3² = x1x2², x1(t, 0) = x1(t), x2(t, 0) = x2(t) = 0 and x3(t, 0) = x3(t) = 0. Then ord(t,s) x1(t, s) = 1 and thus

2 ord(t,s) x3(t, s) = 1 + 2 ord(t,s) x2(t, s).

Hence x2(t, s) and x3(t, s) must be equal to zero, i.e. the image of the generic point of Spec K[[s]] is in (SingX)∞, a contradiction.
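The contradiction in the order computation above can be made fully explicit (a short verification of our own, using only the defining relation and the additivity of the order function on the domain K[[t, s]]):

```latex
\[
2\,\mathrm{ord}_{(t,s)}\,x_3(t,s)
  \;=\; \mathrm{ord}_{(t,s)}\bigl(x_1(t,s)\,x_2(t,s)^2\bigr)
  \;=\; \mathrm{ord}_{(t,s)}\,x_1(t,s) + 2\,\mathrm{ord}_{(t,s)}\,x_2(t,s)
  \;=\; 1 + 2\,\mathrm{ord}_{(t,s)}\,x_2(t,s).
\]
```

If x2(t, s) and x3(t, s) were both nonzero, the left-hand side would be an even integer while the right-hand side would be odd, which is impossible; so at least one of them, and then by the relation both, must vanish.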
Notice that the output of the previous result is a parametrization defined over a field K which is of infinite transcendence degree over the base field. In many applications it is necessary to obtain a Curve Selection Lemma whose outcome curve is defined over the base field.

Corollary 3.13. Let N be an irreducible generically stable subset of X∞ with generic point z. Let P be another irreducible closed subset of X∞ not containing N. Let γ ∈ N be a non-degenerate k-point. Then there exists an arc

φ : Spec k[[t]] → N,

which maps the closed point to γ and the generic point outside P.

Proof. It is proved in the same way as the proof of Theorem 3.11, by using a cutting method (cf. the proof of Corollary 2.3). □
References

[1] D. Bourqui, J. Sebag, The Drinfeld-Grinberg-Kazhdan theorem for formal schemes and singularity theory, Confluentes Math. 9 (2017), no. 1, 29–64.
[2] T. de Fernex, R. Docampo, Terminal valuations and the Nash problem, Invent. Math. 203 (2016), 303–331.
[3] J. F. de Bobadilla, M. Pe Pereira, Nash problem for surface singularities is a topological problem, Adv. Math. 230 (2012), no. 1, 131–176.
[4] J. F. de Bobadilla, M. Pe Pereira, The Nash problem for surfaces, Ann. of Math. (2) 176 (2012), no. 3, 2003–2029.
[5] J. Denef, F. Loeser, Germs of arcs on singular algebraic varieties and motivic integration, Invent. Math. 135 (1999), 201–232.
[6] V. Drinfeld, On the Grinberg-Kazhdan formal arc theorem, Preprint (2002), math.AG/0203263.
[7] V. Drinfeld, The Grinberg-Kazhdan formal arc theorem and the Newton groupoids, 37–56, World Sci. Publ., Hackensack, NJ, 2020.
[8] L. Ein, R. Lazarsfeld, M. Mustață, Contact loci in arc spaces, Compos. Math. 140 (2004), 1229–1244.
[9] M. Grinberg, D. Kazhdan, Versal deformations of formal arcs, Geom. Funct. Anal. 10 (2000), 543–555.
[10] S. Ishii, J. Kollár, The Nash problem on arc families of singularities, Duke Math. J. 120 (2003), 601–620.
[11] S. Ishii, Maximal divisorial sets in arc spaces, Algebraic geometry in East Asia (Hanoi 2005), 237–249, Adv. Stud. Pure Math., 50, Math. Soc. Japan, Tokyo, 2008.
[12] M. Lejeune-Jalabert, Arcs analytiques et résolution minimale des singularités des surfaces quasi-homogènes, Springer LNM 777, 303–336, 1980.
[13] M. Lejeune-Jalabert, A. Reguera-López, Arcs and wedges on sandwiched surface singularities, Amer. J. Math. 121 (1999), 1191–1213.
[14] M. Lejeune-Jalabert, A. Reguera-López, Exceptional divisors which are not uniruled belong to the image of the Nash map, J. Inst. Math. Jussieu 11 (2012), no. 2, 273–287.
[15] J. Milnor, Singular points of complex hypersurfaces, Princeton Univ. Press (1968), iii+122 pp.
[16] M. Pe Pereira, Nash problem for quotient surface singularities, J. Lond. Math. Soc. (2) 87 (2013), no. 1, 177–203.
[17] A. J. Reguera, A curve selection lemma in spaces of arcs and the image of the Nash map, Compos. Math. 142 (2006), 119–130.

†TIMAS, Thang Long University, Nghiem Xuan Yem, Hanoi, Vietnam.
Email address: duc.nh@thanglong.edu.vn
O9E0T4oBgHgl3EQfjwGf/content/tmp_files/2301.02464v1.pdf.txt
ADDED
@@ -0,0 +1,976 @@
Architect, Regularize and Replay (ARR): a Flexible
Hybrid Approach for Continual Learning

Vincenzo Lomonaco
Department of Computer Science
University of Pisa
vincenzo.lomonaco@unipi.it

Lorenzo Pellegrini
Department of Computer Science and Engineering
University of Bologna
l.pellegrini@unibo.it

Gabriele Graffieti
Department of Computer Science and Engineering
University of Bologna
gabriele.graffieti@unibo.it

Davide Maltoni
Department of Computer Science and Engineering
University of Bologna
davide.maltoni@unibo.it

Abstract

In recent years we have witnessed a renewed interest in machine learning methodologies, especially for deep representation learning, that could overcome basic i.i.d. assumptions and tackle non-stationary environments subject to various distributional shifts or sample selection biases. Within this context, several computational approaches based on architectural priors, regularizers and replay policies have been proposed with different degrees of success depending on the specific scenario in which they were developed and assessed. However, designing comprehensive hybrid solutions that can flexibly and generally be applied with tunable efficiency-effectiveness trade-offs still seems a distant goal. In this paper, we propose Architect, Regularize and Replay (ARR), a hybrid generalization of the renowned AR1 algorithm and its variants, that can achieve state-of-the-art results in classic scenarios (e.g. class-incremental learning) but also generalize to arbitrary data streams generated from real-world datasets such as CIFAR-100, CORe50 and ImageNet-1000.

1 Introduction

Continual Machine Learning is a challenging research problem with profound scientific and engineering implications [22]. On one hand, it undermines the foundations of classic machine learning systems relying on i.i.d. assumptions; on the other hand, it offers a path towards efficient and scalable human-centered AI systems that can learn and think like humans, swiftly adapting to the ever-changing nature of the external world. However, despite the recent surge of interest from the machine learning and deep learning communities on the topic and the prolific scientific activity of the last few years, this vision is far from being reached.

While most continual learning algorithms significantly reduce the impact of catastrophic forgetting on specific scenarios, it is difficult to generalize those results to settings in which they have not been specifically designed to operate (lack of robustness and generality). Moreover, they are mostly focused on vertical and exclusive approaches to continual learning based on regularization, replay or architectural changes of the underlying prediction model.

Book Chapter Preprint.
arXiv:2301.02464v1 [cs.LG] 6 Jan 2023

In this paper, we summarize the effort made in the formulation of hybrid strategies for Continual Learning that can be more robust, generally applicable and effective in real-world application contexts. In particular, we will focus on the definition of the "Architect, Regularize and Replay" (ARR) method: a general reformulation and generalization of the renowned AR1 algorithm [30] with all its variants [23, 36], and, arguably, one of the first hybrid continual learning methods proposed [34] (Sec. 4). Through a number of experiments on state-of-the-art benchmarks such as CIFAR-100, CORe50 and ImageNet-1000, we show the efficiency and effectiveness of the proposed approach with respect to other existing state-of-the-art methods (Sec. 5). Then, we discuss tunable parameters to easily control the effectiveness-efficiency trade-off, such as the selection of the latent replay layer (Sec. 5.4) and the replay memory size (Sec. 5.5). Finally, we discuss the current ARR implementation porting in Avalanche [27] (Sec. 6).

2 Background and Problem Formulation

Continual Learning (CL) is mostly concerned with the concept of learning from a stream of ephemeral non-stationary data that can be processed in separate computational steps and cannot be revisited if not explicitly memorized. In an agnostic continual learning scenario, data arrives in a streaming fashion as a (possibly infinite) sequence S of, what we call, learning experiences e, so that S = e1, . . . , en. For simplicity, we assume a supervised classification problem, where each experience ei consists of a batch of samples Di, where each sample is a tuple ⟨x^i_k, y^i_k⟩ of input and target data, respectively, and the labels y^i_k are from the set Yi, which is a subset of the entire universe of classes Y. However, we note that this formulation is very easy to generalize to different CL problems. Usually Di is split into a separate train set D^i_train and test set D^i_test. A continual learning algorithm A_CL is a function with the following signature [19, 2]:

A_CL : ⟨f^CL_{i-1}, D^i_train, M_{i-1}, t_i⟩ → ⟨f^CL_i, M_i⟩   (1)
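Signature (1) can be read as a stateful update function that consumes the previous model and memory buffer and returns the new ones. The following minimal Python sketch (all names are illustrative, not the authors' code) shows one such step for a naive replay strategy with a fixed-size buffer; `fit` stands in for any training routine:

```python
import random

def naive_replay_cl_step(model, train_set, memory, task_label=None,
                         memory_size=100, fit=None):
    """One CL step A_CL: (f_{i-1}, D^i_train, M_{i-1}, t_i) -> (f_i, M_i).

    `model` is the previous model f_{i-1}; `train_set` is D^i_train as a list
    of (x, y) pairs; `memory` is the fixed-size buffer M_{i-1}; `task_label`
    (t_i) is accepted but ignored, matching the task-free scenario assumed in
    the paper. `fit(model, samples)` is any routine training `model` in place.
    """
    # Train on the current experience augmented with replayed past samples.
    if fit is not None:
        fit(model, list(train_set) + list(memory))
    # Subsample the union of old memory and new data so the buffer stays
    # bounded, i.e. memory/compute does not grow with the stream length.
    combined = list(memory) + list(train_set)
    new_memory = random.sample(combined, min(memory_size, len(combined)))
    return model, new_memory
```

Iterating this function over the stream S = e1, . . . , en reproduces the left-to-right application of Eq. (1), with M_0 initialized to the empty buffer.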
where f^CL_i is the model learned after training on experience ei and Mi is a buffer of past knowledge (possibly empty), such as previous samples or activations, stored from the previous experiences and usually of fixed size. The term ti is a task label that may be used to identify the correct data distribution (or task). All the experiments in this paper assume the most challenging scenario, in which ti is unavailable. Usually, CL algorithms are limited in the amount of resources that they can use, and they should be designed to scale up to a large number of training experiences without increasing their memory / computational overheads over time. The objective of a CL algorithm is to minimize the loss L_S over the entire stream of data S, composed of n distinct experiences:

L_S(f^CL_n, n) = (1 / Σ_{i=1}^{n} |D^i_test|) Σ_{i=1}^{n} L_exp(f^CL_n, D^i_test)   (2)

L_exp(f^CL_n, D^i_test) = Σ_{j=1}^{|D^i_test|} L(f^CL_n(x^i_j), y^i_j),   (3)
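Eqs. (2)-(3) amount to a sample-weighted average of the final model's per-sample loss over all test sets in the stream: experiences with more test samples weigh more. A small Python sketch (illustrative only; names are ours) makes the weighting explicit:

```python
def stream_loss(model, test_sets, sample_loss):
    """Compute L_S of Eq. (2): the loss of the final model f^CL_n summed over
    all experience test sets D^i_test and normalized by the total number of
    test samples in the stream.

    `test_sets` is a list of lists of (x, y) pairs, one list per experience;
    `sample_loss(model, x, y)` returns L(f(x), y) for a single sample,
    e.g. a cross-entropy value.
    """
    total_samples = sum(len(d) for d in test_sets)
    total_loss = 0.0
    for d in test_sets:
        # The inner sum is L_exp of Eq. (3) for one experience.
        total_loss += sum(sample_loss(model, x, y) for x, y in d)
    return total_loss / total_samples
```

Note that normalizing by the grand total of samples, rather than averaging per-experience means, is what distinguishes Eq. (2) from a plain mean of Eq. (3) values when test sets differ in size.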
where the loss L(f CL
|
| 126 |
+
n
|
| 127 |
+
(x), y) is computed on a single sample ⟨x, y⟩, such as cross-entropy in
|
| 128 |
+
classification problems. Hence, the main assumption in this formulation is that all the concepts
|
| 129 |
+
encountered over time are still relevant (the drift is only virtual) and there’s no conflicting evidence.
|
| 130 |
+
This is quite a common assumption for the deep continual learning literature which is more concerned
|
| 131 |
+
with building robust and general representations over time rather than building systems that can
|
| 132 |
+
quickly adapt to changing circumstances.
|
| 133 |
+
3
|
| 134 |
+
Towards Hybrid Continual Learning Approaches
We show in Fig. 1 some of the most popular and recent CL approaches divided into the above-introduced categories and their combinations. In the diagram, we differentiate methods with rehearsal (replay of explicitly stored training samples) from methods with generative replay (replay of latent representations or of the training samples). Crucially, although an increasing number of methods have been proposed, there is no consensus on which training schemes and performance metrics are better suited to evaluate CL models. Different sets of metrics have been proposed to evaluate CL performance on supervised and unsupervised learning tasks (e.g. [12, 16, 10]). In the absence of standardized metrics and evaluation schemes, it is unclear what it means to endow a method with CL capabilities. In particular, a number of CL models still require large computational and memory resources that hinder their ability to learn in real time, or with a reasonable latency, from data streams.

Figure 1: Venn diagram of some of the most popular CL strategies: CWR [24], PNN [40], EWC [17], SI [44], LWF [21], ICARL [38], GEM [29], FearNet [15], GDM [35], ExStream [13], Pure Rehearsal, GR [41], MeRGAN [42] and AR1 [31]. The upper Rehearsal and Generative Replay categories can be seen as subsets of replay strategies.

It is also worth noting that, while a multitude of methods for each main category has been proposed, it is still difficult to find hybrid algorithmic solutions that can flexibly leverage the often orthogonal advantages of the three different approaches (i.e. architectural, regularization and replay), depending on the specific application needs and the target efficiency-effectiveness trade-off. However, some evidence shows that effective biological continual learning systems (such as the human brain) make use of all these distinct functionalities.
In this paper, we argue that in the near- and long-term future of lifelong learning machines we will witness a significantly growing interest in the development of hybrid continual learning algorithms [28], and we propose ARR as one of the first methodologies that practically implements such a vision.
4 ARR: Architect, Regularize and Replay
The Architect, Regularize and Replay algorithm, ARR for short, is a flexible generalization of the AR1 algorithm and its variants (CWR+, CWR*, AR1*, AR1Free) [26, 36]. With a proper initialization of its hyper-parameters, ARR can be instantiated as any of the aforementioned algorithms based on the desired efficiency-efficacy trade-off [37]. It can use pre-trained parameters, as suggested by a consolidated trend in the field [7], or start from a random initialization. Algorithm 1 describes ARR in detail; it is based on three main components: architectural, regularization and replay.
4.1 Architectural Component
The core concept behind an architectural approach is to isolate and preserve some parameters while adding new parameters to house new knowledge. CWR+, an evolution of CWR [24] whose pseudo-code is reported in Algorithm 2 of [30], maintains two sets of weights for the output classification layer: cw are the consolidated weights (for stability) used for inference, and tw are the temporary weights (for plasticity) used for training; cw are initialized to 0 before the first experience and then iteratively updated, while tw are reset to 0 before each training experience.
Figure 2: Architectural diagram of ARR [36].
In [30], the authors proposed an extension of CWR+ called CWR*, which works both under Class-Incremental [39] and Class-Incremental with Repetition settings [6]; in particular, under Class-Incremental with Repetition the incoming experiences include examples of both new and already encountered classes. For already known classes, instead of resetting the weights to 0, the consolidated weights are reloaded. Furthermore, in the consolidation step a weighted sum is now used: the first term represents the weight of the past and the second term is the contribution of the current training experience. The weight wpastj used for the first term is proportional to the ratio pastj / curj, where pastj is the total number of examples of class j encountered in past experiences and curj is their count in the current experience. In the case of a large number of small non-i.i.d. training experiences, the weight of the most recent experiences may be too low, thus hindering the learning process. To avoid this, a square root is used to smooth the final value of wpastj.
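The CWR* consolidation step described above (square-rooted past/current ratio, mean-shifted temporary weights, weighted average, counter update) can be sketched as follows; the scalar weight groups and dictionary layout are simplifications for illustration, not the original implementation:

```python
import math

def cwr_star_consolidate(cw, tw, past, cur):
    """Consolidate temporary weights tw into consolidated weights cw.

    cw, tw: dict class -> weight (scalar here, a weight group in practice)
    past:   dict class -> number of examples seen in past experiences
    cur:    dict class -> number of examples in the current experience
    """
    mean_tw = sum(tw.values()) / len(tw)  # mean-shift of the current weights
    for j in cur:
        # The square root smooths the weight of the past, so that small
        # recent experiences are not drowned under a long history.
        w_past = math.sqrt(past.get(j, 0) / cur[j])
        cw[j] = (cw.get(j, 0.0) * w_past + (tw[j] - mean_tw)) / (w_past + 1)
        past[j] = past.get(j, 0) + cur[j]
    return cw, past
```

Note that for a class never seen before (pastj = 0), w_past is 0 and the consolidated weight is simply the mean-shifted temporary weight, which is the NC special case mentioned in the Algorithm 1 caption.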
4.2 Regularization Component
The well-known Elastic Weight Consolidation (EWC) pure regularization approach [17] controls forgetting by proportionally constraining the model weights based on their estimated importance with respect to previously encountered data distributions and tasks. To this purpose, in a classification setting, a regularization term is added to the conventional cross-entropy loss, where each weight θk of the model is pulled back toward its optimal value θ∗k with a strength Fk proportional to its estimated importance for modeling past knowledge:

L = Lcross(·) + (λ/2) · Σk Fk · (θk − θ∗k)².   (4)
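A minimal sketch of the EWC-regularized loss of equation 4; plain Python lists stand in for tensors, and the cross-entropy value is passed in precomputed:

```python
def ewc_loss(cross_entropy, theta, theta_star, fisher, lam):
    """Equation 4: task loss plus a quadratic pull toward the past-optimal
    weights theta_star, scaled per-weight by the estimated importance F_k."""
    penalty = sum(f * (t - t_star) ** 2
                  for f, t, t_star in zip(fisher, theta, theta_star))
    return cross_entropy + (lam / 2.0) * penalty
```

Weights with zero importance contribute nothing to the penalty, so they remain free to move; weights with large Fk are strongly anchored.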
Synaptic Intelligence (SI) [44] is an equally well-known, lightweight variant of EWC where, instead of updating the Fisher information F at the end of each experience1, the Fk are obtained by integrating the loss over the weight trajectories, exploiting information already available during gradient descent. For both approaches, the weight update rule corresponding to equation 4 is:

θ′k = θk − η · ∂Lcross(·)/∂θk − η · Fk · (θk − θ∗k)   (5)

1In this paper, for the EWC and ARR implementations we use a single Fisher matrix updated over time, following the approach described in [30].
where η is the learning rate. This equation has two drawbacks. Firstly, the value of λ must be carefully calibrated: in fact, if its value is too high, the optimal value of some parameters could be overshot, leading to divergence (see the discussion in Section 2 of [30]). Secondly, two copies of all the model weights must be maintained to store both θk and θ∗k, doubling the memory consumption for each weight. To overcome the above problems, the authors of [26] propose to replace the update rule of equation 5 with:
θ′k = θk − η · (1 − Fk/maxF) · ∂Lcross(·)/∂θk   (6)
where maxF is the maximum value for weight importance (we clip to maxF the Fk values larger than maxF). Basically, the learning rate is reduced to 0 (i.e., complete freezing) for the weights of highest importance (Fk = maxF) and kept at η for the weights with Fk = 0. It is worth noting that these two update rules work differently: the former still moves weights with high Fk in the direction opposite to the gradient and then makes a step in the direction of the past (optimal) values; the latter tends to completely freeze weights with high Fk. However, in the experiments conducted in [26], the two approaches lead to similar results, and therefore the second one is preferable since it solves the aforementioned drawbacks. Regularization of the learning parameters can be enforced both on the low-level generic features and on the class-specific discriminative features, as implemented in AR1*. However, for the sake of simplicity, in ARR we consider only the application of such regularization terms to the last group, since freezing or slowly fine-tuning the low-level generic features has already proved to be an effective strategy.
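The contrast between the two update rules (equations 5 and 6) is easy to see on a single scalar weight; a sketch, with grad standing for ∂Lcross(·)/∂θk:

```python
def update_ewc(theta, grad, theta_star, f, eta):
    # Equation 5: gradient step plus a pull-back toward the past optimum;
    # a large f (or lambda folded into f) can overshoot theta_star.
    return theta - eta * grad - eta * f * (theta - theta_star)

def update_arr(theta, grad, f, eta, max_f):
    # Equation 6: per-weight learning-rate scaling. Weights with
    # f == max_f are fully frozen; weights with f == 0 learn at full pace.
    # No copy of theta_star is needed, halving the memory overhead.
    f = min(f, max_f)  # clip importance values larger than max_f
    return theta - eta * (1.0 - f / max_f) * grad
```

With f at its maximum, update_arr leaves the weight untouched, whereas update_ewc still moves it and then pulls it back, which is the behavioral difference discussed above.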
4.3 Replay Component
In [36, 33] it was shown that a very simple rehearsal implementation (hereafter denoted as native rehearsal), where for every training experience a random subset of the experience examples is added to the external storage to replace an (equally random) subset of the external memory, is no less effective than more sophisticated approaches such as iCaRL. Therefore, in [36] the authors opted for simplicity and compared the learning trend of CWR* and AR1* of a MobileNetV12 trained with and without rehearsal on CORe50 NICv2 – 391 [26]. They used the same protocol and hyper-parameters introduced in [25] and a rehearsal memory of 1,500 examples. Their study makes it well evident that even a moderate external memory (about 1.27% of the total training set) is very effective in improving the accuracy of both approaches and in reducing the gap with the cumulative upper bound which, for this model, is ∼85%.
In deep neural networks, the layers close to the input (often denoted as representation layers) usually perform low-level feature extraction and, after proper pre-training on a large dataset (e.g., ImageNet), their weights are quite stable and reusable across applications. On the other hand, higher layers tend to extract class-specific discriminant features, and their tuning is often important to maximize accuracy.
A latent replay approach (see Figure 2) [36] can then be formulated: instead of maintaining copies of input examples in the external memory in the form of raw data, we can store the activation volumes at a given layer (denoted as the latent replay layer). To keep the representation stable and the stored activations valid, we propose to slow down the learning at all the layers below the latent replay one and to leave the layers above free to learn at full pace. In the limit case where the lower layers are completely frozen (i.e., slow-down to 0), latent replay is functionally equivalent to rehearsal from the input, but achieves a computational and storage saving thanks to the smaller fraction of examples that need to flow forward and backward across the entire network and the typical information compression that networks perform at higher layers.

2The network was pre-trained on ImageNet-1k.
Algorithm 1 ARR pseudocode: Θ̄ are the class-shared parameters of the representation layers; the notation cw[j] / tw[j] is used to denote the groups of consolidated / temporary weights corresponding to class j. Note that this version continues to work under New Classes (NC), which is seen here as a special case of New Classes and Instances (NIC) [24]; in fact, since in NC the classes in the current experience were never encountered before, the step at line 7 loads 0 values for classes in Di because cw[j] were initialized to 0 and in the consolidation step (line 13) the wpastj values are always 0. The external random memory RM is populated across the training experiences. Note that the amount h of examples to add progressively decreases to maintain a nearly balanced contribution from the different training experiences, but no constraints are enforced to achieve class balancing. λ is the regularization strength, α is the replay layer. The three input parameters default to 0 if omitted.

 1: procedure ARR(RMsize, λ, α)
 2:   RM = ∅, cw[j] = 0 and pastj = 0 ∀j
 3:   init Θ̄ randomly or from a pre-trained model (e.g. on ImageNet)
 4:   for each training experience ei:
 5:     tw[j] = cw[j] if class j is in Di, 0 otherwise
 6:     mbe = |Di| / ((|Di| + RMsize) / mbsize) if ei > e1, mbsize otherwise
 7:     mbr = mbsize − mbe
 8:     for each epoch:
 9:       sample mbe examples from Di and mbr examples from RM
10:       train the model on the sampled data (replay data injected at layer α):
11:         if ei = e1, learn both Θ̄ and tw
12:         else, learn tw and Θ̄ with λ to control forgetting
13:     for each class j in Di:
14:       wpastj = sqrt(pastj / curj), where curj is the number of examples of class j in Di
15:       cw[j] = (cw[j] · wpastj + (tw[j] − avg(tw))) / (wpastj + 1)
16:       pastj = pastj + curj
17:     test the model by using Θ̄ and cw
18:     h = RMsize / i
19:     Radd = random sampling of h examples from Di
20:     Rreplace = ∅ if i == 1, a random sample of h examples from RM otherwise
21:     RM = (RM − Rreplace) ∪ Radd
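The external memory update of Algorithm 1 (steps 18-21) keeps a nearly balanced contribution from each experience by shrinking h over time; a standalone sketch, with integer division for h as an assumption:

```python
import random

def update_replay_memory(rm, experience, rm_size, i, rng=random):
    """Steps 18-21 of Algorithm 1: add h = rm_size / i new examples from the
    current experience, replacing an equal number of random old ones (no
    replacement on the first experience, where the memory is simply filled)."""
    h = rm_size // i  # integer division assumed here for illustration
    r_add = rng.sample(experience, min(h, len(experience)))
    if i > 1:
        # Remove |r_add| random victims from the memory before adding.
        for victim in rng.sample(range(len(rm)), len(r_add)):
            rm[victim] = None
        rm = [p for p in rm if p is not None]
    rm.extend(r_add)
    return rm
```

After experience i, each past experience contributes roughly rm_size / i examples in expectation, while the memory size stays constant at rm_size.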
In the general case where the representation layers are not completely frozen, the activations stored in the external memory may suffer from an aging effect (i.e., as time passes they tend to deviate increasingly from the activations that the same pattern would produce if fed forward from the input layer). However, if the training of these layers is sufficiently slow, the aging effect is not disruptive, since the external memory has enough time to be updated with newly acquired examples. When latent replay is implemented with mini-batch SGD training: (i) in the forward step, a concatenation is performed at the replay layer (on the mini-batch dimension) to join examples coming from the input layer with activations coming from the external storage; (ii) the backward step is stopped just before the replay layer for the replay examples.
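The forward step just described amounts to concatenating stored activations with the activations of fresh examples at the replay layer; a framework-agnostic sketch, with plain callables and lists standing in for network stages and activation tensors:

```python
def latent_replay_forward(lower, upper, fresh_inputs, stored_activations):
    """lower: layers below the replay layer, run only on fresh inputs;
    upper: layers above it, run on the concatenated mini-batch.
    Replay examples never traverse `lower`, which is the source of the
    computational saving discussed above; in a real framework the backward
    pass of replay examples would also be stopped at the replay layer."""
    fresh_acts = [lower(x) for x in fresh_inputs]
    # Concatenation happens at the replay layer, on the mini-batch dimension.
    joined = fresh_acts + list(stored_activations)
    return [upper(a) for a in joined]
```

Only the fresh examples pay the cost of the lower layers; the stored activations enter the network directly at the replay layer.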
5 Empirical Evaluation
In order to empirically evaluate the overall quality and flexibility of ARR, we evaluate its performance on three commonly used continual learning benchmarks for computer vision classification tasks: CIFAR-100 (Section 5.1), CORe50 (Section 5.2) and ImageNet-1000 (Section 5.3). Then, we provide a more in-depth analysis of the impact of the latent replay layer selection (Section 5.4) and of the memory size in terms of memorized activation volumes (Section 5.5).
5.1 CIFAR-100
CIFAR-100 [18] is a well-known and widely used dataset for small (32 × 32) natural image classification. It includes 100 classes containing 600 images each (500 training + 100 test). The default classification benchmark can be translated into a Class-Incremental scenario (denoted as iCIFAR-100 by [38]) by splitting the 100 classes into groups. In this paper, we consider groups of 10 classes, thus obtaining 10 incremental experiences.
Figure 3: Accuracy on iCIFAR-100 with 10 experiences (10 classes per experience). Results are averaged over 10 runs: for all the strategies, hyperparameters have been tuned on run 1 and kept fixed in the other runs. The experiment on the right, consistently with the CORe50 test protocol, considers a fixed test set including all the 100 classes, while on the left we include in the test set only the classes encountered so far (analogously to the results reported in [38]). Colored areas represent the standard deviation of each curve. Better viewed in color. [30]
The CNN model used for this experiment is the same used by [44] for the experiments on CIFAR-10/100 Split [30]. It consists of 4 convolutional + 2 fully connected layers; details are available in Appendix A of [44]. The model was pre-trained on CIFAR-10 [18]. Figure 3 compares the accuracy of the different approaches on iCIFAR-100. The results suggest that:

• Unlike the Naïve approach, Learning without Forgetting (LWF) [21] and Elastic Weight Consolidation (EWC) provide some robustness against forgetting, even if in this incremental scenario their performance is not satisfactory. SI, when used in isolation, is quite unstable and performs worse than LWF and EWC.

• The accuracy improvement of CWR+ over CWR is very small here because the experiences are balanced (so weight normalization is not required) and the CNN initialization for the last-level weights was already very close to 0 (we used the authors’ default setting of a Gaussian with std = 0.005).

• ARR (λ=4.0e5) consistently outperforms all the other approaches.

It is worth noting that both the experiments reported in Figure 3 (i.e., an expanding (left) and a fixed (right) test set) lead to the same conclusions in terms of relative ranking among the approaches. However, we believe that a fixed test set better reveals the incremental learning trend and its peculiarities (saturation, forgetting, etc.) because the classification complexity (which is proportional to the number of classes) remains constant across the experiences. For example, in the right graph it can be noted that the learning capacities of SI, EWC and LWF tend to saturate after 6-7 experiences while CWR, CWR+ and ARR continue to grow; the same information is not evident on the left because of the underlying negative trend due to the increasing problem complexity.
[Figure 3 plots: accuracy vs. encountered batches for the Growing Test Set (left) and the Fixed Test Set (right); curves: Cumulative, Naive, CWR, CWR+, LWF, EWC, SI, ARR.]

Figure 4: Accuracy results on the CORe50 NICv2 – 391 benchmark of ARR(α=pool6), ARR(λ=0.0003), DSLDA, iCaRL, ARR(RMsize=1500, α=conv5_4), ARR(RMsize=1500, α=pool6). Results are averaged across 10 runs in which the order of the experiences is randomly shuffled. Colored areas indicate the standard deviation of each curve. As an exception, iCaRL was trained only on a single run given its extensive run time (∼14 days).
Finally, note that the absolute accuracy on iCIFAR-100 cannot be directly compared with [38] because the CNN model used in [38] is a ResNet-32, which is much more accurate than the model used here: on the full training set, our model achieves about 51% accuracy, while ResNet-32 reaches about 68.1%.
5.2 CORe50
While the accuracy improvement of the proposed approach with respect to state-of-the-art rehearsal-free techniques has already been discussed in the previous section, a further comparison with other state-of-the-art continual learning techniques on CORe50 may be beneficial for better appreciating its practical impact and advantages in real-world continual learning scenarios and longer sequences of experiences. In particular, while ARR and ARR(α=pool6) have already been proven to be substantially better than LWF and EWC on the NICv2 – 391 benchmark [26], a comparison with iCaRL [39], one of the best-known rehearsal-based techniques, is worth considering.
Unfortunately, iCaRL was conceived for Class-Incremental scenarios, and its porting to Class-Incremental with Repetition (whose experiences also include examples of known classes) is not trivial. To avoid subjective modifications, the authors of [26] started from the code shared by its original authors and emulated a Class-Incremental with Repetition setting by: (i) always creating new virtual classes from the examples in the incoming experiences; (ii) fusing virtual classes together when evaluating accuracies. For example, suppose we encounter 300 examples of class 5 in experience 2 and another 300 examples of the same class in experience 7; while two virtual classes are created by iCaRL during training, when evaluating accuracy both classes point to the real class 5. This iCaRL implementation, with an external memory of 8,000 examples (much more than the 1,500 used by the proposed latent replay, but in line with the settings proposed in the original paper [38]), was run on NICv2 – 391, but we were not able to obtain satisfactory results. In Figure 4 we report the iCaRL accuracy over time and compare it with ARR(RMsize=1500, α=conv5_4/dw), ARR(RMsize=1500, α=pool6) as well as the top three performing rehearsal-free strategies introduced before: ARR(α=pool6), ARR(λ=0.0003) and DSLDA. While iCaRL exhibits better performance than LWF and EWC (as reported in [25]), it is far from DSLDA, ARR(α=pool6) and ARR(λ=0.0003). Furthermore, when the algorithm has to deal with such a large number of classes (including virtual ones) and training experiences, its efficiency becomes very low (as also reported in [30]). In Table 1 of [26] the total run time (training and testing), memory overhead and accuracy difference with respect to the cumulative upper bound are reported. We believe ARR(RMsize=1500, α=conv5_4/dw) represents a good trade-off in terms of efficiency-efficacy, with a limited computational-memory overhead and only a ∼13% accuracy gap from the cumulative upper bound. For iCaRL, the total training time was ∼14 days, compared to less than ∼1 hour for the other learning algorithms on a single GPU.

Table 1: Final accuracy on ImageNet-1000 following the benchmark of [32] with 25 experiences composed of 40 classes each. For each method, a replay memory of 20,000 examples is used (20 per class at the end of training). Results for other methods reported from [32].

Method                Final Accuracy
Fine Tuning (Naive)   27.4
EWC-E [17]            28.4
RWalk [5]             24.9
LwM [9]               17.7
LwF [20]              19.8
iCaRL [39]            30.2
EEIL [3]              25.1
LUCIR [14]            20.1
IL2M [1]              29.7
BiC [43]              32.4
ARR [30]              33.1
5.3 ImageNet-1000
In order to further validate the scalability of the ARR algorithm, the authors of [11] performed a test on a competitive benchmark such as ImageNet-1000, following the Class-Incremental benchmark proposed by [32], which is composed of 25 experiences, each containing 40 classes. The benchmark is particularly challenging due to the large number of classes (1,000), the incremental nature of the task (with 25 experiences), and the data dimensionality of 224 × 224 (as with the ImageNet protocol).
In this experiment, [11] tested ARR against both regularization-based methods [8, 17, 21] and replay-based approaches [1, 3, 4, 14, 39, 43]. They used the same classifier (ResNet-18) and the same memory size for all the tested methods (20,000 examples, 20 per class); for the regularization-based approaches, replay is added as an additional mechanism.
For ARR, they trained the model with an SGD optimizer. For the first experience, the algorithm was tuned with an aggressive learning rate of 0.1, with momentum of 0.9 and weight decay of 10−4. Then, the initial learning rate was multiplied by 0.1 every 15 epochs. The model was trained for a total of 45 epochs, using a batch size of 128. For all the subsequent experiences, SGD was used with a learning rate of 5 · 10−3 for the feature extractor’s parameters φ and 5 · 10−2 for the classifier’s parameters ψ. The model was trained for 32 epochs for each experience, employing a learning rate scheduler that decreases the learning rate as the number of experiences progresses. This was done to protect old knowledge from new knowledge when the former is more abundant than the latter. As in the first experience, the batch size was set to 128, composed of 92 examples from the current experience and 36 randomly sampled (without replacement) from the replay memory.
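The mixed mini-batch used here (92 current + 36 replay examples out of 128) follows the same proportion logic as steps 6-7 of Algorithm 1; a sketch of that computation, where the rounding to an integer is our assumption:

```python
def minibatch_split(d_size, rm_size, mb_size, first_experience=False):
    """Steps 6-7 of Algorithm 1: how many examples of each mini-batch come
    from the current experience (mbe) vs. the replay memory (mbr), so that
    current and replayed data are sampled in proportion to their pool sizes."""
    if first_experience:
        return mb_size, 0  # the memory is still empty on e1
    mbe = round(d_size / ((d_size + rm_size) / mb_size))
    return mbe, mb_size - mbe
```

For instance, a current experience of 300 examples with a memory of 100 and mini-batches of 32 yields 24 current and 8 replay examples per mini-batch.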
The results are shown in Table 1. Replay-based methods exhibit the best performance, with iCaRL and BiC exceeding a final accuracy of 30%. ARR(RMsize=1500, α=pool6) outperforms all the baselines (33.1%), achieving state-of-the-art performance on this challenging benchmark and proving the advantage of flexible hybrid continual learning approaches. However, considering that the top-1 ImageNet accuracy of a ResNet-18 trained on the entire dataset is 69.76%3, even for the best methods the accuracy gap in the continual learning setup is very large. This suggests that continual learning, especially in complex scenarios with a large number of classes and high-dimensional data, is far from solved, and further research should be devoted to this field.
3Accuracy taken from the torchvision official page: https://pytorch.org/vision/stable/models.html
5.4 Replay Layer Selection
Figure 5: ARR with latent replay (RMsize=1500) for different choices of the latent replay layer. Setting the replay layer at the “images” layer corresponds to native rehearsal. The saturation effect which characterizes the last training experiences is due to the data distribution in NICv2 – 391 (see [25]): in particular, the lack of new instances for some classes (which have already introduced all their data) slows down the accuracy trend and intensifies the effect of activation aging.
In Figure 5 we report the accuracy of ARR(RMsize=1500, α=…) for different choices of the rehearsal layer α on the CORe50 experiment. As expected, when the replay layer is pushed down, the corresponding accuracy increases, proving that continual tuning of the representation layers is important. However, after conv5_4/dw there is a sort of saturation and the model accuracy no longer improves. The residual gap (∼4%) with respect to native rehearsal is not due to the weight freezing of the lower part of the network but to the aging effect introduced above. This can be simply proved by implementing an “intermediate” approach that always feeds the replay patterns from the input and stops the backward pass at conv5_4: such an intermediate approach achieves an accuracy at the end of training very close to native rehearsal (from raw data). We believe that the accuracy drop due to the aging effect can be further reduced with a better tuning of the Batch Re-Normalization (BRN) hyper-parameters and/or with the introduction of a scheduling policy making the global moment moving windows wider as the continual learning progresses (i.e., more plasticity in the early stages and more stability later); however, such fine optimization is application-specific and beyond the scope of this study.
To better evaluate latent replay with respect to native rehearsal, we report in Table 2 the relevant dimensions: (i) computation refers to the percentage cost, in terms of ops, of a partial forward pass (from the latent replay layer on) relative to a full forward step from the input layer; (ii) pattern size is the dimensionality of the pattern to be stored in the external memory (considering that we are using a MobileNetV1 with 128×128×3 inputs to match the CORe50 image size); (iii) accuracy and Δ accuracy quantify the absolute accuracy at the end of training and the gap with respect to native rehearsal, respectively. For example, conv5_4/dw exhibits an interesting trade-off because its computation is about 32% of the native rehearsal one, the storage is reduced to 66% (more on this point in subsection 5.5) and the accuracy drop is mild (5.07%). ARR(RMsize=1500, α=pool6) has a negligible computational cost (0.027%) with respect to native rehearsal and still provides an accuracy improvement of ∼4% w.r.t. the non-rehearsal case (∼60% vs ∼56%, as can be seen from Figure 5 and Figure 6, respectively).
5.5 Replay Memory Size Selection
|
| 610 |
+
To understand the influence of the external memory size we repeated the experiment with different
|
| 611 |
+
RMsize values: 500, 1,000, 1,500, 3,000. The results are shown in Figure 6: it is worth noting that
|
| 612 |
+
increasing the rehearsal memory leads to better accuracy for all the algorithms, but the gap between
|
| 613 |
+
1500 and 3000 is not large and we believe 1500 is a good trade-off for this dataset. ARR(RMsize=
|
| 614 |
+
10
|
| 615 |
+
|
| 616 |
+
[Figure 5 plot area: accuracy (%) curves over the encountered experiences for ARR with latent replay at different layers — legend: ARR(α=pool6), ARR(α=conv6/dw), ARR(α=conv5_6/dw), ARR(α=conv5_5/dw), ARR(α=conv5_4/dw), ARR(α=conv5_3/dw), ARR(α=conv5_2/dw), ARR(α=conv5_1/dw), ARR(α=images).]

Table 2: Computation, storage, and accuracy trade-off with Latent Replay at different layers of a MobileNetV1 ConvNet trained continually on NICv2 – 391 with RMsize=1500.

Layer       | Computation % vs Native Rehearsal | Example Size | Final Accuracy % | ∆ Accuracy % vs Native Rehearsal
Images      | 100.00%                           | 49152        | 77.30%           |   0.00%
conv5_1/dw  |  59.261%                          | 32768        | 72.82%           |  -4.49%
conv5_2/dw  |  50.101%                          | 32768        | 73.21%           |  -4.10%
conv5_3/dw  |  40.941%                          | 32768        | 73.22%           |  -4.09%
conv5_4/dw  |  31.781%                          | 32768        | 72.24%           |  -5.07%
conv5_5/dw  |  22.621%                          | 32768        | 68.59%           |  -8.71%
conv5_6/dw  |  13.592%                          |  8192        | 65.24%           | -12.06%
conv6/dw    |   9.012%                          | 16384        | 59.89%           | -17.42%
pool6       |   0.027%                          |  1024        | 59.76%           | -17.55%
. . . ) works slightly better than ARR(RMsize=. . . , λ=0.003) when a sufficient number of rehearsal examples is provided but, as expected, accuracy is worse with light (i.e., RMsize=500) or no rehearsal.

It is worth noting that the best ARR configuration in Figure 6, i.e. ARR(RMsize=3000), is only 5% worse than the cumulative upper bound; a better parametrization and exploitation of the rehearsal memory could further reduce this gap.
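Throughout the stream the replay memory is kept at a fixed RMsize. A minimal sketch of one balanced-replacement policy (illustrative only — not necessarily the exact sampling schedule used in our experiments) inserts an equal share of patterns from each new experience and, once the memory is full, overwrites randomly chosen slots:

```python
import random

def update_replay_memory(rm, new_patterns, n_seen_experiences, rm_size=1500):
    """Insert ~rm_size / n_seen_experiences patterns from the current
    experience, replacing random slots once the memory is full."""
    h = rm_size // n_seen_experiences            # per-experience budget
    sampled = random.sample(new_patterns, min(h, len(new_patterns)))
    for p in sampled:
        if len(rm) < rm_size:                    # memory not yet full: append
            rm.append(p)
        else:                                    # full: overwrite a random slot
            rm[random.randrange(rm_size)] = p
    return rm

rm = []
for i, experience in enumerate([list(range(300)) for _ in range(5)], start=1):
    rm = update_replay_memory(rm, experience, i)
print(len(rm))  # bounded by rm_size regardless of stream length
```

With this policy earlier experiences progressively shrink to an equal share of the memory, which is one way to keep the rehearsal set approximately balanced over the stream.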
Figure 6: Comparison of main ARR configurations on CORe50 NICv2 – 391 with different external memory sizes (RMsize=500, 1000, 1500 and 3000 examples).
6  ARR Implementation in Avalanche
The Architect, Regularize and Replay (ARR) method we proposed in this paper is the result of a comprehensive re-formalization of different variants and improvements proposed over the last few years, starting from [24, 30]. Original implementations of these methods (CWR, CWR+, CWR*, AR1, AR1* and AR1* with Latent Replay) exist in Caffe and PyTorch. However, given their diversity, it is quite difficult to move from one implementation to another and to apply them to settings and scenarios even slightly different from the ones on which they were originally proposed.
In order to exploit the general applicability and flexibility of the ARR method, we decided to re-implement it directly in Avalanche [27]. Avalanche, an open-source (MIT licensed) end-to-end library for continual learning based on PyTorch, was devised to provide a shared and collaborative codebase for fast prototyping, training, and evaluation of continual learning algorithms.

Thanks to this portable Avalanche implementation (soon to be integrated into the next stable version of the library), ARR can be configured to reproduce the experiments presented in this paper (Fig. 7), to conform to the previously proposed strategies (e.g. AR1*, CWR*, etc.), as well as being ready to be
[Figure 6 plot area: three panels — Ext. Memory Size (CWR*), Ext. Memory Size (AR1*), Ext. Memory Size (AR1* free) — showing accuracy (%) over the encountered experiences; legend: ARR(RM=500/1k/1.5k/3k, α=pool6), ARR(RM=500/1k/1.5k/3k, λ=0.003, α=pool6), ARR(RM=500/1000/1500/3000).]
ARR(RM=3000)Figure 7: ARR implementation in Avalanche. Given a set of hyper-parameters ARR can be instantiated and
|
| 820 |
+
properly configured to be tested on a large set of benchmarks already available in Avalanche.
tested on a large set of benchmarks already available in Avalanche or that can be easily added to the library.
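The resulting usage pattern mirrors the standard Avalanche train/eval loop over an experience stream. The sketch below uses simplified stand-in classes (not the actual Avalanche or ARR API) purely to illustrate how a replay-based strategy consumes one experience at a time:

```python
# Illustrative stand-in for the Avalanche loop pattern: a strategy object
# consumes a stream of experiences sequentially. `ReplayStrategy` here is
# a toy class, not the real ARR/Avalanche implementation.
class ReplayStrategy:
    def __init__(self, mem_size=1500):
        self.mem_size = mem_size
        self.replay_memory = []
        self.seen = 0

    def train(self, experience):
        # Real code would train on the current experience mixed with
        # replayed patterns; here we only track and refresh the memory.
        self.seen += len(experience)
        self.replay_memory = (self.replay_memory + list(experience))[-self.mem_size:]

    def eval(self, test_stream):
        return {"experiences_seen": self.seen}

train_stream = [[("x", c) for c in range(10)] for _ in range(3)]
strategy = ReplayStrategy()
for experience in train_stream:
    strategy.train(experience)
print(strategy.eval([]))  # {'experiences_seen': 30}
```

The real ARR strategy follows the same contract (construct once, `train` per experience, `eval` on the test stream), which is what makes it pluggable into any benchmark exposed by the library.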
7  Conclusion
In this paper we showed that ARR is a flexible, effective and efficient technique to continually learn new classes and new instances of known classes even from small and non-i.i.d. experiences. ARR, instantiated with latent replay, is able to learn efficiently and, at the same time, to achieve an accuracy not far from the cumulative upper bound (about 5% lower in some cases). The computation-storage-accuracy trade-off can be tuned according to both the target application and the available resources, so that even edge devices with no GPUs can learn continually. Moreover, ARR can be easily extended to support more sophisticated replay memory management strategies (also to counter the aging effect) and can even be coupled with a generative model trained in the loop, capable of providing pseudo-activation volumes on demand as initially shown in [11].
References
[1] Eden Belouadah and Adrian Popescu. IL2M: Class incremental learning with dual memory. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 583–592, 2019.

[2] Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, and Davide Bacciu. Ex-Model: Continual learning from a stream of trained models. arXiv preprint arXiv:2112.06511, 2021.

[3] Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 233–248, 2018.

[4] Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, and Philip H. S. Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV), pages 532–547, 2018.

[5] Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, and Philip H. S. Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In ECCV, 2018.

[6] Andrea Cossu, Gabriele Graffieti, Lorenzo Pellegrini, Davide Maltoni, Davide Bacciu, Antonio Carta, and Vincenzo Lomonaco. Is class-incremental enough for continual learning? Frontiers in Artificial Intelligence, 5, 2022.
    # Code listing from Figure 7: the ARR training/evaluation loop in Avalanche.
    strategy = ARR(model, optimizer, criterion, mem_size, lambd, alpha)
    for experience in benchmark.train_stream:
        strategy.train(experience)
    strategy.eval(benchmark.test_stream)

[7] Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, and Davide Bacciu. Continual pre-training mitigates forgetting in language and vision. arXiv preprint arXiv:2205.09357, 2022.
[8] Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. Learning without memorizing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5138–5146, 2019.

[9] Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. Learning without memorizing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

[10] Natalia Díaz-Rodríguez, Vincenzo Lomonaco, David Filliat, and Davide Maltoni. Don't forget, there is more than forgetting: new metrics for continual learning. In Workshop on Continual Learning, NeurIPS 2018 (Neural Information Processing Systems), Montreal, Canada, December 2018.

[11] Gabriele Graffieti, Davide Maltoni, Lorenzo Pellegrini, and Vincenzo Lomonaco. Generative negative replay for continual learning. arXiv preprint arXiv:2204.05842, 2022.

[12] T. L. Hayes, R. Kemker, N. D. Cahill, and C. Kanan. New metrics and experimental paradigms for continual learning. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2112–21123, June 2018.

[13] Tyler L. Hayes, Nathan D. Cahill, and Christopher Kanan. Memory efficient experience replay for streaming learning. In 2019 International Conference on Robotics and Automation (ICRA), pages 9769–9776, 2019.

[14] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 831–839, 2019.

[15] Ronald Kemker and Christopher Kanan. FearNet: Brain-inspired model for incremental learning. In International Conference on Learning Representations, 2018.

[16] Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L. Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In AAAI, 2017.

[17] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.

[18] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

[19] Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat, and Natalia Díaz-Rodríguez. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Information Fusion, December 2019.

[20] Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. CoRR, abs/1904.00310, 2019.

[21] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

[22] Vincenzo Lomonaco. Continual Learning with Deep Architectures. PhD thesis, University of Bologna, 2019.

[23] Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello, and Davide Maltoni. Continual reinforcement learning in 3D non-stationary environments. arXiv preprint arXiv:1905.10112, 2019.
[24] Vincenzo Lomonaco and Davide Maltoni. CORe50: a new dataset and benchmark for continuous object recognition. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning, volume 78 of Proceedings of Machine Learning Research, pages 17–26. PMLR, 13–15 Nov 2017.

[25] Vincenzo Lomonaco, Davide Maltoni, and Lorenzo Pellegrini. Fine-grained continual learning. arXiv preprint arXiv:1907.03799, pages 1–14, 2019.

[26] Vincenzo Lomonaco, Davide Maltoni, and Lorenzo Pellegrini. Rehearsal-free continual learning over small non-iid batches. In CVPR Workshops, volume 1, page 3, 2020.

[27] Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Antonio Carta, Gabriele Graffieti, Tyler L. Hayes, Matthias De Lange, Marc Masana, Jary Pomponi, Gido M. Van de Ven, et al. Avalanche: an end-to-end library for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3600–3610, 2021.

[28] Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, et al. CVPR 2020 continual learning in computer vision competition: Approaches, results, current challenges and future directions. Artificial Intelligence, 303:103635, 2022.

[29] David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6467–6476. Curran Associates, Inc., 2017.

[30] Davide Maltoni and Vincenzo Lomonaco. Continuous learning in single-incremental-task scenarios. Neural Networks, 116:56–73, August 2019.

[31] Davide Maltoni and Vincenzo Lomonaco. Continuous learning in single-incremental-task scenarios. Neural Networks, 116:56–73, 2019.

[32] Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, and Joost van de Weijer. Class-incremental learning: survey and performance evaluation on image classification. arXiv preprint arXiv:2010.15277, 2020.

[33] Gabriele Merlin, Vincenzo Lomonaco, Andrea Cossu, Antonio Carta, and Davide Bacciu. Practical recommendations for replay-based continual learning methods. arXiv preprint arXiv:2203.10317, 2022.

[34] German I. Parisi and Vincenzo Lomonaco. Online continual learning on sequences. In Recent Trends in Learning From Data, pages 197–221. Springer, 2020.

[35] German I. Parisi, Jun Tani, Cornelius Weber, and Stefan Wermter. Lifelong learning of spatiotemporal representations with dual-memory recurrent self-organization. Frontiers in Neurorobotics, 12:78, 2018.

[36] Lorenzo Pellegrini, Gabriele Graffieti, Vincenzo Lomonaco, and Davide Maltoni. Latent replay for real-time continual learning. arXiv preprint arXiv:1912.01100, 2019.

[37] Leonardo Ravaglia, Manuele Rusci, Alessandro Capotondi, Francesco Conti, Lorenzo Pellegrini, Vincenzo Lomonaco, Davide Maltoni, and Luca Benini. Memory-latency-accuracy trade-offs for continual learning on a RISC-V extreme-edge node. In 2020 IEEE Workshop on Signal Processing Systems (SiPS), pages 1–6. IEEE, 2020.

[38] S. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5533–5542, July 2017.

[39] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, 2017.
[40] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell. Progressive neural networks. arXiv e-prints, June 2016.

[41] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2990–2999, 2017.

[42] Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost van de Weijer, and Bogdan Raducanu. Memory replay GANs: Learning to generate new categories without forgetting. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 5962–5972. Curran Associates, Inc., 2018.

[43] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 374–382, 2019.

[44] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 3987–3995, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.