Add files using upload-large-folder tool
- -tE3T4oBgHgl3EQfSwk7/content/tmp_files/2301.04435v1.pdf.txt +2547 -0
- -tE3T4oBgHgl3EQfSwk7/content/tmp_files/load_file.txt +0 -0
- .gitattributes +66 -0
- 09E4T4oBgHgl3EQfzA3P/vector_store/index.faiss +3 -0
- 19E2T4oBgHgl3EQfiwfE/content/tmp_files/2301.03962v1.pdf.txt +0 -0
- 19E2T4oBgHgl3EQfiwfE/content/tmp_files/load_file.txt +0 -0
- 1NAzT4oBgHgl3EQfRftD/content/tmp_files/2301.01216v1.pdf.txt +1690 -0
- 1NAzT4oBgHgl3EQfRftD/content/tmp_files/load_file.txt +0 -0
- 29AzT4oBgHgl3EQfuf0F/content/2301.01690v1.pdf +3 -0
- 29AzT4oBgHgl3EQfuf0F/vector_store/index.faiss +3 -0
- 29AzT4oBgHgl3EQfuf0F/vector_store/index.pkl +3 -0
- 29FQT4oBgHgl3EQfGTVE/vector_store/index.pkl +3 -0
- 3tAzT4oBgHgl3EQfD_po/content/tmp_files/2301.00985v1.pdf.txt +2547 -0
- 3tAzT4oBgHgl3EQfD_po/content/tmp_files/load_file.txt +0 -0
- 49AyT4oBgHgl3EQfpPiQ/content/2301.00522v1.pdf +3 -0
- 49AyT4oBgHgl3EQfpPiQ/vector_store/index.faiss +3 -0
- 49AyT4oBgHgl3EQfpPiQ/vector_store/index.pkl +3 -0
- 4dAyT4oBgHgl3EQf2Pkf/content/tmp_files/2301.00746v1.pdf.txt +1615 -0
- 4dAyT4oBgHgl3EQf2Pkf/content/tmp_files/load_file.txt +0 -0
- 79AyT4oBgHgl3EQf2_lP/vector_store/index.faiss +3 -0
- 7dE5T4oBgHgl3EQfQA7X/content/2301.05510v1.pdf +3 -0
- 7dE5T4oBgHgl3EQfQA7X/vector_store/index.faiss +3 -0
- 8NE1T4oBgHgl3EQfngRF/content/2301.03309v1.pdf +3 -0
- 8dE0T4oBgHgl3EQffgB5/content/2301.02405v1.pdf +3 -0
- 8dE0T4oBgHgl3EQffgB5/vector_store/index.faiss +3 -0
- 8dE0T4oBgHgl3EQffgB5/vector_store/index.pkl +3 -0
- AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf +0 -0
- AdAzT4oBgHgl3EQfTPx3/content/tmp_files/2301.01246v1.pdf.txt +694 -0
- AdAzT4oBgHgl3EQfTPx3/content/tmp_files/load_file.txt +259 -0
- C9AyT4oBgHgl3EQf4fqw/content/tmp_files/2301.00788v1.pdf.txt +677 -0
- C9AyT4oBgHgl3EQf4fqw/content/tmp_files/load_file.txt +0 -0
- CdE0T4oBgHgl3EQfgQEc/content/2301.02414v1.pdf +3 -0
- CdE0T4oBgHgl3EQfgQEc/vector_store/index.faiss +3 -0
- CdE0T4oBgHgl3EQfgQEc/vector_store/index.pkl +3 -0
- D9E4T4oBgHgl3EQfGQye/content/tmp_files/2301.04893v1.pdf.txt +0 -0
- D9E4T4oBgHgl3EQfGQye/content/tmp_files/load_file.txt +0 -0
- D9E5T4oBgHgl3EQfUg90/content/2301.05544v1.pdf +3 -0
- D9E5T4oBgHgl3EQfUg90/vector_store/index.faiss +3 -0
- D9E5T4oBgHgl3EQfUg90/vector_store/index.pkl +3 -0
- F9AzT4oBgHgl3EQfHPuf/content/tmp_files/2301.01042v1.pdf.txt +2175 -0
- F9AzT4oBgHgl3EQfHPuf/content/tmp_files/load_file.txt +0 -0
- F9E1T4oBgHgl3EQfFAMn/vector_store/index.faiss +3 -0
- F9E1T4oBgHgl3EQfFAMn/vector_store/index.pkl +3 -0
- FNAzT4oBgHgl3EQfw_7Q/content/tmp_files/2301.01732v1.pdf.txt +901 -0
- FNAzT4oBgHgl3EQfw_7Q/content/tmp_files/load_file.txt +0 -0
- FNE3T4oBgHgl3EQfVgrM/content/2301.04461v1.pdf +3 -0
- I9E3T4oBgHgl3EQfXAr0/content/2301.04476v1.pdf +3 -0
- I9E3T4oBgHgl3EQfXAr0/vector_store/index.faiss +3 -0
- I9E3T4oBgHgl3EQfXAr0/vector_store/index.pkl +3 -0
- INE5T4oBgHgl3EQfWw9u/content/2301.05561v1.pdf +3 -0
-tE3T4oBgHgl3EQfSwk7/content/tmp_files/2301.04435v1.pdf.txt
ADDED
@@ -0,0 +1,2547 @@
arXiv:2301.04435v1 [hep-th] 11 Jan 2023

Holographic entanglement entropy in $T\bar{T}$-deformed AdS$_3$

Miao He$^{a,b}$, Yuan Sun$^{c}$

$^a$School of Physics, Southeast University, Nanjing 211189, China
$^b$Shing-Tung Yau Center, Southeast University, Nanjing 210096, China
$^c$Center for Theoretical Physics and College of Physics, Jilin University, Changchun 130012, People's Republic of China

E-mail: hemiao@seu.edu.cn, sunyuan@jlu.edu.cn

Abstract

In this work, we study the holographic entanglement entropy in AdS$_3$ gravity with a certain mixed boundary condition, which turns out to correspond to $T\bar{T}$-deformed 2D CFTs. By employing the Chern-Simons formalism and the Wilson line technique, the exact holographic entanglement entropy in the $T\bar{T}$-deformed BTZ black hole is obtained. We also get the same formula by calculating the RT surface. The holographic entanglement entropy agrees with the perturbative result derived from both $T\bar{T}$-deformed CFTs and cutoff AdS$_3$. Moreover, our result also shows that the deformed holographic entanglement entropy behaves like the zero temperature CFT one for large deformation parameter. Based on this result, the two-interval entanglement entropy and the phase transition between its disconnected and connected phases are also studied.

Contents

1 Introduction
2 Wilson lines and entanglement entropy in AdS$_3$
  2.1 Wilson lines in AdS$_3$ gravity
  2.2 Equivalence to the geodesic equation
  2.3 Holographic entanglement entropy
    2.3.1 Poincaré AdS$_3$
    2.3.2 BTZ black hole
  2.4 Loops and thermal entropy
3 Holographic entanglement entropy in $T\bar{T}$-deformed AdS$_3$
  3.1 $T\bar{T}$-deformed AdS$_3$ geometry
  3.2 $T\bar{T}$-deformed holographic entanglement entropy
  3.3 Thermal entropy
  3.4 Two intervals entanglement entropy
4 Geodesic line method
5 Conclusion and discussion
A Conventions
B Wilson line defects
1 Introduction

The AdS/CFT correspondence gives a geometric interpretation to conformal field theory. This correspondence allows us to study quantum gravity from the conformal field theory, and it has achieved great success in 3D quantum gravity. It is significant to generalize the AdS/CFT correspondence by deforming the conformal field theory and investigating its geometric interpretation. One such deformed theory, the $T\bar{T}$ deformation, was proposed and its holographic descriptions were explored in [1–4]. It is interesting to establish the holographic dictionary under the $T\bar{T}$ deformation, and the holographic technique in turn provides a gravitational method to study the $T\bar{T}$-deformed CFT.

The $T\bar{T}$ deformation is defined through the $T\bar{T}$ flow equation

$$\frac{\partial S_{T\bar{T}}}{\partial\mu} = \int d^2x\,\mathcal{O}_{T\bar{T}}, \qquad \mathcal{O}_{T\bar{T}} \equiv T^{ij}T_{ij} + T^2,$$

where $T_{ij}$ is the stress tensor of the deformed theory. This flow equation generates a family of integrable field theories if the original theory is integrable [1, 2]. The factorization of the $T\bar{T}$ operator leads to the Burgers equation for the deformed spectrum [5], so that the spectrum of the deformed theory can be solved exactly. The partition function of the deformed theory can be obtained by various methods; it turns out that the deformed partition function satisfies a differential equation or an integral transformation of the original one [6–8]. The deformed partition function is still modular invariant [9]. Based on the $T\bar{T}$ flow equation, the Lagrangian and Hamiltonian forms were also studied [10, 11]. There is also some evidence showing that the $T\bar{T}$-deformed theory is non-local [12–16]. Under this irrelevant deformation, it is difficult to study local properties such as correlation functions and entanglement entropy, observables which play an important role in quantum field theory. Using perturbative methods, the correlation functions and entanglement entropy have been obtained [21–31], and some non-perturbative results on the correlation function and entanglement were explored in [17–20]. However, computing the correlation function and entanglement entropy in the $T\bar{T}$-deformed theory remains an open question. For a pedagogical review see [32].

According to the AdS/CFT correspondence, the deformed theory can be investigated using a gravitational approach. There are two points of view on $T\bar{T}$-deformed CFTs from gravity. The first is that $T\bar{T}$-deformed CFTs are dual to AdS$_3$ with a finite radial cutoff [3, 4]. In this picture, the quasi-local energy of the cutoff region matches the spectrum of the deformed theory, and the $T\bar{T}$ flow equation coincides with the Hamilton-Jacobi equation governing the radial evolution of the classical gravity action in AdS$_3$. Many holographic features of the $T\bar{T}$-deformed CFT have been explored from the cutoff perspective [33–40]. The other holographic perspective is AdS$_3$ gravity with a certain mixed boundary condition [41]. The boundary condition was derived from the flow equation and the variational principle. It turned out that the solution of the metric flow equation is related to the higher order Fefferman-Graham expansion, which leads to the mixed boundary condition; the mixed boundary condition coincides with the induced metric on the finite radial cutoff. The AdS$_3$ solutions satisfying the mixed boundary condition were also constructed through a field-dependent coordinate transformation [41]. The dynamical coordinate transformation approach to $T\bar{T}$ was also found in field-theoretic results [42, 43]. The deformed spectrum can likewise be obtained from the deformed AdS$_3$. The mixed boundary condition allows boundary graviton degrees of freedom, which turn out to form a $T\bar{T}$-deformed theory [44–47]. The mixed boundary condition thus provides another approach to studying the $T\bar{T}$ deformation, including the entanglement entropy.

In this paper, we investigate the entanglement entropy in $T\bar{T}$-deformed CFTs from holography. In the cutoff perspective, the holographic entanglement entropy was obtained by calculating the length of the cutoff geodesic, and the results match perturbative CFT results [22, 24]. The entanglement entropy under the $T\bar{T}$ deformation was also studied on both the field theory side and the holographic side in recent works [48–52]. We prefer to use the mixed boundary condition perspective to study holographic entanglement entropy. Since the deformed geometry is still AdS$_3$, we will work in the $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ gauged Chern-Simons formalism of AdS$_3$ [53]. The Chern-Simons formalism has been used to study the $T\bar{T}$ deformation in the literature [44–46, 54–56]. In the gauge theory form, the holographic entanglement entropy is encoded in a Wilson line of the Chern-Simons theory [57]. Generally, Wilson lines depend on the path and on the representation of the gauge group. If we choose an appropriate representation of $sl(2,\mathbb{R})$, the trace over the representation can be formulated as the path integral of an $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ invariant auxiliary theory, whose on-shell action is equivalent to the length of a geodesic in AdS$_3$. In addition, the Wilson line is a probe in the gauge theory, just like a point particle in a curved background. The Wilson line back-reacts on the bulk geometry, and the resulting geometry turns out to have a conical defect at the branch point, which exactly generates an $n$-sheeted manifold [57, 58]. Therefore, the Wilson line back-reaction corresponds to the replica trick applied at the ending points of the Wilson line on the boundary. These results tell us that the Wilson line is related to the entanglement entropy through

$$S_{EE} = -\log\left(W_{\mathcal{R}}(C)\right),$$

where the ending points of the Wilson line correspond to the interval on the boundary. The thermal entropy likewise turns out to correspond to a Wilson loop. We use this technique for the deformed AdS$_3$ geometry. The single interval holographic entanglement entropy is calculated exactly, and it reproduces the perturbative results obtained in other literature [22, 24, 51]. We also consider the two-interval entanglement entropy under the $T\bar{T}$ deformation, which exhibits a certain phase transition. Moreover, the holographic entanglement entropy of $T\bar{T}$-deformed AdS$_3$ in the non-perturbative region is also studied; the results show that the entanglement entropy behaves like the zero temperature CFT one for large deformation parameter.

The paper is organized as follows. In section 2, we give an overview of the gravitational Wilson line approach to the holographic entanglement entropy. In section 3, we introduce the deformed AdS$_3$ under $T\bar{T}$, parameterized by the deformed spectrum; the holographic entanglement entropy is obtained using the Wilson line approach, and we also consider the two-interval entanglement entropy and its phase transition. The same result is derived by calculating the RT surface in the deformed AdS$_3$ in section 4. We summarize our results and discuss them in section 5. The appendices contain our conventions and the Wilson line defects.
2 Wilson lines and entanglement entropy in AdS$_3$

This section reviews the use of the Wilson line technique to calculate the holographic entanglement entropy, based on [57]. By rewriting AdS$_3$ gravity in Chern-Simons form, a Wilson line in an infinite-dimensional representation of the bulk gauge group is related to a geodesic in the bulk. According to the Ryu-Takayanagi proposal [59, 60], the holographic entanglement entropy, i.e. the RT surface, can then be obtained through the Wilson line approach.
2.1 Wilson lines in AdS$_3$ gravity

It is well known that 3D general relativity has no local degrees of freedom; it is purely topological and can be formulated as a Chern-Simons theory [53]. In the case of AdS$_3$ gravity, the relevant Chern-Simons gauge group is $SO(2,2)\simeq SL(2,\mathbb{R})\times SL(2,\mathbb{R})$, so the Einstein-Hilbert action can be written as

$$S_{EH}[e,\omega] = I_{CS}[A] - I_{CS}[\bar{A}], \qquad (2.1)$$

where the Chern-Simons action is

$$I_{CS}[A] = \frac{k}{4\pi}\int_M \mathrm{Tr}\left(A\wedge dA + \frac{2}{3}A\wedge A\wedge A\right), \qquad k = \frac{1}{4G}. \qquad (2.2)$$

The gauge fields $A$ and $\bar{A}$ are valued in $sl(2,\mathbb{R})$ and are linear combinations of the gravitational vielbein and spin connection,

$$A = (\omega^a + e^a)L_a, \qquad \bar{A} = (\omega^a - e^a)L_a. \qquad (2.3)$$

Here the $L_a$ are $sl(2,\mathbb{R})$ generators; see Appendix A for our conventions. Variation of the action leads to the equations of motion

$$F \equiv dA + A\wedge A = 0, \qquad \bar{F} \equiv d\bar{A} + \bar{A}\wedge\bar{A} = 0, \qquad (2.4)$$

which are equivalent to the first order gravitational field equation and the torsion free equation. The AdS$_3$ metric can be recovered from the gauge fields via

$$g_{ij} = \frac{1}{2}\mathrm{Tr}\left((A_i - \bar{A}_i)(A_j - \bar{A}_j)\right). \qquad (2.5)$$

As a consequence, AdS$_3$ gravity is formulated as a Chern-Simons gauge theory.

Using the Chern-Simons form, we can introduce gravitational Wilson lines in AdS$_3$ gravity,

$$W_{\mathcal{R}}(C) = \mathrm{Tr}_{\mathcal{R}}\left(\mathcal{P}\exp\int_C A\right), \qquad (2.6)$$

where $\mathcal{R}$ denotes a representation of $sl(2,\mathbb{R})$, and $C$ is a curve on $M$ with two ending points living on the boundary of $M$. If the path $C$ is closed, it gives the Wilson loop, which is invariant under the gauge transformation

$$A \to A' = \Lambda^{-1}(d + A)\Lambda. \qquad (2.7)$$

We can use Wilson lines, instead of a massive particle, to probe the bulk geometry. A massive particle moving in the bulk is characterized by its mass $m$ and spin $s$; these parameters contribute to the back-reaction on the bulk geometry, and the trajectory of the particle is a geodesic. To use the Wilson line as a probe of the bulk geometry, we have to use infinite-dimensional representations of $sl(2,\mathbb{R})$, characterized by $(h,\bar{h})$, so that the mass and spin of the particle are encoded in the representation through the relations $m = h + \bar{h}$ and $s = h - \bar{h}$. For the representations of $sl(2,\mathbb{R})$ see Appendix A.

Note that infinite-dimensional representations of symmetry algebras can be regarded as the Hilbert spaces of quantum mechanical systems. The trace over all the states in the representation $\mathcal{R}$ can therefore be formulated as the path integral of an auxiliary quantum mechanical system, and the Wilson line can be written as

$$W_{\mathcal{R}}(C) = \int \mathcal{D}U \exp\left[-S(U;A)_C\right], \qquad (2.8)$$

where $S(U;A)_C$ is the action of the auxiliary quantum mechanical system living on the Wilson line. The action should have the global symmetry group $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$, so that the Hilbert space of the system after quantization is precisely the representation of $sl(2,\mathbb{R})$.

For the free theory (without gauge fields), an appropriate system is a particle moving on the group manifold [61], whose action reads

$$S(U,P)_{\mathrm{free}} = \int_C ds\left[\mathrm{Tr}\left(P U^{-1}\frac{dU}{ds}\right) + \lambda(s)\left(\mathrm{Tr}\left(P^2\right) - C\right)\right], \qquad (2.9)$$

where $P$ lives in the Lie algebra $sl(2,\mathbb{R})$ and $U$ lives in the Lie group $SL(2,\mathbb{R})$. The trace in this action means contraction with the Cartan-Killing metric. The equations of motion for the free theory are

$$U^{-1}\frac{dU}{ds} + 2\lambda P = 0, \qquad (2.10)$$
$$\frac{dP}{ds} = 0, \qquad (2.11)$$
$$\mathrm{Tr}\,P^2 = C. \qquad (2.12)$$

This action has an $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ global symmetry: under the global transformation

$$U(s) \to LU(s)R, \qquad P(s) \to R^{-1}P(s)R, \qquad L,R \in SL(2,\mathbb{R}), \qquad (2.13)$$

the action (2.9) is invariant.
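To make the invariance explicit, here is a one-line check of our own (not spelled out in the original), using only that $L$, $R$ are constant and the cyclicity of the trace:

$$U^{-1}\frac{dU}{ds} \to (LUR)^{-1}\frac{d(LUR)}{ds} = R^{-1}\left(U^{-1}\frac{dU}{ds}\right)R, \qquad \mathrm{Tr}\left(P\,U^{-1}\frac{dU}{ds}\right) \to \mathrm{Tr}\left(R^{-1}P\,U^{-1}\frac{dU}{ds}\,R\right) = \mathrm{Tr}\left(P\,U^{-1}\frac{dU}{ds}\right),$$

and similarly $\mathrm{Tr}(P^2) \to \mathrm{Tr}(R^{-1}P^2R) = \mathrm{Tr}(P^2)$, so every term in (2.9) is unchanged.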
In [57] it was shown that the system coupled to the external gauge fields $A$ and $\bar{A}$ should be

$$S(U,P;A)_C = \int_C ds\left[\mathrm{Tr}\left(P U^{-1}D_s U\right) + \lambda(s)\left(\mathrm{Tr}\left(P^2\right) - C\right)\right], \qquad (2.14)$$

where the covariant derivative is defined by

$$D_s U = \frac{d}{ds}U + A_s U - U\bar{A}_s, \qquad A_s = A_\mu\frac{dx^\mu}{ds}. \qquad (2.15)$$

The equations of motion become

$$U^{-1}D_s U + 2\lambda P = 0, \qquad (2.16)$$
$$\frac{d}{ds}P + \left[\bar{A}_s, P\right] = 0, \qquad (2.17)$$
$$\mathrm{Tr}\,P^2 = C. \qquad (2.18)$$

After introducing the covariant derivative, the global symmetry (2.13) is enhanced to a local gauge symmetry. The action (2.14) is invariant under the local gauge transformation

$$A_\mu \to L(x)\left(A_\mu + \partial_\mu\right)L^{-1}(x), \qquad \bar{A}_\mu \to R^{-1}(x)\left(\bar{A}_\mu + \partial_\mu\right)R(x), \qquad (2.19)$$
$$U(s) \to L(x^\mu(s))U(s)R(x^\mu(s)), \qquad P(s) \to R^{-1}(x^\mu(s))P(s)R(x^\mu(s)). \qquad (2.20)$$

We have to point out that the equations of motion do not change under these gauge transformations. This feature is useful for constructing solutions of the equations of motion from the free theory solutions: if the gauge fields $A$ and $\bar{A}$ are pure gauge, the solutions of equations (2.16)-(2.18) can be obtained from the free theory solution through the gauge transformations (2.19) and (2.20). We give more details in section 2.3.
2.2 Equivalence to the geodesic equation

The Wilson line probe should be equivalent to a massive particle moving in AdS$_3$. We now show that the usual geodesic equation with respect to the metric arises along the Wilson line path, which we denote by $x^\mu(s)$. Using the classical equations of motion (2.16)-(2.18), the action (2.14) can be reduced to the second order form

$$S(U;A,\bar{A})_C = \sqrt{C}\int_C ds\,\sqrt{\mathrm{Tr}\left(U^{-1}D_s U\right)^2}. \qquad (2.21)$$

In this form, the action is essentially a gauged sigma model, whose equation of motion reads

$$\frac{d}{ds}\left[\left(A^u - \bar{A}\right)_\mu\frac{dx^\mu}{ds}\right] + \left[\bar{A}_\mu, A^u_\nu\right]\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = 0, \qquad (2.22)$$

where

$$A^u_s = U^{-1}\frac{d}{ds}U + U^{-1}A_s U. \qquad (2.23)$$

For given gauge fields $(A,\bar{A})$, the equation of motion depends on the choice of path $x^\mu(s)$. From the perspective of the equation of motion, we learn that $U(s)$ acts like a gauge transformation on the connection $A$. There is a convenient choice for $U(s)$ such that the particle does not move in the auxiliary space, i.e. $U(s)=1$. In this case, the equation of motion reduces to

$$\frac{d}{ds}\left(e^a_\mu\frac{dx^\mu}{ds}\right) + \omega^a_{\mu b}e^b_\nu\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = 0. \qquad (2.24)$$

This is precisely the geodesic equation for the curve $x^\mu(s)$ on a spacetime with vielbein and spin connection, equivalent to the more familiar Christoffel symbol form. Furthermore, the on-shell action (2.14) for $U(s)=1$ becomes

$$S(U;A,\bar{A})_C = \sqrt{2C}\int_C ds\,\sqrt{g_{\mu\nu}(x)\frac{dx^\mu}{ds}\frac{dx^\nu}{ds}}, \qquad (2.25)$$

which is manifestly the proper distance along the geodesic.

We have learned that the Wilson line in AdS$_3$ gravity can be expressed as a path integral of an auxiliary quantum mechanical system with action (2.14), and that the on-shell action is the proper distance along the geodesic. Thus, in the classical limit, the value of the Wilson line is

$$W_{\mathcal{R}}(x_i,x_f) = \exp\left(-\sqrt{2C}\,L(x_i,x_f)\right), \qquad (2.26)$$

where $L(x_i,x_f)$ is the length of the bulk geodesic connecting the two endpoints on the boundary. Holographically, Ryu and Takayanagi proposed that field-theoretical entanglement entropies correspond to the lengths of bulk geodesics ending on the boundary [59, 60]. In terms of the Chern-Simons description of AdS$_3$ gravity, we can therefore calculate the entanglement entropy from the Wilson line,

$$S_{EE} = -\log\left(W_{\mathcal{R}}(C)\right). \qquad (2.27)$$

In [57] it was also shown that the Wilson line back-reaction on the geometry creates a non-trivial holonomy, which can be interpreted as a conical singularity in the bulk; the conical defects hence reproduce the field-theoretical entanglement entropy formula. In the rest of this paper, we use the Wilson line technique to compute the holographic entanglement entropy in Chern-Simons AdS$_3$ gravity, including the $T\bar{T}$-deformed AdS$_3$.
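Before moving on, we record the standard translation of (2.24) into Christoffel form; this is a textbook step of our own, not part of the original derivation. Using the vielbein postulate $\partial_\mu e^a_\nu + \omega^a_{\mu b}e^b_\nu = \Gamma^\lambda_{\mu\nu}e^a_\lambda$ and contracting (2.24) with the inverse vielbein $e_a^{\ \lambda}$ gives

$$\frac{d^2x^\lambda}{ds^2} + \Gamma^\lambda_{\mu\nu}\frac{dx^\mu}{ds}\frac{dx^\nu}{ds} = 0,$$

the geodesic equation for the metric $g_{\mu\nu} = e^a_\mu e^b_\nu\,\eta_{ab}$.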
2.3 Holographic entanglement entropy

In this section, we calculate $W_{\mathcal{R}}(C)$ with $C$ ending on the AdS$_3$ boundary at two points, denoted $x_i = x(s_i)$ and $x_f = x(s_f)$. Classically, we just need to calculate the on-shell action of the auxiliary system,

$$S_{\mathrm{on\text{-}shell}} = \int_C ds\,\mathrm{Tr}\left(P U^{-1}D_s U\right) = -2C\int_{s_i}^{s_f} ds\,\lambda(s), \qquad (2.28)$$

which depends on the solution of the equations of motion. The solutions can be constructed from the free theory solutions, i.e. (2.10)-(2.12), through the gauge transformations (2.19) and (2.20). First, the solutions of the free theory, denoted $U_0(s)$ and $P_0$, are

$$U_0(s) = u_0\exp\left(-2\alpha(s)P_0\right), \qquad \text{with}\quad \frac{d\alpha(s)}{ds} = \lambda(s), \qquad (2.29)$$

where $P_0$ and $u_0$ are constant.
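One can verify directly that (2.29) solves the free equations of motion; spelling out this small check (our own, not written out in the original):

$$U_0^{-1}\frac{dU_0}{ds} = -2\frac{d\alpha}{ds}P_0 = -2\lambda(s)P_0, \qquad \frac{dP_0}{ds} = 0,$$

so (2.10) and (2.11) hold with $P(s)=P_0$, while (2.12) fixes the normalization $\mathrm{Tr}\,P_0^2 = C$.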
Next, we assume the bulk gauge fields are pure gauge,

$$A = L(x)dL^{-1}(x), \qquad \bar{A} = R^{-1}(x)dR(x). \qquad (2.30)$$

In fact, most AdS$_3$ solutions are pure gauge, such as the BTZ black hole and the Bañados geometry. One can then verify that the following is a classical solution of (2.16)-(2.18):

$$U(s) = L(x(s))U_0(s)R(x(s)), \qquad P(s) = R^{-1}(x(s))P_0R(x(s)). \qquad (2.31)$$

These solutions follow directly from the local gauge symmetry of the equations of motion. As argued in [57], the boundary conditions for $U(s)$ at the boundary ending points can be chosen as

$$U(s_i) = L(x(s_i))\,u_0\exp\left(-2\alpha(s_i)P_0\right)R(x(s_i)) = 1, \qquad (2.32)$$
$$U(s_f) = L(x(s_f))\,u_0\exp\left(-2\alpha(s_f)P_0\right)R(x(s_f)) = 1. \qquad (2.33)$$

We then have to eliminate the initial values $P_0$ and $u_0$. Solving for $u_0$ from (2.32) and substituting into (2.33), one finds

$$\exp\left(-2\Delta\alpha P_0\right) = R(x(s_i))L(x(s_i))L^{-1}(x(s_f))R^{-1}(x(s_f)). \qquad (2.34)$$

Taking the trace on both sides, we arrive at

$$\cosh\left(-2\Delta\alpha\sqrt{2C}\right) = \frac{1}{2}\mathrm{Tr}\left(R(x(s_i))L(x(s_i))L^{-1}(x(s_f))R^{-1}(x(s_f))\right), \qquad (2.35)$$

where we have used

$$\mathrm{Tr}\left(\exp\left(-2\Delta\alpha P_0\right)\right) = 2\cosh\left(-2\Delta\alpha\sqrt{2C}\right). \qquad (2.36)$$

Finally, according to (2.27), we obtain the holographic entanglement entropy formula

$$S_{EE} = \sqrt{2C}\,\cosh^{-1}\left[\frac{1}{2}\mathrm{Tr}\left(R(x(s_i))L(x(s_i))L^{-1}(x(s_f))R^{-1}(x(s_f))\right)\right]. \qquad (2.37)$$

We now use this formalism to check the holographic entanglement entropy in Poincaré AdS$_3$ and in the BTZ black hole.
2.3.1 Poincaré AdS$_3$

For the case of Poincaré AdS$_3$, the line element reads

$$ds^2 = \frac{dr^2}{r^2} + r^2\left(d\theta^2 - dt^2\right). \qquad (2.38)$$

In terms of the Chern-Simons gauge connections, this geometry is described by

$$A = \frac{dr}{r}L_0 + rL_1\left(d\theta + dt\right), \qquad (2.39)$$
$$\bar{A} = -\frac{dr}{r}L_0 - rL_{-1}\left(d\theta - dt\right). \qquad (2.40)$$

The gauge connections can be written in pure gauge form,

$$A = LdL^{-1}, \qquad L = \exp\left(-\ln r\,L_0\right)\exp\left(-(\theta+t)L_1\right), \qquad (2.41)$$
$$\bar{A} = R^{-1}dR, \qquad R = \exp\left((\theta-t)L_{-1}\right)\exp\left(-\ln r\,L_0\right). \qquad (2.42)$$

In order to calculate the entanglement entropy, we consider a time slice ($t=0$) of this geometry and impose the following boundary conditions for the ending points of the Wilson line:

$$r(s_i) = r(s_f) = r_0, \qquad (2.43)$$
$$\Delta\theta = \theta(s_f) - \theta(s_i) = l, \qquad (2.44)$$

which means that we work on a constant radial boundary and the length of the interval is $l$. Plugging (2.41) and (2.42) into (2.37), one obtains

$$S_{EE} = \sqrt{2C}\,\cosh^{-1}\left(1 + \frac{r_0^2l^2}{2}\right). \qquad (2.45)$$

Taking the limit $r_0 \gg 1$, so that the result corresponds to the theory living on the conformal boundary, we arrive at

$$S_{EE} = \frac{c}{3}\log\left(\frac{l}{\epsilon}\right), \qquad (2.46)$$

where the UV cutoff $\epsilon$ of the boundary field theory corresponds to the radial cutoff in the bulk, and the central charge relates to the expectation value of the Casimir:

$$\epsilon = \frac{1}{r_0}, \qquad \sqrt{2C} = \frac{c}{6}. \qquad (2.47)$$

(Here we have used the relation $\cosh^{-1}(x) \sim \log(2x)$ for $x \gg 1$.)

The relation between the expectation value of the Casimir and the central charge can be derived by calculating the Wilson line defect; for details see Appendix B. This result is exactly the entanglement entropy of a CFT$_2$. The same answer can also be obtained by solving the bulk geodesic equation; however, in the Wilson line form we do not need to solve any differential equations, and the result follows from purely algebraic operations. This technique can be used for more complicated AdS$_3$ geometries.
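As a small explicit check of the step from (2.45) to (2.46) (our own expansion, using the asymptotic relation quoted above):

$$\sqrt{2C}\,\cosh^{-1}\left(1+\frac{r_0^2l^2}{2}\right) \;\xrightarrow{\;r_0\gg 1\;}\; \frac{c}{6}\log\left(r_0^2l^2\right) = \frac{c}{3}\log\left(\frac{l}{\epsilon}\right), \qquad \epsilon = \frac{1}{r_0}.$$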
2.3.2 BTZ black hole

For the BTZ black hole, the metric in Fefferman-Graham gauge is

$$ds^2 = \frac{dr^2}{r^2} + r^2\left(dzd\bar{z} + \frac{1}{r^2}\mathcal{L}_0\,dz^2 + \frac{1}{r^2}\bar{\mathcal{L}}_0\,d\bar{z}^2 + \frac{1}{r^4}\mathcal{L}_0\bar{\mathcal{L}}_0\,dzd\bar{z}\right), \qquad (2.48)$$

where $\mathcal{L}_0$ and $\bar{\mathcal{L}}_0$ are constants related to the mass and angular momentum of the black hole,

$$\mathcal{L}_0 = \frac{M-J}{2}, \qquad \bar{\mathcal{L}}_0 = \frac{M+J}{2}. \qquad (2.49)$$

The corresponding Chern-Simons gauge connections read

$$A = \frac{dr}{r}L_0 + \left(rL_1 - \frac{1}{r}\mathcal{L}_0L_{-1}\right)dz, \qquad (2.50)$$
$$\bar{A} = -\frac{dr}{r}L_0 + \left(\frac{1}{r}\bar{\mathcal{L}}_0L_1 - rL_{-1}\right)d\bar{z}. \qquad (2.51)$$

In this case, one can obtain

$$L(r,z,\bar{z}) = \exp\left(-\ln r\,L_0\right)\exp\left(-zL_1 + \mathcal{L}_0 zL_{-1}\right), \qquad (2.52)$$
$$R(r,z,\bar{z}) = \exp\left(\bar{\mathcal{L}}_0\bar{z}L_1 - \bar{z}L_{-1}\right)\exp\left(-\ln r\,L_0\right). \qquad (2.53)$$

In addition, such solutions can be parametrized as

$$A = b^{-1}(d+a)b, \qquad \bar{A} = b(d+\bar{a})b^{-1}, \qquad b = e^{\ln r\,L_0}, \qquad (2.54)$$

where $a$, $\bar{a}$ are also flat connections, but do not depend on the radial coordinate:

$$a = \left(L_1 - \mathcal{L}_0L_{-1}\right)dz, \qquad (2.55)$$
$$\bar{a} = \left(\bar{\mathcal{L}}_0L_1 - L_{-1}\right)d\bar{z}. \qquad (2.56)$$

Following the same steps as in pure AdS$_3$, with the same boundary conditions for the ending points of the Wilson line, we get

$$\mathrm{Tr}\left(R(r_0,\theta(s_i),0)L(r_0,\theta(s_i),0)L^{-1}(r_0,\theta(s_f),0)R^{-1}(r_0,\theta(s_f),0)\right)$$
$$= -2\cosh\left(l\sqrt{\mathcal{L}_0}\right)\cosh\left(l\sqrt{\bar{\mathcal{L}}_0}\right) + \frac{\left(\mathcal{L}_0\bar{\mathcal{L}}_0 + r_0^4\right)\sinh\left(l\sqrt{\mathcal{L}_0}\right)\sinh\left(l\sqrt{\bar{\mathcal{L}}_0}\right)}{r_0^2\sqrt{\mathcal{L}_0}\sqrt{\bar{\mathcal{L}}_0}}$$
$$\sim \frac{r_0^2\sinh\left(l\sqrt{\mathcal{L}_0}\right)\sinh\left(l\sqrt{\bar{\mathcal{L}}_0}\right)}{\sqrt{\mathcal{L}_0}\sqrt{\bar{\mathcal{L}}_0}}, \qquad (r_0 \gg 1). \qquad (2.57)$$

This result leads to the entanglement entropy

$$S_{EE} = \frac{c}{6}\log\left[\frac{r_0^2\sinh\left(l\sqrt{\mathcal{L}_0}\right)\sinh\left(l\sqrt{\bar{\mathcal{L}}_0}\right)}{\sqrt{\mathcal{L}_0}\sqrt{\bar{\mathcal{L}}_0}}\right]. \qquad (2.58)$$

If we consider the spinless black hole, i.e. $\mathcal{L}_0 = \bar{\mathcal{L}}_0$, the entanglement entropy reduces to

$$S_{EE} = \frac{c}{3}\log\left[\frac{\beta_0}{\pi\epsilon}\sinh\left(\frac{\pi l}{\beta_0}\right)\right], \qquad \beta_0 = \frac{\pi}{\sqrt{\mathcal{L}_0}}, \qquad (2.59)$$

where $\beta_0$ is the inverse temperature of the BTZ black hole [62–64]. This result coincides with the entanglement entropy of a CFT in a thermal state.
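The step from (2.58) to (2.59) is a direct substitution; writing it out (our own check), with $\mathcal{L}_0 = \bar{\mathcal{L}}_0 = \pi^2/\beta_0^2$ and $\epsilon = 1/r_0$:

$$S_{EE} = \frac{c}{6}\log\left[\frac{r_0^2\beta_0^2}{\pi^2}\sinh^2\left(\frac{\pi l}{\beta_0}\right)\right] = \frac{c}{3}\log\left[\frac{\beta_0}{\pi\epsilon}\sinh\left(\frac{\pi l}{\beta_0}\right)\right].$$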
2.4 Loops and thermal entropy

One can also consider Wilson loops in AdS$_3$. In this case, $W_{\mathcal{R}}(C)$ turns out to give the proper distance around the horizon, which corresponds to the black hole thermal entropy. We check this for the BTZ black hole. Consider the Wilson loop along the $S^1$ cycle $\theta \sim \theta + 2\pi$. In contrast to the open interval case, the closed path should be smooth, so we impose the periodic boundary conditions

$$U(s_f) = U(s_i), \qquad P(s_f) = P(s_i). \qquad (2.60)$$

According to (2.31), the boundary condition for $P(s)$ implies

$$\left[P_0,\, R(s_i)R^{-1}(s_f)\right] = 0, \qquad (2.61)$$

while the boundary condition for $U(s)$ implies

$$\exp\left(-2\Delta\alpha P_0\right) = u_0^{-1}\left(L^{-1}(s_f)L(s_i)\right)u_0\left(R(s_i)R^{-1}(s_f)\right). \qquad (2.62)$$

In addition, note the relations

$$L^{-1}(s_f)L(s_i) = \exp\left(\oint d\theta\,a_\theta\right), \qquad (2.63)$$
$$R(s_i)R^{-1}(s_f) = \exp\left(-\oint d\theta\,\bar{a}_\theta\right), \qquad (2.64)$$

which are the holonomies of the connections, so we can rewrite (2.62) as

$$\exp\left(-2\Delta\alpha P_0\right) = u_0^{-1}\exp\left(2\pi a_\theta\right)u_0\exp\left(-2\pi\bar{a}_\theta\right). \qquad (2.65)$$

Here we consider the case of the BTZ black hole, so the integral over $\theta$ is elementary.

From (2.61), we learn that $P_0$ and $\bar{a}_\theta$ can be diagonalized simultaneously. If the initial value of $u_0$ is fixed, we can always choose a matrix $V$ such that $a_\theta$ is also diagonalized by $u_0V$:

$$\exp\left(-2\Delta\alpha\lambda_P\right) = (u_0V)^{-1}\exp\left(2\pi a_\theta\right)u_0V\exp\left(-2\pi\bar{\lambda}_\theta\right) = \exp\left(2\pi\lambda_\theta\right)\exp\left(-2\pi\bar{\lambda}_\theta\right), \qquad (2.66)$$

where $\lambda_P$, $\lambda_\theta$ and $\bar{\lambda}_\theta$ are the diagonalized matrices of $P_0$, $a_\theta$ and $\bar{a}_\theta$. Contracting (2.66) with $L_0$, we obtain the on-shell action for the loop,

$$S_{th} = 2\pi\sqrt{2C}\,\mathrm{Tr}\left[(\lambda_\theta - \bar{\lambda}_\theta)L_0\right]. \qquad (2.67)$$

For the BTZ black hole, the diagonalized gauge connections are

$$\lambda_\theta = 2\sqrt{\mathcal{L}_0}\,L_0, \qquad \bar{\lambda}_\theta = -2\sqrt{\bar{\mathcal{L}}_0}\,L_0. \qquad (2.68)$$

Finally, the Wilson loop gives precisely the entropy of the BTZ black hole,

$$S_{th} = 2\pi\sqrt{\frac{c}{6}\mathcal{L}_0} + 2\pi\sqrt{\frac{c}{6}\bar{\mathcal{L}}_0}. \qquad (2.69)$$
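Note that (2.69) has the form of the Cardy entropy written in terms of the left- and right-moving charges. For the spinless case one can make the temperature dependence explicit; a small substitution of our own, using (2.49) and $\beta_0 = \pi/\sqrt{\mathcal{L}_0}$ from (2.59):

$$S_{th}\Big|_{J=0} = 4\pi\sqrt{\frac{c}{6}\mathcal{L}_0} = \frac{4\pi^2}{\beta_0}\sqrt{\frac{c}{6}}, \qquad \mathcal{L}_0 = \bar{\mathcal{L}}_0 = \frac{M}{2}.$$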
3 Holographic entanglement entropy in $T\bar{T}$-deformed AdS$_3$

We now turn to the entanglement entropy of $T\bar{T}$-deformed CFTs from the gravity side. In [41] it was proposed that the holographic dual of a $T\bar{T}$-deformed CFT is still AdS$_3$ gravity, but with a mixed boundary condition, and the AdS$_3$ solutions associated with the mixed boundary condition can be obtained from the Bañados geometry through a coordinate transformation. As the deformed geometry is still AdS$_3$, we prefer to work in the Chern-Simons formulation. In this section, we introduce the $T\bar{T}$-deformed AdS$_3$ geometry; the holographic entanglement entropy of $T\bar{T}$-deformed CFTs is then obtained using the Wilson line technique in the deformed AdS$_3$.
3.1 $T\bar{T}$-deformed AdS$_3$ geometry

We start from the general AdS$_3$ solution with a flat conformal boundary, the Bañados geometry [65]. In Fefferman-Graham gauge, the line element reads

$$ds^2 = \frac{dr^2}{r^2} + r^2\left(dzd\bar{z} + \frac{1}{r^2}\mathcal{L}(z)dz^2 + \frac{1}{r^2}\bar{\mathcal{L}}(\bar{z})d\bar{z}^2 + \frac{1}{r^4}\mathcal{L}(z)\bar{\mathcal{L}}(\bar{z})dzd\bar{z}\right). \qquad (3.1)$$

The parameters $\mathcal{L}(z)$ and $\bar{\mathcal{L}}(\bar{z})$ are arbitrary holomorphic and antiholomorphic functions, related to the energy and angular momentum by

$$\mathcal{L} = \frac{E+J}{2}, \qquad \bar{\mathcal{L}} = \frac{E-J}{2}. \qquad (3.2)$$

The corresponding Chern-Simons gauge fields are

$$A = \frac{dr}{r}L_0 + \left(rL_1 - \frac{1}{r}\mathcal{L}(z)L_{-1}\right)dz, \qquad (3.3)$$
$$\bar{A} = -\frac{dr}{r}L_0 - \left(\frac{1}{r}\bar{\mathcal{L}}(\bar{z})L_1 - rL_{-1}\right)d\bar{z}. \qquad (3.4)$$

The deformed Bañados geometry can be constructed through a field-dependent coordinate transformation [41], which reads

$$dz = \frac{1}{1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu}\left(dw - \mu\bar{\mathcal{L}}_\mu d\bar{w}\right), \qquad d\bar{z} = \frac{1}{1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu}\left(d\bar{w} - \mu\mathcal{L}_\mu dw\right), \qquad (3.5)$$

where $\mu$ is the deformation parameter. One should note that the parameters $\mathcal{L}$ and $\bar{\mathcal{L}}$ in (3.1) turn into $\mathcal{L}_\mu$ and $\bar{\mathcal{L}}_\mu$ under the coordinate transformation, and generally the deformed parameters $\mathcal{L}_\mu$, $\bar{\mathcal{L}}_\mu$ differ from the undeformed ones $\mathcal{L}$, $\bar{\mathcal{L}}$. The relations between them can be fixed in two ways. The first is that the deformation smoothly changes the spectrum but does not change the local degeneracy of states; in the bulk, this implies that the $T\bar{T}$ deformation does not change the horizon area of a black hole. The second is that the deformed geometry can be transformed into the undeformed one without changing the periodicity of the spatial coordinate; this transformation is different from the inverse of (3.5). These considerations lead to

$$\frac{\mathcal{L}_\mu\left(1-\mu\bar{\mathcal{L}}_\mu\right)^2}{\left(1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu\right)^2} = \mathcal{L}, \qquad \frac{\bar{\mathcal{L}}_\mu\left(1-\mu\mathcal{L}_\mu\right)^2}{\left(1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu\right)^2} = \bar{\mathcal{L}}. \qquad (3.6)$$

One can turn to [41] for more details about fixing these relations.
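To leading order in $\mu$, the relations (3.6) can be inverted explicitly; a short expansion of our own, useful for checking the undeformed limit:

$$\mathcal{L}_\mu = \mathcal{L} + 2\mu\mathcal{L}\bar{\mathcal{L}} + O(\mu^2), \qquad \bar{\mathcal{L}}_\mu = \bar{\mathcal{L}} + 2\mu\mathcal{L}\bar{\mathcal{L}} + O(\mu^2),$$

so both deformed parameters reduce to the Bañados parameters as $\mu \to 0$, and (3.5) reduces to the identity transformation.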
By using the coordinate transformation (3.5), we obtain the deformed Chern-Simons gauge fields

$$A = \frac{1}{r}L_0dr + \frac{1}{1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu}\left(rL_1 - \frac{1}{r}\mathcal{L}_\mu L_{-1}\right)\left(dw - \mu\bar{\mathcal{L}}_\mu d\bar{w}\right), \qquad (3.7)$$
$$\bar{A} = -\frac{1}{r}L_0dr - \frac{1}{1-\mu^2\mathcal{L}_\mu\bar{\mathcal{L}}_\mu}\left(\frac{1}{r}\bar{\mathcal{L}}_\mu L_1 - rL_{-1}\right)\left(d\bar{w} - \mu\mathcal{L}_\mu dw\right). \qquad (3.8)$$

Note that $\mathcal{L}(z)$ and $\bar{\mathcal{L}}(\bar{z})$ correspond to the charges of the solution in the Bañados geometry. In the deformed geometry, however, the parameters $\mathcal{L}(z)$ and $\bar{\mathcal{L}}(\bar{z})$ no longer correspond to the charges. Indeed, the deformed energy and angular momentum can be obtained from both the field theory and the gravity side:

$$E_\mu = \frac{1}{\mu}\left(1 - \sqrt{1 - 2\mu(\mathcal{L}+\bar{\mathcal{L}}) + \mu^2(\mathcal{L}-\bar{\mathcal{L}})^2}\right), \qquad J_\mu = J. \qquad (3.9)$$

Analogous to (3.2), we introduce the new parameters

$$Q = \frac{E_\mu + J_\mu}{2} = \frac{1}{2\mu}\left(1 + \mu(\mathcal{L}-\bar{\mathcal{L}}) - \sqrt{1 - 2\mu(\mathcal{L}+\bar{\mathcal{L}}) + \mu^2(\mathcal{L}-\bar{\mathcal{L}})^2}\right), \qquad (3.10)$$
$$\bar{Q} = \frac{E_\mu - J_\mu}{2} = \frac{1}{2\mu}\left(1 - \mu(\mathcal{L}-\bar{\mathcal{L}}) - \sqrt{1 - 2\mu(\mathcal{L}+\bar{\mathcal{L}}) + \mu^2(\mathcal{L}-\bar{\mathcal{L}})^2}\right). \qquad (3.11)$$

We can regard $Q$ and $\bar{Q}$ as the generalized parameters of $\mathcal{L}$ and $\bar{\mathcal{L}}$ in the deformed geometry, and $Q$ and $\bar{Q}$ reduce to $\mathcal{L}$ and $\bar{\mathcal{L}}$ in the limit $\mu \to 0$. We find it more convenient to parametrize the deformed gauge fields, or the metric, in terms of these two independent charges.
In terms of these charges, the Chern-Simons gauge connection are formulated as
|
| 1001 |
+
A =dr
|
| 1002 |
+
r L0 +
|
| 1003 |
+
1 − µQ
|
| 1004 |
+
1 − µ(Q + ¯Q)
|
| 1005 |
+
�
|
| 1006 |
+
r(1 − µ ¯Q)L1 − 1
|
| 1007 |
+
rQL−1
|
| 1008 |
+
�
|
| 1009 |
+
dw
|
| 1010 |
+
−
|
| 1011 |
+
µ ¯Q
|
| 1012 |
+
1 − µ(Q + ¯Q)
|
| 1013 |
+
�
|
| 1014 |
+
r(1 − µ ¯Q)L1 − 1
|
| 1015 |
+
rQL−1
|
| 1016 |
+
�
|
| 1017 |
+
d ¯w,
|
| 1018 |
+
(3.12)
|
| 1019 |
+
¯A = − dr
|
| 1020 |
+
r L0 +
|
| 1021 |
+
µQ
|
| 1022 |
+
1 − µ(Q + ¯Q)
|
| 1023 |
+
�1
|
| 1024 |
+
r
|
| 1025 |
+
¯QL1 − r(1 − µQ)L−1
|
| 1026 |
+
�
|
| 1027 |
+
dw
|
| 1028 |
+
−
|
| 1029 |
+
1 − µ ¯Q
|
| 1030 |
+
1 − µ(Q + ¯Q)
|
| 1031 |
+
�1
|
| 1032 |
+
r
|
| 1033 |
+
¯QL1 − r(1 − µQ)L−1
|
| 1034 |
+
�
|
| 1035 |
+
d ¯w,
|
| 1036 |
+
(3.13)
|
| 1037 |
+
13
|
| 1038 |
+
|
| 1039 |
+
In the following, we prefer to use the coordinates θ = (w + ¯w)/2, t = (w − ¯w)/2, where
|
| 1040 |
+
t represents the time direction while θ denotes the spatial coordinate at the boundary with
|
| 1041 |
+
the identification θ ∼ θ + 2π. We then have
|
| 1042 |
+
Ar = 1
|
| 1043 |
+
rL0,
|
| 1044 |
+
Aθ =r(1 − µ ¯Q)L1 − 1
|
| 1045 |
+
rQL−1,
|
| 1046 |
+
At = K
|
| 1047 |
+
�
|
| 1048 |
+
r(1 − µ ¯Q)L1 − 1
|
| 1049 |
+
rQL−1
|
| 1050 |
+
�
|
| 1051 |
+
,
|
| 1052 |
+
(3.14)
|
| 1053 |
+
¯Ar = −1
|
| 1054 |
+
rL0,
|
| 1055 |
+
¯Aθ =1
|
| 1056 |
+
r
|
| 1057 |
+
¯QL1 − r(1 − µQ)L−1,
|
| 1058 |
+
¯At = ¯K
|
| 1059 |
+
�1
|
| 1060 |
+
r
|
| 1061 |
+
¯QL1 − r(1 − µQ)L−1
|
| 1062 |
+
�
|
| 1063 |
+
,
|
| 1064 |
+
(3.15)
|
| 1065 |
+
where
|
| 1066 |
+
K =1 + µ( ¯Q − Q)
|
| 1067 |
+
1 − µ(Q + ¯Q),
|
| 1068 |
+
¯K = −1 − µ( ¯Q − Q)
|
| 1069 |
+
1 − µ(Q + ¯Q).
|
| 1070 |
+
(3.16)
|
| 1071 |
+
The radial gauge (2.54) still holds for the deformed gauge fields, so that the induced gauge
|
| 1072 |
+
connections are
|
| 1073 |
+
aθ =(1 − µ ¯Q)L1 − QL−1,
|
| 1074 |
+
at = K
|
| 1075 |
+
�
|
| 1076 |
+
(1 − µ ¯Q)L1 − QL−1
|
| 1077 |
+
�
|
| 1078 |
+
,
|
| 1079 |
+
(3.17)
|
| 1080 |
+
¯aθ = ¯QL1 − (1 − µQ)L−1,
|
| 1081 |
+
¯at = ¯K
|
| 1082 |
+
�
|
| 1083 |
+
¯QL1 − (1 − µQ)L−1
|
| 1084 |
+
�
|
| 1085 |
+
.
|
| 1086 |
+
(3.18)
|
| 1087 |
+
In addition, we can also write down the deformed
|
| 1088 |
+
ds2 =dr2
|
| 1089 |
+
r2 +
|
| 1090 |
+
1
|
| 1091 |
+
r2(1 − µ(Q + ¯Q))2×
|
| 1092 |
+
�
|
| 1093 |
+
Q(1 − µQ)(1 − µr2)dw +
|
| 1094 |
+
�
|
| 1095 |
+
µQ ¯Q + r2(1 − µQ)(1 − µ ¯Q)
|
| 1096 |
+
�
|
| 1097 |
+
d ¯w
|
| 1098 |
+
�
|
| 1099 |
+
×
|
| 1100 |
+
�
|
| 1101 |
+
¯Q(1 − µ ¯Q)(1 − µr2)d ¯w +
|
| 1102 |
+
�
|
| 1103 |
+
µQ ¯Q + r2(1 − µQ)(1 − µ ¯Q)
|
| 1104 |
+
�
|
| 1105 |
+
dw
|
| 1106 |
+
�
|
| 1107 |
+
.
|
| 1108 |
+
(3.19)
|
| 1109 |
+
We will use the deformed geometry to calculate the holographic entanglement entropy in
|
| 1110 |
+
the T ¯T deformed CFTs. For simplicity, we just consider the constant charges Q and ¯Q,
|
| 1111 |
+
namely we work in T ¯T deformed BTZ black hole.
|
| 1112 |
+
3.2
|
| 1113 |
+
T ¯T -deformed holographic entanglement entropy
|
| 1114 |
+
For the T ¯T-deformed AdS3, the metric still satisfies the Einstein equation or flat connection
|
| 1115 |
+
condition in the Chern-Simons theory although it takes a complicated form. In the Poincar´e
|
| 1116 |
+
AdS3, the Wilson line would produce a back-reaction in the bulk geometry.
|
| 1117 |
+
The back-
|
| 1118 |
+
reaction would then lead to a conical defect on the ending points of Wilson line, which
|
| 1119 |
+
generates the n-sheet manifold on the boundary.
|
| 1120 |
+
According to the replica trick on the
|
| 1121 |
+
boundary field theory, the Wilson line exactly leads to the entanglement entropy. One can
|
| 1122 |
+
turn to Appendix B for details. We can always transform the T ¯T-deformed AdS3 solution
|
| 1123 |
+
into the Poincar´e form [66, 67]. However, the temperature (the period of Euclidean time) in
|
| 1124 |
+
deformed AdS3 is different from the original one. The crucial point is that we have to identify
|
| 1125 |
+
the deformed temperature and length of interval on the boundary under T ¯T deformation.
|
| 1126 |
+
14
|
| 1127 |
+
|
| 1128 |
+
We will treat these considerations in more details and obtain the T ¯T deformed holographic
|
| 1129 |
+
entanglement entropy in this section.
|
| 1130 |
+
Now, we can use the Wilson line technique to calculate the holographic entanglement
|
| 1131 |
+
entropy in T ¯T-deformed AdS3. First of all, we can give a glance at the Poincar´e AdS3,
|
| 1132 |
+
which turns out correspond to the zero temperature entanglement entropy. In Fefferman-
|
| 1133 |
+
Graham gauge, the Poincar´e AdS3 can be written as Ba˜nados geometry (3.1) with L and
|
| 1134 |
+
¯L vanish.
|
| 1135 |
+
In this case, the bulk geometry is the same as the undeformed one, so the
|
| 1136 |
+
zero temperature entanglement entropy remains unchanged. This result coincides with the
|
| 1137 |
+
perturbative calculation in field theory and cutoff perspective in the bulk [22, 24].
|
| 1138 |
+
We then consider the deformed BTZ black hole, in which the charges Q and ¯Q are
|
| 1139 |
+
constants. For the deformed geometry, on a time slice, we obtain
|
| 1140 |
+
L (r, θ, t = 0) = exp (− ln rL0) exp
|
| 1141 |
+
�
|
| 1142 |
+
−
|
| 1143 |
+
� x
|
| 1144 |
+
x0
|
| 1145 |
+
dxiai
|
| 1146 |
+
�
|
| 1147 |
+
= exp (− ln rL0) exp
|
| 1148 |
+
�
|
| 1149 |
+
−(1 − µ ¯Q)θL1 + QθL−1
|
| 1150 |
+
�
|
| 1151 |
+
,
|
| 1152 |
+
(3.20)
|
| 1153 |
+
R (r, θ, t = 0) = exp
|
| 1154 |
+
�� x
|
| 1155 |
+
x0
|
| 1156 |
+
dxi¯ai
|
| 1157 |
+
�
|
| 1158 |
+
exp (− ln rL0)
|
| 1159 |
+
= exp
|
| 1160 |
+
� ¯QθL1 − (1 − µQ)θL−1
|
| 1161 |
+
�
|
| 1162 |
+
exp (− ln rL0) .
|
| 1163 |
+
(3.21)
|
| 1164 |
+
As the deformed geometries are still AdS3 solution, we use the boundary condition for U(s)
|
| 1165 |
+
U(si) = 1,
|
| 1166 |
+
U(sf) = 1,
|
| 1167 |
+
(3.22)
|
| 1168 |
+
as well as the same boundary conditions for the ending points of the Wilson line
|
| 1169 |
+
r(si) = r(sf) = r0,
|
| 1170 |
+
(3.23)
|
| 1171 |
+
∆θ = θ(sf) − θ(si) = l.
|
| 1172 |
+
(3.24)
|
| 1173 |
+
We should point out that the boundary condition for U is actually the unique choice because
|
| 1174 |
+
of the Lorentz invariance at the boundary [57, 68]. As the T ¯T deformation does not break
|
| 1175 |
+
Lorentz invariance, we can use the same boundary condition (3.22) for U. It seems that l
|
| 1176 |
+
is just the length of the interval in the deformed boundary. But it equals to the deformed
|
| 1177 |
+
length of interval, because the length is defined in the (w, ¯w) coordinates.
|
| 1178 |
+
Using the gauge transformation (2.31), one can get the solution U(s) for the Wilson line
|
| 1179 |
+
coupled to the deformed gauge fields. The boundary condition for U(s) and ending points
|
| 1180 |
+
15
|
| 1181 |
+
|
| 1182 |
+
boundary condition for the Wilson line imply
|
| 1183 |
+
Tr
|
| 1184 |
+
�
|
| 1185 |
+
(R(si)L(si)) (R (sf) L (sf))−1 �
|
| 1186 |
+
=2 cosh
|
| 1187 |
+
�
|
| 1188 |
+
l
|
| 1189 |
+
�
|
| 1190 |
+
¯Q (1 − µQ)
|
| 1191 |
+
�
|
| 1192 |
+
cosh
|
| 1193 |
+
�
|
| 1194 |
+
l
|
| 1195 |
+
�
|
| 1196 |
+
Q(1 − µ ¯Q)
|
| 1197 |
+
�
|
| 1198 |
+
+
|
| 1199 |
+
r2
|
| 1200 |
+
0
|
| 1201 |
+
� ¯Q(1 − µQ)
|
| 1202 |
+
�
|
| 1203 |
+
Q(1 − µ ¯Q) sinh
|
| 1204 |
+
�
|
| 1205 |
+
l
|
| 1206 |
+
� ¯Q(1 − µQ)
|
| 1207 |
+
�
|
| 1208 |
+
sinh
|
| 1209 |
+
�
|
| 1210 |
+
l
|
| 1211 |
+
�
|
| 1212 |
+
Q(1 − µ ¯Q)
|
| 1213 |
+
�
|
| 1214 |
+
Q ¯Q
|
| 1215 |
+
+
|
| 1216 |
+
Q ¯Q sinh
|
| 1217 |
+
�
|
| 1218 |
+
l
|
| 1219 |
+
� ¯Q(1 − µQ)
|
| 1220 |
+
�
|
| 1221 |
+
sinh
|
| 1222 |
+
�
|
| 1223 |
+
l
|
| 1224 |
+
�
|
| 1225 |
+
Q(1 − µ ¯Q)
|
| 1226 |
+
�
|
| 1227 |
+
r2
|
| 1228 |
+
0
|
| 1229 |
+
� ¯Q(1 − µQ)
|
| 1230 |
+
�
|
| 1231 |
+
Q(1 − µ ¯Q)
|
| 1232 |
+
∼
|
| 1233 |
+
r2
|
| 1234 |
+
0
|
| 1235 |
+
� ¯Q(1 − µQ)
|
| 1236 |
+
�
|
| 1237 |
+
Q(1 − µ ¯Q) sinh
|
| 1238 |
+
�
|
| 1239 |
+
l
|
| 1240 |
+
� ¯Q(1 − µQ)
|
| 1241 |
+
�
|
| 1242 |
+
sinh
|
| 1243 |
+
�
|
| 1244 |
+
l
|
| 1245 |
+
�
|
| 1246 |
+
Q(1 − µ ¯Q)
|
| 1247 |
+
�
|
| 1248 |
+
Q ¯Q
|
| 1249 |
+
.
|
| 1250 |
+
(3.25)
|
| 1251 |
+
In the last step, we consider the r0 ≫ 1 limit. It is straightforward to get the holographic
|
| 1252 |
+
entanglement entropy for T ¯T deformation
|
| 1253 |
+
SEE =
|
| 1254 |
+
√
|
| 1255 |
+
2C cosh−1
|
| 1256 |
+
|
| 1257 |
+
|
| 1258 |
+
r2
|
| 1259 |
+
0
|
| 1260 |
+
� ¯Q(1 − µQ)
|
| 1261 |
+
�
|
| 1262 |
+
Q(1 − µ ¯Q) sinh
|
| 1263 |
+
�
|
| 1264 |
+
l
|
| 1265 |
+
� ¯Q(1 − µQ)
|
| 1266 |
+
�
|
| 1267 |
+
sinh
|
| 1268 |
+
�
|
| 1269 |
+
l
|
| 1270 |
+
�
|
| 1271 |
+
Q(1 − µ ¯Q)
|
| 1272 |
+
�
|
| 1273 |
+
2Q ¯Q
|
| 1274 |
+
|
| 1275 |
+
|
| 1276 |
+
∼c
|
| 1277 |
+
6 log
|
| 1278 |
+
|
| 1279 |
+
|
| 1280 |
+
r2
|
| 1281 |
+
0
|
| 1282 |
+
� ¯Q(1 − µQ)
|
| 1283 |
+
�
|
| 1284 |
+
Q(1 − µ ¯Q) sinh
|
| 1285 |
+
�
|
| 1286 |
+
l
|
| 1287 |
+
� ¯Q(1 − µQ)
|
| 1288 |
+
�
|
| 1289 |
+
sinh
|
| 1290 |
+
�
|
| 1291 |
+
l
|
| 1292 |
+
�
|
| 1293 |
+
Q(1 − µ ¯Q)
|
| 1294 |
+
�
|
| 1295 |
+
Q ¯Q
|
| 1296 |
+
|
| 1297 |
+
.
|
| 1298 |
+
(3.26)
|
| 1299 |
+
If the original geometry is non-rotating BTZ black hole, namely Q = ¯Q, the deformed
|
| 1300 |
+
entanglement entropy becomes
|
| 1301 |
+
SEE =c
|
| 1302 |
+
3 log
|
| 1303 |
+
|
| 1304 |
+
|
| 1305 |
+
r0
|
| 1306 |
+
�
|
| 1307 |
+
Q(1 − µQ) sinh
|
| 1308 |
+
�
|
| 1309 |
+
l
|
| 1310 |
+
�
|
| 1311 |
+
Q(1 − µQ)
|
| 1312 |
+
�
|
| 1313 |
+
Q
|
| 1314 |
+
|
| 1315 |
+
.
|
| 1316 |
+
(3.27)
|
| 1317 |
+
For the deformed BTZ black hole, the temperature can be obtained by analysing the period
|
| 1318 |
+
of Euclidean time, which is discussed in the next section (4.10). We quote the result here
|
| 1319 |
+
β = 1
|
| 1320 |
+
T = π(1 − 2µQ)
|
| 1321 |
+
�
|
| 1322 |
+
Q(1 − µQ)
|
| 1323 |
+
.
|
| 1324 |
+
(3.28)
|
| 1325 |
+
This temperature can also be derived using the first law of thermodynamics, and we will
|
| 1326 |
+
show it in section 3.3. For the limit µ → 0, the temperature reduce to the BTZ black hole
|
| 1327 |
+
temperature. The length of interval l is already the deformed one, which can be seen from
|
| 1328 |
+
the coordinate transformation (3.5) on a time slice. In terms of the deformed temperature,
|
| 1329 |
+
we can express the entanglement entropy as
|
| 1330 |
+
SEE = c
|
| 1331 |
+
3 log
|
| 1332 |
+
��
|
| 1333 |
+
β2 + 4µπ2 + β
|
| 1334 |
+
2πǫ
|
| 1335 |
+
sinh
|
| 1336 |
+
�
|
| 1337 |
+
πl
|
| 1338 |
+
�
|
| 1339 |
+
β2 + 4µπ2
|
| 1340 |
+
��
|
| 1341 |
+
.
|
| 1342 |
+
(3.29)
|
| 1343 |
+
16
|
| 1344 |
+
|
| 1345 |
+
This is actually the T ¯T deformed entanglement entropy obtained from the holographic ap-
|
| 1346 |
+
proach. For µ = 0, the deformed entanglement entropy reduce to the familiar entanglement
|
| 1347 |
+
entropy of CFT at finite temperature. For the small µ, we can obtain the perturbative
|
| 1348 |
+
result
|
| 1349 |
+
SEE = c
|
| 1350 |
+
3 log
|
| 1351 |
+
� β
|
| 1352 |
+
πǫ sinh
|
| 1353 |
+
�πl
|
| 1354 |
+
β
|
| 1355 |
+
��
|
| 1356 |
+
+ µc
|
| 1357 |
+
3
|
| 1358 |
+
�π2
|
| 1359 |
+
β2 − 2π3l
|
| 1360 |
+
β3 coth
|
| 1361 |
+
�πl
|
| 1362 |
+
β
|
| 1363 |
+
��
|
| 1364 |
+
+ O(µ2).
|
| 1365 |
+
(3.30)
|
| 1366 |
+
In the “low temperature” limit β ≫ l, up to the first order, the entanglement entropy
|
| 1367 |
+
becomes
|
| 1368 |
+
SEE-low =c
|
| 1369 |
+
3 log
|
| 1370 |
+
� β
|
| 1371 |
+
πǫ sinh
|
| 1372 |
+
�πl
|
| 1373 |
+
β
|
| 1374 |
+
��
|
| 1375 |
+
+ µc
|
| 1376 |
+
3
|
| 1377 |
+
�π2
|
| 1378 |
+
β2
|
| 1379 |
+
�
|
| 1380 |
+
+ O(µ2).
|
| 1381 |
+
(3.31)
|
| 1382 |
+
In the “high temperature” limit β ≪ l, the first order corrected entanglement entropy is
|
| 1383 |
+
SEE-high =c
|
| 1384 |
+
3 log
|
| 1385 |
+
� β
|
| 1386 |
+
πǫ sinh
|
| 1387 |
+
�πl
|
| 1388 |
+
β
|
| 1389 |
+
��
|
| 1390 |
+
− 2µc
|
| 1391 |
+
3
|
| 1392 |
+
π3l
|
| 1393 |
+
β3 coth
|
| 1394 |
+
�πl
|
| 1395 |
+
β
|
| 1396 |
+
�
|
| 1397 |
+
+ O(µ2).
|
| 1398 |
+
(3.32)
|
| 1399 |
+
The “high temperature” result coincides with the result obtained from both boundary field
|
| 1400 |
+
side and AdS3 with cutoff perspective [22, 24]2. We apply the Wlison line approach to the
|
| 1401 |
+
T ¯T-deformed AdS3 and obtain the holographic entanglement entropy formula, which agree
|
| 1402 |
+
with the perturbation results. However, the “low temperature” result is different from the
|
| 1403 |
+
cutoff AdS3 perspective.
|
| 1404 |
+
We are more interested in the non-perturbative result.
|
| 1405 |
+
In order to make sure the
|
| 1406 |
+
entanglement entropy is real, we have
|
| 1407 |
+
− β2
|
| 1408 |
+
4π2 < µ,
|
| 1409 |
+
(3.34)
|
| 1410 |
+
which means the holographic description maybe lose when µ out of this region. For µ > 0
|
| 1411 |
+
the entanglement entropy is always real. In the following discussion, we just consider the
|
| 1412 |
+
µ > 0 case, which also corresponds to the cutoff perspective. For a fixed temperature, we
|
| 1413 |
+
can consider the entanglement entropy for large deformation parameter
|
| 1414 |
+
SEE = c
|
| 1415 |
+
3 log
|
| 1416 |
+
� l
|
| 1417 |
+
2ǫ
|
| 1418 |
+
�
|
| 1419 |
+
+ βc
|
| 1420 |
+
6π
|
| 1421 |
+
1
|
| 1422 |
+
õ +
|
| 1423 |
+
�cl2
|
| 1424 |
+
72 − β2c
|
| 1425 |
+
24π2
|
| 1426 |
+
� 1
|
| 1427 |
+
µ + O
|
| 1428 |
+
� 1
|
| 1429 |
+
µ
|
| 1430 |
+
�
|
| 1431 |
+
.
|
| 1432 |
+
(3.35)
|
| 1433 |
+
The leading order coincides with the entanglement entropy of the zero temperature CFT
|
| 1434 |
+
with the length of interval l/2. This result implies the T ¯T deformation behaves like the
|
| 1435 |
+
2Note that our convention is different from Ref. [22]. In [22], the deformation parameter is related to
|
| 1436 |
+
the radial cutoff r2
|
| 1437 |
+
c =
|
| 1438 |
+
6
|
| 1439 |
+
µπc, while we have r2
|
| 1440 |
+
c = 1
|
| 1441 |
+
µ in this paper. Therefore, if one replaces µ by µπc
|
| 1442 |
+
6 , the
|
| 1443 |
+
equation (3.32) becomes
|
| 1444 |
+
SEE-high = c
|
| 1445 |
+
3 log
|
| 1446 |
+
� β
|
| 1447 |
+
πǫ sinh
|
| 1448 |
+
�πl
|
| 1449 |
+
β
|
| 1450 |
+
��
|
| 1451 |
+
− µπ4c2l
|
| 1452 |
+
9β3
|
| 1453 |
+
coth
|
| 1454 |
+
�πl
|
| 1455 |
+
β
|
| 1456 |
+
�
|
| 1457 |
+
.
|
| 1458 |
+
(3.33)
|
| 1459 |
+
which is exactly the result in [22].
|
| 1460 |
+
17
|
| 1461 |
+
|
| 1462 |
+
free theory at the large µ limit. The similar feature was also found in [69, 70], in which
|
| 1463 |
+
the authors shown that at the level of the equations of motion the left- and right-chiral
|
| 1464 |
+
sectors of T ¯T deformed free theories are decoupled when the deformation parameter is
|
| 1465 |
+
sent to infinity. Moreover, the Casini-Huerta entropic c-function [71] for the T ¯T deformed
|
| 1466 |
+
entanglement entropy is
|
| 1467 |
+
C(l, µ) = ldSEE
|
| 1468 |
+
dl
|
| 1469 |
+
=
|
| 1470 |
+
πcl
|
| 1471 |
+
3
|
| 1472 |
+
�
|
| 1473 |
+
β2 + 4π2µ
|
| 1474 |
+
coth
|
| 1475 |
+
�
|
| 1476 |
+
πl
|
| 1477 |
+
�
|
| 1478 |
+
β2 + 4π2µ
|
| 1479 |
+
�
|
| 1480 |
+
,
|
| 1481 |
+
(3.36)
|
| 1482 |
+
which is always positive, and does not depend on the ultraviolet regulator. We also find
|
| 1483 |
+
that
|
| 1484 |
+
∂C(l, µ)
|
| 1485 |
+
∂l
|
| 1486 |
+
= πc
|
| 1487 |
+
3
|
| 1488 |
+
|
| 1489 |
+
|
| 1490 |
+
|
| 1491 |
+
|
| 1492 |
+
coth
|
| 1493 |
+
�
|
| 1494 |
+
πl
|
| 1495 |
+
√
|
| 1496 |
+
β2+4π2µ
|
| 1497 |
+
�
|
| 1498 |
+
�
|
| 1499 |
+
β2 + 4π2µ
|
| 1500 |
+
−
|
| 1501 |
+
πlcsch2
|
| 1502 |
+
�
|
| 1503 |
+
πl
|
| 1504 |
+
√
|
| 1505 |
+
β2+4π2µ
|
| 1506 |
+
�
|
| 1507 |
+
β2 + 4π2µ
|
| 1508 |
+
|
| 1509 |
+
|
| 1510 |
+
|
| 1511 |
+
≥ 0,
|
| 1512 |
+
(3.37)
|
| 1513 |
+
which implies the entropic c-function is non–decreasing along the renormalization group
|
| 1514 |
+
flow towards the ultraviolet. The similar result was also found in single trace T ¯T deforma-
|
| 1515 |
+
tion [72].
|
| 1516 |
+
3.3
|
| 1517 |
+
Thermal entropy
|
| 1518 |
+
The thermal entropy of the deformed BTZ black hole can also be calculated from the Wilson
|
| 1519 |
+
loop. As discussed in section 2.4, the thermal entropy can be obtained by diagonalizing the
|
| 1520 |
+
induced gauge connections aθ and ¯aθ in (3.17) and (3.18). For the deformed BTZ black
|
| 1521 |
+
hole, the diagonalized gauge connections read
|
| 1522 |
+
λθ = 2
|
| 1523 |
+
�
|
| 1524 |
+
Q(1 − µ ¯Q)L0 = 2
|
| 1525 |
+
√
|
| 1526 |
+
LL0,
|
| 1527 |
+
(3.38)
|
| 1528 |
+
¯λθ = −2
|
| 1529 |
+
�
|
| 1530 |
+
¯Q(1 − µQ)L0 = −2
|
| 1531 |
+
�
|
| 1532 |
+
¯LL0.
|
| 1533 |
+
(3.39)
|
| 1534 |
+
Finally, according to (2.67), we obtain the thermal entropy
|
| 1535 |
+
S = 2π
|
| 1536 |
+
�c
|
| 1537 |
+
6L + 2π
|
| 1538 |
+
�c
|
| 1539 |
+
6
|
| 1540 |
+
¯L,
|
| 1541 |
+
(3.40)
|
| 1542 |
+
which is the same as the BTZ black hole entropy. This result means the black hole entropy
|
| 1543 |
+
does not change under the T ¯T deformation. On the field theory side, the degeneracy of
|
| 1544 |
+
states do not change under the T ¯T flow.
|
| 1545 |
+
For the deformed theory, the thermal entropy should be expressed in terms of the
|
| 1546 |
+
deformed energy. In case of Q = ¯Q, the entropy can be written as
|
| 1547 |
+
S = 4π
|
| 1548 |
+
�c
|
| 1549 |
+
6Q(1 − µQ) = 2π
|
| 1550 |
+
�c
|
| 1551 |
+
6Eµ(2 − µEµ),
|
| 1552 |
+
(3.41)
|
| 1553 |
+
18
|
| 1554 |
+
|
| 1555 |
+
which agrees with the result in [3]. The thermal entropy can help us to define the tempera-
|
| 1556 |
+
ture in the T ¯T-deformed theory. In fact, according to the first law of thermodynamics, the
|
| 1557 |
+
temperature can be determined by
|
| 1558 |
+
T = ∂Eµ
|
| 1559 |
+
∂S =
|
| 1560 |
+
�
|
| 1561 |
+
6
|
| 1562 |
+
c
|
| 1563 |
+
�
|
| 1564 |
+
Q(1 − µQ)
|
| 1565 |
+
π(1 − 2µQ) ∼
|
| 1566 |
+
�
|
| 1567 |
+
Q(1 − µQ)
|
| 1568 |
+
π(1 − 2µQ) ,
|
| 1569 |
+
(3.42)
|
| 1570 |
+
where we have used the convention k = c/6 = 1 in the definiton of temperature. This is
|
| 1571 |
+
actually the temperature we have used in (3.28).
|
| 1572 |
+
3.4
|
| 1573 |
+
Two intervals entanglement entropy
|
| 1574 |
+
We proceed to consider the entanglement entropy of the system consists of two disjoint
|
| 1575 |
+
intervals. For the single interval case, we have shown that the entanglement entropy is the
|
| 1576 |
+
Wilson line or length of geodesic in AdS3 with ending points on the spatial infinity boundary
|
| 1577 |
+
for both Brown-Henneaux boundary condition and mixed boundary condition. According
|
| 1578 |
+
to Ryu-Takayanagi’s proposal [59, 60], we have two choices for how to draw the geodesics
|
| 1579 |
+
that end on the ending points of two intervals, which are shown in Figure 1. For each choice,
|
| 1580 |
+
the two intervals entanglement entropy decouples into a sum of single interval cases. The
|
| 1581 |
+
Figure 1:
|
| 1582 |
+
The two minimal surfaces for the two intervals boundary region. We consider the
|
| 1583 |
+
two intervals have the same length l separated by x. The left is the disconnected case, and
|
| 1584 |
+
the right is the connected case.
|
| 1585 |
+
two intervals holographic entanglement entropy should be the minimal one of them
|
| 1586 |
+
SEE-2 = min{Sdis, Scon}.
|
| 1587 |
+
(3.43)
|
| 1588 |
+
This implies that there are two phases of the entanglement entropy. It turns out that there
|
| 1589 |
+
actually exist a phase transition between the connected and disconnected phase [73].
|
| 1590 |
+
We first brief review the zero temperature entanglement entropy of two disjoint intervals.
|
| 1591 |
+
We assume the two intervals have the same length l separated by x, described in Figure 1.
|
| 1592 |
+
Then the difference between two phases is
|
| 1593 |
+
∆S = Sdis − Scon = c
|
| 1594 |
+
3 log
|
| 1595 |
+
�
|
| 1596 |
+
l2
|
| 1597 |
+
x(2l + x)
|
| 1598 |
+
�
|
| 1599 |
+
.
|
| 1600 |
+
(3.44)
|
| 1601 |
+
19
|
| 1602 |
+
|
| 1603 |
+
COF6CI60One can find the phase transition critical point is determined by the cross-ratio
|
| 1604 |
+
η =
|
| 1605 |
+
l2
|
| 1606 |
+
(l + x)2 = 1
|
| 1607 |
+
2
|
| 1608 |
+
or
|
| 1609 |
+
x
|
| 1610 |
+
l =
|
| 1611 |
+
√
|
| 1612 |
+
2 − 1.
|
| 1613 |
+
(3.45)
|
| 1614 |
+
For the finite temperature case, the similar phase transition was shown in [74, 75]. However,
|
| 1615 |
+
there is no quantity like cross-ratio to illustrate the critical point.
|
| 1616 |
+
Now we would like to investigate the similar feature for the T ¯T deformed entanglement
|
| 1617 |
+
entropy. For the different choices of Wilson lines or RT surfaces, we have
|
| 1618 |
+
Sdis =c
|
| 1619 |
+
3 log
|
| 1620 |
+
|
| 1621 |
+
|
| 1622 |
+
π2µ + 1
|
| 1623 |
+
2β
|
| 1624 |
+
��
|
| 1625 |
+
β2 + 4π2µ + β
|
| 1626 |
+
�
|
| 1627 |
+
π2ǫ2
|
| 1628 |
+
sinh2
|
| 1629 |
+
�
|
| 1630 |
+
πl
|
| 1631 |
+
�
|
| 1632 |
+
β2 + 4µπ2
|
| 1633 |
+
�
|
| 1634 |
+
,
|
| 1635 |
+
(3.46)
|
| 1636 |
+
Scon =c
|
| 1637 |
+
3 log
|
| 1638 |
+
|
| 1639 |
+
|
| 1640 |
+
π2µ + 1
|
| 1641 |
+
2β
|
| 1642 |
+
��
|
| 1643 |
+
β2 + 4π2µ + β
|
| 1644 |
+
�
|
| 1645 |
+
π2ǫ2
|
| 1646 |
+
sinh
|
| 1647 |
+
�
|
| 1648 |
+
πx
|
| 1649 |
+
�
|
| 1650 |
+
β2 + 4µπ2
|
| 1651 |
+
�
|
| 1652 |
+
sinh
|
| 1653 |
+
�
|
| 1654 |
+
π(2l + x)
|
| 1655 |
+
�
|
| 1656 |
+
β2 + 4µπ2
|
| 1657 |
+
�
|
| 1658 |
+
.
|
| 1659 |
+
(3.47)
|
| 1660 |
+
The two intervals entanglement entropy is the minimal one of them. In order to determine
|
| 1661 |
+
which is the minimal one and under what conditions the phase transition happens, we
|
| 1662 |
+
consider the difference between two RT surfaces
|
| 1663 |
+
∆S =Sdis − Scon = c
|
| 1664 |
+
3 log
|
| 1665 |
+
|
| 1666 |
+
|
| 1667 |
+
|
| 1668 |
+
|
| 1669 |
+
sinh2
|
| 1670 |
+
�
|
| 1671 |
+
πl
|
| 1672 |
+
√
|
| 1673 |
+
β2+4µπ2
|
| 1674 |
+
�
|
| 1675 |
+
sinh
|
| 1676 |
+
�
|
| 1677 |
+
πx
|
| 1678 |
+
√
|
| 1679 |
+
β2+4µπ2
|
| 1680 |
+
�
|
| 1681 |
+
sinh
|
| 1682 |
+
�
|
| 1683 |
+
π(2l+x)
|
| 1684 |
+
√
|
| 1685 |
+
β2+4µπ2
|
| 1686 |
+
�
|
| 1687 |
+
|
| 1688 |
+
|
| 1689 |
+
|
| 1690 |
+
.
|
| 1691 |
+
(3.48)
|
| 1692 |
+
This quantity is also related to the mutual information between two disjoint subsystems.
|
| 1693 |
+
From (3.48), we learn that ∆S behaves like the undeformed one but with different tem-
|
| 1694 |
+
perature. We first consider the low temperature and high temperature limit. For the low
|
| 1695 |
+
temperature limit β ≫ 1, we have
|
| 1696 |
+
∆S = c
|
| 1697 |
+
3 log
|
| 1698 |
+
�
|
| 1699 |
+
l2
|
| 1700 |
+
x(2l + x)
|
| 1701 |
+
�
|
| 1702 |
+
+ O
|
| 1703 |
+
�
|
| 1704 |
+
1/β2�
|
| 1705 |
+
.
|
| 1706 |
+
(3.49)
|
| 1707 |
+
The leading order is exactly the zero temperature case.
|
| 1708 |
+
The phase transition occur at
|
| 1709 |
+
x/l =
|
| 1710 |
+
√
|
| 1711 |
+
2−1 and does not depend on the deformation parameter. For the high temperature
|
| 1712 |
+
limit β ≪ 1, we have
|
| 1713 |
+
∆S = c
|
| 1714 |
+
3 log
|
| 1715 |
+
|
| 1716 |
+
|
| 1717 |
+
cosh
|
| 1718 |
+
�
|
| 1719 |
+
l
|
| 1720 |
+
õ
|
| 1721 |
+
�
|
| 1722 |
+
− 1
|
| 1723 |
+
cosh
|
| 1724 |
+
�
|
| 1725 |
+
l+x
|
| 1726 |
+
õ
|
| 1727 |
+
�
|
| 1728 |
+
− cosh
|
| 1729 |
+
�
|
| 1730 |
+
l
|
| 1731 |
+
õ
|
| 1732 |
+
�
|
| 1733 |
+
|
| 1734 |
+
+ O
|
| 1735 |
+
�
|
| 1736 |
+
β2�
|
| 1737 |
+
.
|
| 1738 |
+
(3.50)
|
| 1739 |
+
In this case, the critical point depends on the deformation parameter.
|
| 1740 |
+
We find it is convenient to introduce the following parameters
|
| 1741 |
+
˜l = x
|
| 1742 |
+
l ,
|
| 1743 |
+
˜x = x
|
| 1744 |
+
β ,
|
| 1745 |
+
˜µ = µ
|
| 1746 |
+
β2.
|
| 1747 |
+
(3.51)
|
| 1748 |
+
20
|
| 1749 |
+
|
| 1750 |
+
In terms of the new parameters, the ∆S reduces to
|
| 1751 |
+
∆S = c
|
| 1752 |
+
3 log
|
| 1753 |
+
|
| 1754 |
+
|
| 1755 |
+
|
| 1756 |
+
|
| 1757 |
+
sinh2
|
| 1758 |
+
�
|
| 1759 |
+
π˜x
|
| 1760 |
+
˜l√
|
| 1761 |
+
1+4˜µπ2
|
| 1762 |
+
�
|
| 1763 |
+
sinh
|
| 1764 |
+
�
|
| 1765 |
+
π˜x
|
| 1766 |
+
√
|
| 1767 |
+
1+4˜µπ2
|
| 1768 |
+
�
|
| 1769 |
+
sinh
|
| 1770 |
+
�
|
| 1771 |
+
π(2+˜l)˜x
|
| 1772 |
+
˜l√
|
| 1773 |
+
1+4˜µπ2
|
| 1774 |
+
�
|
| 1775 |
+
|
| 1776 |
+
|
| 1777 |
+
|
| 1778 |
+
,
|
| 1779 |
+
(3.52)
|
| 1780 |
+
in which the temperature is implicit. We plot the critical lines ∆S = 0 in (˜l, ˜x) plane for
|
| 1781 |
+
different deformation parameters in Figure 2. Then we consider some special limit about
|
| 1782 |
+
0.0
|
| 1783 |
+
0.1
|
| 1784 |
+
0.2
|
| 1785 |
+
0.3
|
| 1786 |
+
0.4
|
| 1787 |
+
0.5
|
| 1788 |
+
0.00
|
| 1789 |
+
0.05
|
| 1790 |
+
0.10
|
| 1791 |
+
0.15
|
| 1792 |
+
0.20
|
| 1793 |
+
l
|
| 1794 |
+
∼
|
| 1795 |
+
x
|
| 1796 |
+
∼
|
| 1797 |
+
Critical lines: ΔS
|
| 1798 |
+
=0
|
| 1799 |
+
μ∼=-0.02
|
| 1800 |
+
μ∼=-0.01
|
| 1801 |
+
μ∼=0
|
| 1802 |
+
μ∼=0.01
|
| 1803 |
+
μ∼=0.02
|
| 1804 |
+
μ∼=0.03
|
| 1805 |
+
μ∼=0.4
|
| 1806 |
+
Figure 2:
|
| 1807 |
+
Plot the critical lines ∆S = 0 in ˜l − ˜x plane for different deformation parameters.
|
| 1808 |
+
The critical lines separate the connected phase (left side) and disconnected phase (right
|
| 1809 |
+
side).
|
| 1810 |
+
The green line corresponds to the undeformed case.
|
| 1811 |
+
The dashed line denotes the
|
| 1812 |
+
zero temperature critical line ˜l =
|
| 1813 |
+
√
|
| 1814 |
+
2 − 1. The critical lines tend to the zero temperature case
|
| 1815 |
+
with the increase of deformation parameter.
|
| 1816 |
+
the critical lines. For ˜x ≪ 1, we have
|
| 1817 |
+
∆S = c
|
| 1818 |
+
3
|
| 1819 |
+
�
|
| 1820 |
+
log
|
| 1821 |
+
�
|
| 1822 |
+
1
|
| 1823 |
+
˜l2 + 2˜l
|
| 1824 |
+
�
|
| 1825 |
+
− π2(˜l + 1)2˜x2
|
| 1826 |
+
3˜l2 (1 + 4˜µπ2)
|
| 1827 |
+
�
|
| 1828 |
+
+ O
|
| 1829 |
+
�
|
| 1830 |
+
˜x3�
|
| 1831 |
+
.
|
| 1832 |
+
(3.53)
|
| 1833 |
+
The leading order is just the zero temperature case and also does not depend on the
|
| 1834 |
+
deformation parameter. This result can be seen from Figure 2 that the critical lines coincide
|
| 1835 |
+
with the zero temperature one for small ˜x.
|
| 1836 |
+
It is interesting to investigate the µ dependence of phase transition. For the small ˜µ, there
|
| 1837 |
+
is actually exist a phase transition, which has been discussed in [24] using the perturbative
|
| 1838 |
+
method. We can also see from Figure 2 the critical line is around the undeformed case for
|
| 1839 |
+
21
|
| 1840 |
+
|
| 1841 |
+
both ˜µ < 0 and ˜µ > 0. For the ˜µ ≫ 1 region, we have
|
| 1842 |
+
∆S = c
|
| 1843 |
+
3 log
|
| 1844 |
+
�
|
| 1845 |
+
1
|
| 1846 |
+
˜l2 + 2˜l
|
| 1847 |
+
�
|
| 1848 |
+
− c(˜l + 1)2˜x2
|
| 1849 |
+
36˜l2˜µ
|
| 1850 |
+
+ O(1/˜µ2).
|
| 1851 |
+
(3.54)
|
| 1852 |
+
The leading order is the just the zero temperature case. One can also see from Figure 2
|
| 1853 |
+
that the critical lines would become the zero temperature one as the increase of deformation
|
| 1854 |
+
parameters. This result implies the T ¯T deformed theory becomes a decoupled free theory
|
| 1855 |
+
for large µ limit [69, 70].
|
| 1856 |
+
These results show that there still exist the phase transition for two intervals entangle-
|
| 1857 |
+
ment entropy under T ¯T deformation. The transition point is depends on the deformation
|
| 1858 |
+
parameter. The T ¯T deformation does not introduce new phases. For large deformation
|
| 1859 |
+
parameter, the the critical point is the same as zero temperature CFT case, it would be
|
| 1860 |
+
interesting to study this feature from the field theoretic results.
|
| 1861 |
+
4
|
| 1862 |
+
Geodesic line method
|
| 1863 |
+
In this section we re-compute the holographic entanglement entropy in BTZ background
|
| 1864 |
+
with mix boundary condition using RT formula, i.e., identifying the holographic entan-
|
| 1865 |
+
glement entropy as the geodesic distance. The results turn out to be consistent with the
|
| 1866 |
+
computation via Wilson line method.
|
| 1867 |
+
The metric of BTZ black hole with mass M and angular momentum J takes the form
|
| 1868 |
+
(2.48).
|
| 1869 |
+
3 For simplicity we consider the case where the black hole being static J = 0. It
|
| 1870 |
+
follows from (3.6) that the deformed parameters Lµ, ¯Lµ are constant and satisfy
|
| 1871 |
+
Lµ = ¯Lµ = 1 − µM ± √1 − 2µM
|
| 1872 |
+
Mµ2
|
| 1873 |
+
,
|
| 1874 |
+
(4.1)
|
| 1875 |
+
where only the solution with “-” is well defined in µ → 0 limit. We start from the following
|
| 1876 |
+
metric
|
| 1877 |
+
ds2 =dr2
|
| 1878 |
+
r2 + r2�
|
| 1879 |
+
dzd¯z + 1
|
| 1880 |
+
r2(Lµdz2 + ¯Lµd¯z2) + 1
|
| 1881 |
+
r4Lµ ¯Lµdzd¯z
|
| 1882 |
+
�
|
| 1883 |
+
,
|
| 1884 |
+
(4.2)
|
| 1885 |
+
in which we have replaced the L, ¯L by Lµ, ¯Lµ in the BTZ black hole solution, so that we
|
| 1886 |
+
can obtain the deformed BTZ only by using the coordinate transformation. Let z = x + iy,
|
| 1887 |
+
and define
|
| 1888 |
+
r =
|
| 1889 |
+
�
|
| 1890 |
+
Lµeρ,
|
| 1891 |
+
x =
|
| 1892 |
+
¯x
|
| 1893 |
+
�
|
| 1894 |
+
4Lµ
|
| 1895 |
+
,
|
| 1896 |
+
y =
|
| 1897 |
+
¯y
|
| 1898 |
+
�
|
| 1899 |
+
4Lµ
|
| 1900 |
+
,
|
| 1901 |
+
(4.3)
|
| 1902 |
+
then the metric becomes the global AdS3
|
| 1903 |
+
ds2 =dρ2 + cosh2 ρd¯x2 + sinh2 ρd¯y2,
|
| 1904 |
+
(4.4)
|
| 1905 |
+
3We follow the convention in [41], and set 4πG = 1, l = 1 and R = 2π (periodicity of spatial dimension)
|
| 1906 |
+
in their paper. We also use r which is related with the radial coordinate ρ in [41] as r2 = 1/ρ. The cutoff
|
| 1907 |
+
in [41] locates at ρ = ρc = µ, then in r-coordinate, r0 = rc = 1/√µ.
|
| 1908 |
+
22
|
| 1909 |
+
|
| 1910 |
+
where ¯y is treated as the Euclidean time and ¯x the spatial coordinate. The requirement
|
| 1911 |
+
of no conical singularity in ρ − ¯y plane implies the identification ¯y ∼ ¯y + 2π, where the
|
| 1912 |
+
periodicity is related with the temperature for BTZ black hole. It is convenient to work in
|
| 1913 |
+
embedding coordinate
|
| 1914 |
+
Y 0 = cosh ρ cosh ¯x,
|
| 1915 |
+
Y 3 = cosh ρ sinh ¯x,
|
| 1916 |
+
Y 1 = sinh ρ sin ¯y,
|
| 1917 |
+
Y 2 = sinh ρ cos ¯y.
|
| 1918 |
+
(4.5)
|
| 1919 |
+
In this coordinate system the BTZ black hole is a hypersurface −(Y 0)2 + (Y 3)2 + (Y 1)2 +
|
| 1920 |
+
(Y 2)2 = −1 in the background ds2 = −d(Y 0)2 + d(Y 1)2 + d(Y 2)2 + d(Y 3)2. The geodesic
|
| 1921 |
+
distant d between two points Y a
|
| 1922 |
+
1 , Y b
|
| 1923 |
+
2 is simply computed by
|
| 1924 |
+
cosh d = −Y1 · Y2 = Y 0
|
| 1925 |
+
1 Y 0
|
| 1926 |
+
2 − Y 1
|
| 1927 |
+
1 Y 1
|
| 1928 |
+
2 − Y 2
|
| 1929 |
+
1 Y 2
|
| 1930 |
+
2 − Y 3
|
| 1931 |
+
1 Y 3
|
| 1932 |
+
2 .
|
| 1933 |
+
(4.6)
|
| 1934 |
+
The deformed metric corresponding to T ¯T deformation can be obtained by transforma-
|
| 1935 |
+
tion of
|
| 1936 |
+
dz =
|
| 1937 |
+
1
|
| 1938 |
+
1 − µ2Lµ ¯Lµ
|
| 1939 |
+
(dw − µ ¯Lµd ¯w),
|
| 1940 |
+
d¯z =
|
| 1941 |
+
1
|
| 1942 |
+
1 − µ2Lµ ¯Lµ
|
| 1943 |
+
(d ¯w − µLµdw).
|
| 1944 |
+
(4.7)
|
| 1945 |
+
In the present case, (4.7) can be solved straightforwardly as
|
| 1946 |
+
z =
|
| 1947 |
+
1
|
| 1948 |
+
1 − µ2Lµ ¯Lµ
|
| 1949 |
+
(w − µ ¯Lµ ¯w),
|
| 1950 |
+
¯z =
|
| 1951 |
+
1
|
| 1952 |
+
1 − µ2Lµ ¯Lµ
|
| 1953 |
+
( ¯w − µLµw).
|
| 1954 |
+
(4.8)
|
| 1955 |
+
And its inverse
|
| 1956 |
+
w = z + µ ¯Lµ¯z,
|
| 1957 |
+
¯w = µLµz + ¯z,
|
| 1958 |
+
(4.9)
|
| 1959 |
+
where w = θ + it, ¯w = θ − it. From the periodicity of ¯y discussed above, we can work out
|
| 1960 |
+
the periodic of t, which is
|
| 1961 |
+
t ∼ t + 2π(1 − µLµ)
|
| 1962 |
+
�
|
| 1963 |
+
4Lµ
|
| 1964 |
+
= t + β,
|
| 1965 |
+
β = π(1 − 2µQ)
|
| 1966 |
+
�
|
| 1967 |
+
Q(1 − µQ)
|
| 1968 |
+
,
|
| 1969 |
+
(4.10)
|
| 1970 |
+
where the β is the inverse temperature of deformed black hole, as well as the inverse
|
| 1971 |
+
temperature of the T ¯T deformed CFT.
|
| 1972 |
+
To compute the HEE of a single interval, we consider two endding points on the boundary
|
| 1973 |
+
locate at (r1, t1, θ1) = (
|
| 1974 |
+
�
|
| 1975 |
+
Lµeρ0, 0, 0) and (r2, t2, θ2) = (
|
| 1976 |
+
�
|
| 1977 |
+
Lµeρ0, 0, l) respectively.
|
| 1978 |
+
Then
|
| 1979 |
+
w1 = ¯w1 = 0, w2 = ¯w2 = l
|
| 1980 |
+
z1 = ¯z1 = 0,
|
| 1981 |
+
z2 = ¯z2 =
|
| 1982 |
+
l
|
| 1983 |
+
1 + µLµ
|
| 1984 |
+
.
|
| 1985 |
+
(4.11)
|
| 1986 |
+
In terms of embedding coordinates
|
| 1987 |
+
Y 0
|
| 1988 |
+
1 = cosh ρ0,
|
| 1989 |
+
Y 3
|
| 1990 |
+
1 = 0,
|
| 1991 |
+
Y 1
|
| 1992 |
+
1 =
|
| 1993 |
+
0,
|
| 1994 |
+
Y 2
|
| 1995 |
+
1 = sinh ρ0,
|
| 1996 |
+
(4.12)
|
| 1997 |
+
and
|
| 1998 |
+
Y 0
|
| 1999 |
+
2 = cosh ρ0 cosh
|
| 2000 |
+
�
|
| 2001 |
+
4Lµz2,
|
| 2002 |
+
Y 3
|
| 2003 |
+
2 = cosh ρ sinh
|
| 2004 |
+
�
|
| 2005 |
+
4Lµz2,
|
| 2006 |
+
Y 1
|
| 2007 |
+
2 = 0,
|
| 2008 |
+
Y 2
|
| 2009 |
+
2 = sinh ρ0.
|
| 2010 |
+
(4.13)
|
| 2011 |
+
23
|
| 2012 |
+
|
| 2013 |
+
Finally using (4.6), the geodesic distance between the points is
|
| 2014 |
+
cosh d = cosh2 ρ0 cosh
|
| 2015 |
+
�
|
| 2016 |
+
4Lµz2 − sinh2 ρ0
|
| 2017 |
+
=
|
| 2018 |
+
Q
|
| 2019 |
+
2r2
|
| 2020 |
+
0(1 − µQ) sinh2 l
|
| 2021 |
+
�
|
| 2022 |
+
Q(1 − µQ) + cosh2 l
|
| 2023 |
+
�
|
| 2024 |
+
Q(1 − µQ)
|
| 2025 |
+
+ r2
|
| 2026 |
+
0(1 − µQ)
|
| 2027 |
+
2Q
|
| 2028 |
+
sinh2 l
|
| 2029 |
+
�
|
| 2030 |
+
Q(1 − µQ),
|
| 2031 |
+
(4.14)
|
| 2032 |
+
where we made the replacement
|
| 2033 |
+
�
|
| 2034 |
+
Lµz2 = l
|
| 2035 |
+
�
|
| 2036 |
+
Q(1 − µQ). It follows that the HEE is
|
| 2037 |
+
SEE = 1
|
| 2038 |
+
4G cosh−1
|
| 2039 |
+
�
|
| 2040 |
+
Q
|
| 2041 |
+
2r2
|
| 2042 |
+
0(1 − µQ) sinh2 l
|
| 2043 |
+
�
|
| 2044 |
+
Q(1 − µQ) + cosh2 l
|
| 2045 |
+
�
|
| 2046 |
+
Q(1 − µQ)
|
| 2047 |
+
+ r2
|
| 2048 |
+
0(1 − µQ)
|
| 2049 |
+
2Q
|
| 2050 |
+
sinh2 l
|
| 2051 |
+
�
|
| 2052 |
+
Q(1 − µQ)
|
| 2053 |
+
�
|
| 2054 |
+
.
|
| 2055 |
+
(4.15)
|
| 2056 |
+
For the r0 → ∞ limit, note the definition of temperature (4.10) and relation 1/4G = c/6,
|
| 2057 |
+
we arrive at
|
| 2058 |
+
SEE = c
|
| 2059 |
+
3 log
|
| 2060 |
+
��
|
| 2061 |
+
β2 + 4µπ2 + β
|
| 2062 |
+
2πǫ
|
| 2063 |
+
sinh
|
| 2064 |
+
�
|
| 2065 |
+
πl
|
| 2066 |
+
�
|
| 2067 |
+
β2 + 4µπ2
|
| 2068 |
+
��
|
| 2069 |
+
,
|
| 2070 |
+
ǫ = 1
|
| 2071 |
+
r0
|
| 2072 |
+
.
|
| 2073 |
+
(4.16)
|
| 2074 |
+
This is coincide with (3.29) in the case of non-rotating BTZ black hole. We obtain the same
|
| 2075 |
+
holographic entanglement entropy formula by calculating the RT surface in the deformed
|
| 2076 |
+
BTZ black hole.
|
| 2077 |
+
5
|
| 2078 |
+
Conclusion and discussion
|
| 2079 |
+
The T ¯T deformed CFT was proposed dual to the AdS3 with a certain mixed boundary
|
| 2080 |
+
condition. The AdS3 with mixed boundary condition or the T ¯T-deformed AdS3 geometry
|
| 2081 |
+
can be obtained from the Ban˜ados geometry using the dynamical change of coordinates.
|
| 2082 |
+
In this paper, we studied the holographic entanglement entropy in the T ¯T-deformed AdS3
|
| 2083 |
+
under this situation. In terms of Chern-Simons form, we derived the exact holographic
|
| 2084 |
+
entanglement entropy formula using the Wilson line technique. For the zero temperature
|
| 2085 |
+
case, the entanglement entropy turned out unchanged under the T ¯T deformation. For the
|
| 2086 |
+
finite temperature case, we calculated the Wilson line with ending points on the boundary
|
| 2087 |
+
of deformed AdS3. After identifying the deformed temperature and length of interval on
|
| 2088 |
+
the boundary, we found the Wilson line lead to holographic entanglement entropy formula,
|
| 2089 |
+
which is closely related to the entanglement entropy in T ¯T-deformed CFTs.
|
| 2090 |
+
The same
|
| 2091 |
+
formula was also obtained by calculating the RT surface in the T ¯T-deformed BTZ black
|
| 2092 |
+
hole. The deformed entanglement entropy formula can reproduce the known perturbative
|
| 2093 |
+
results, which were obtained from both field theory and cutoff AdS3. We also showed that
|
| 2094 |
+
the entropic c-function is always positive and non–decreasing along the renormalization
|
| 2095 |
+
24
|
| 2096 |
+
|
| 2097 |
+
group flow towards the ultraviolet. For the non-perturbative region, our results show that
|
| 2098 |
+
the entanglement entropy behaves like entanglement entropy of CFT at zero temperature.
|
| 2099 |
+
Moreover, we also considered the two intervals entanglement entropy and found there still
|
| 2100 |
+
exist a certain phase transition between disconnected and connected phase. It turned out
|
| 2101 |
+
that the critical point for the phase transition depends on the deformation parameters. The
|
| 2102 |
+
critical point is sensitive to the deformation parameter for the high temperature region. But
|
| 2103 |
+
the critical point becomes independent of deformation parameter for the low temperature
|
| 2104 |
+
region. For a fixed temperature, the critical point tends to the zero temperature case at
|
| 2105 |
+
large deformation parameter, which is shown in Figure 2.
|
| 2106 |
+
Finally, we want to point out that the holographic entanglement entropy formula was
|
| 2107 |
+
derived from the holographic study and the formula agrees with the pertubative result.
|
| 2108 |
+
However, we still need an exact calculation from T ¯T-deformed CFTs. In addition, since we
|
| 2109 |
+
found the entanglement entropy behaves like a free CFT, it would be interesting to study
|
| 2110 |
+
the T ¯T deformation for large deformation parameter following [69, 70].
|
| 2111 |
+
Acknowledgements
|
| 2112 |
+
We are grateful to Song He for suggesting this topic. We would like to thank Yunfeng Jiang,
|
| 2113 |
+
Zhangcheng Liu, Hao Ouyang, Qiang Wen and Long Zhao for helpful discussions. This work
|
| 2114 |
+
is supported by the National Natural Science Foundation of China (No.12105113).
|
| 2115 |
+
A
|
| 2116 |
+
Conventions
|
| 2117 |
+
In this paper, we choose the following standard Lie algebra generators of sl(2, R)
|
| 2118 |
+
L−1 =
|
| 2119 |
+
� 0
|
| 2120 |
+
1
|
| 2121 |
+
0
|
| 2122 |
+
0
|
| 2123 |
+
�
|
| 2124 |
+
,
|
| 2125 |
+
L0 =
|
| 2126 |
+
� 1
|
| 2127 |
+
2
|
| 2128 |
+
0
|
| 2129 |
+
0
|
| 2130 |
+
−1
|
| 2131 |
+
2
|
| 2132 |
+
�
|
| 2133 |
+
,
|
| 2134 |
+
L1 =
|
| 2135 |
+
�
|
| 2136 |
+
0
|
| 2137 |
+
0
|
| 2138 |
+
−1
|
| 2139 |
+
0
|
| 2140 |
+
�
|
| 2141 |
+
,
|
| 2142 |
+
(A.1)
|
| 2143 |
+
whose commutators simplify to
|
| 2144 |
+
[La, Lb] = (a − b)La+b,
|
| 2145 |
+
a, b ∈ {0, ±1}.
|
| 2146 |
+
(A.2)
|
| 2147 |
+
The non-zero components of non-degenerate bilinear form are given by
|
| 2148 |
+
Tr(L0L0) = 1
|
| 2149 |
+
2,
|
| 2150 |
+
Tr(L−1L1) = Tr(L1L−1) = −1.
|
| 2151 |
+
(A.3)
|
| 2152 |
+
We use the following representation of the sl(2, R) Lie algebra, i.e. the highest-weight
|
| 2153 |
+
representation. The highest-weight state |h⟩ satisfies
|
| 2154 |
+
L1|h⟩ = 0,
|
| 2155 |
+
L0|h⟩ = h|h⟩.
|
| 2156 |
+
(A.4)
|
| 2157 |
+
There is an infinite tower of descendant states found by acting with the raising operator
|
| 2158 |
+
|h, n⟩ = (L−1)n|h⟩.
|
| 2159 |
+
(A.5)
|
| 2160 |
+
25
|
| 2161 |
+
|
| 2162 |
+
These states form an irreducible, unitary, and infinite-dimensional representation of sl(2, R).
|
| 2163 |
+
The quadratic Casimir operator of the algebra is
|
| 2164 |
+
C = 2L2
|
| 2165 |
+
0 − (L1L−1 + L−1L1),
|
| 2166 |
+
(A.6)
|
| 2167 |
+
which commutes with all the elements of the algebra. The expectation value of Casimir
|
| 2168 |
+
operator on highest-weight state is
|
| 2169 |
+
C = ⟨h|C|h⟩ = 2h2 − 2h.
|
| 2170 |
+
(A.7)
|
| 2171 |
+
B
|
| 2172 |
+
Wilson line defects
|
| 2173 |
+
The Wilson line as a probe in the bulk will produce a back-reaction in the bulk. To solve
|
| 2174 |
+
for this back-reaction, we consider the total action
|
| 2175 |
+
S = SCS[A] − SCS[ ¯A] + B + S(U; A, ¯A)C.
|
| 2176 |
+
(B.1)
|
| 2177 |
+
where B denotes the boundary term, the last term is the auxiliary action associated with
|
| 2178 |
+
the Wilson line. For different boundary conditions, there will be different boundary terms.
|
| 2179 |
+
In case of the T ¯T deformation, the boundary term turns out to be
|
| 2180 |
+
B = k
|
| 2181 |
+
4π
|
| 2182 |
+
�
|
| 2183 |
+
∂M
|
| 2184 |
+
d2x1
|
| 2185 |
+
µ
|
| 2186 |
+
��
|
| 2187 |
+
1 − 2µ
|
| 2188 |
+
�
|
| 2189 |
+
Tr(AθAθ) + Tr( ¯Aθ ¯Aθ)
|
| 2190 |
+
�
|
| 2191 |
+
+ µ2 �
|
| 2192 |
+
Tr(AθAθ) − Tr( ¯Aθ ¯Aθ)
|
| 2193 |
+
�2 − 1
|
| 2194 |
+
�
|
| 2195 |
+
.
|
| 2196 |
+
(B.2)
|
| 2197 |
+
This boundary term leads to the T ¯T deformed spectrum and can also help to reduce the
|
| 2198 |
+
gravitational action to T ¯T deformed Alekseev-Shatashvili action on the boundary [45]. The
|
| 2199 |
+
boundary term does not contribute to the equation of motion, but the Wilson line term will
|
| 2200 |
+
contribute as a source for the equations of motion
|
| 2201 |
+
k
|
| 2202 |
+
2πFµν =
|
| 2203 |
+
�
|
| 2204 |
+
dsdxρ
|
| 2205 |
+
ds εµνρδ(3)(x − x(s))UPU−1,
|
| 2206 |
+
(B.3)
|
| 2207 |
+
k
|
| 2208 |
+
2π
|
| 2209 |
+
¯Fµν = −
|
| 2210 |
+
�
|
| 2211 |
+
dsdxρ
|
| 2212 |
+
ds εµνρδ(3)(x − x(s))P.
|
| 2213 |
+
(B.4)
|
| 2214 |
+
We can choose the Wilson line trajectory as a bulk geodesic, the corresponding Wilson line
|
| 2215 |
+
variables is
|
| 2216 |
+
r(s) = s,
|
| 2217 |
+
U(s) = 1,
|
| 2218 |
+
P(s) =
|
| 2219 |
+
√
|
| 2220 |
+
2CL0.
|
| 2221 |
+
(B.5)
|
| 2222 |
+
Contracting (B.3) and (B.4) with the tangent vector to the curve, we find the non-vanishing
|
| 2223 |
+
components of field strength F, ¯F are tangent to the curve
|
| 2224 |
+
Fµν
|
| 2225 |
+
dxµ
|
| 2226 |
+
ds = 0,
|
| 2227 |
+
(B.6)
|
| 2228 |
+
¯Fµν
|
| 2229 |
+
dxµ
|
| 2230 |
+
ds = 0.
|
| 2231 |
+
(B.7)
|
| 2232 |
+
26
|
| 2233 |
+
|
| 2234 |
+
Since we can always transform the AdS3 solution into the Poincar´e coordinate [66, 67], we
|
| 2235 |
+
just consider the Poincar´e AdS3. The solution is asymptotic AdS3 in Poincar´e coordinate
|
| 2236 |
+
A =L(asource + d)L−1,
|
| 2237 |
+
L = e− ln rL0e−zL1,
|
| 2238 |
+
(B.8)
|
| 2239 |
+
¯A =R−1(asource + d)R,
|
| 2240 |
+
R = e−¯zL−1e− ln rL0,
|
| 2241 |
+
(B.9)
|
| 2242 |
+
where the coupling to the source is taken into account by
|
| 2243 |
+
asource =
|
| 2244 |
+
�
|
| 2245 |
+
C
|
| 2246 |
+
2
|
| 2247 |
+
1
|
| 2248 |
+
k
|
| 2249 |
+
�dz
|
| 2250 |
+
z − d¯z
|
| 2251 |
+
¯z
|
| 2252 |
+
�
|
| 2253 |
+
L0.
|
| 2254 |
+
(B.10)
|
| 2255 |
+
With the help of the identities ∂ 1
|
| 2256 |
+
¯z = ¯∂ 1
|
| 2257 |
+
z = πδ(2)(z, ¯z), one can verify these connections satisfy
|
| 2258 |
+
the sourced equations of motion. The connections are flat except for where the Wilson line
|
| 2259 |
+
sources them. We can obtain the specific form of the gauge field
|
| 2260 |
+
A =L0
|
| 2261 |
+
dr
|
| 2262 |
+
r + rL1dz +
|
| 2263 |
+
�
|
| 2264 |
+
C
|
| 2265 |
+
2
|
| 2266 |
+
1
|
| 2267 |
+
k
|
| 2268 |
+
�dz
|
| 2269 |
+
z − d¯z
|
| 2270 |
+
¯z
|
| 2271 |
+
�
|
| 2272 |
+
(L0 − rzL1),
|
| 2273 |
+
(B.11)
|
| 2274 |
+
¯A = − L0
|
| 2275 |
+
dr
|
| 2276 |
+
r − rL−1d¯z +
|
| 2277 |
+
�
|
| 2278 |
+
C
|
| 2279 |
+
2
|
| 2280 |
+
1
|
| 2281 |
+
k
|
| 2282 |
+
�dz
|
| 2283 |
+
z − d¯z
|
| 2284 |
+
¯z
|
| 2285 |
+
�
|
| 2286 |
+
(L0 − r¯zL−1).
|
| 2287 |
+
(B.12)
|
| 2288 |
+
This solution produces the metric
|
| 2289 |
+
ds2 = dr2
|
| 2290 |
+
r2 +
|
| 2291 |
+
r2 �
|
| 2292 |
+
−
|
| 2293 |
+
√
|
| 2294 |
+
2
|
| 2295 |
+
√
|
| 2296 |
+
Ck (zd¯z − ¯zdz)2 + C (zd¯z − ¯zdz)2 − 2k2z¯zdzd¯z
|
| 2297 |
+
�
|
| 2298 |
+
2k2z¯z
|
| 2299 |
+
.
|
| 2300 |
+
(B.13)
|
| 2301 |
+
Consider the map from plane to cylinder (τ, ϑ)
|
| 2302 |
+
z = eτ+iϑ,
|
| 2303 |
+
¯z = eτ−iϑ,
|
| 2304 |
+
(B.14)
|
| 2305 |
+
the metric becomes
|
| 2306 |
+
ds2 =dr2
|
| 2307 |
+
r2 − r2e2τ
|
| 2308 |
+
|
| 2309 |
+
|
| 2310 |
+
dτ 2 +
|
| 2311 |
+
dϑ2 �√
|
| 2312 |
+
2C − k
|
| 2313 |
+
�2
|
| 2314 |
+
k2
|
| 2315 |
+
|
| 2316 |
+
|
| 2317 |
+
.
|
| 2318 |
+
(B.15)
|
| 2319 |
+
One can see this is precisely the metric for AdS3 with a conical singularity surrounding the
|
| 2320 |
+
Wilson line. The boundary geometry with Wilson line back-reaction becomes the n-sheet
|
| 2321 |
+
cylinder if we set the defect angle to be 2π(1 − 1
|
| 2322 |
+
n). Then we can find the relation
|
| 2323 |
+
√
|
| 2324 |
+
2C
|
| 2325 |
+
k
|
| 2326 |
+
= (n − 1) + O((n − 1)2).
|
| 2327 |
+
(B.16)
|
| 2328 |
+
Since the Wilson line action generates the n-sheet manifold, the partition function for n-
|
| 2329 |
+
sheet manifold can be written as
|
| 2330 |
+
Zn = log WR(C) = −
|
| 2331 |
+
√
|
| 2332 |
+
2CL(xi, xj),
|
| 2333 |
+
(B.17)
|
| 2334 |
+
27
|
| 2335 |
+
|
| 2336 |
+
therefore the entanglement entropy can be obtained
|
| 2337 |
+
SEE = lim
|
| 2338 |
+
n→1
|
| 2339 |
+
1
|
| 2340 |
+
1 − n log WR(C) = kL(xi, xj),
|
| 2341 |
+
(B.18)
|
| 2342 |
+
which is coincide with the RT formula.
|
| 2343 |
+
The stress tensor corresponds to Poincar´e AdS3 vanishes, namely L = 0 in (3.1). For
|
| 2344 |
+
the BTZ black hole, the stress tensor is a constant. According to the transformation law
|
| 2345 |
+
of the stress-tensor, we can transform the stress tensor to a constant by using a conformal
|
| 2346 |
+
map. After rescaling the radial coordinate, the BTZ black hole becomes Poincar´e AdS3
|
| 2347 |
+
geometry with different period of the time direction. For the deformed BTZ black hole, we
|
| 2348 |
+
can perform the following coordinate transformation to (3.19)
|
| 2349 |
+
w = (1 − µQ)ξ + Q¯ξ,
|
| 2350 |
+
(B.19)
|
| 2351 |
+
¯w = (1 − µ ¯Q)¯ξ + ¯Qξ,
|
| 2352 |
+
(B.20)
|
| 2353 |
+
r = (1 − µQ)(1 − µ ¯Q)˜r.
|
| 2354 |
+
(B.21)
|
| 2355 |
+
so that the metric becomes the same as BTZ black hole
|
| 2356 |
+
ds2 = d˜r2
|
| 2357 |
+
˜r2 + ˜r2
|
| 2358 |
+
�
|
| 2359 |
+
dξd¯ξ + 1
|
| 2360 |
+
˜r2
|
| 2361 |
+
�
|
| 2362 |
+
Ldξ2 + ¯Ld¯ξ2�
|
| 2363 |
+
+ L ¯L
|
| 2364 |
+
˜r4 dξd¯ξ
|
| 2365 |
+
�
|
| 2366 |
+
.
|
| 2367 |
+
(B.22)
|
| 2368 |
+
One should note that the temperature (the period of Euclidean time) is different from the
|
| 2369 |
+
original BTZ black hole. The above consideration for the holographic entanglement entropy
|
| 2370 |
+
still holds for BTZ black hole and deformed BTZ black hole.
|
| 2371 |
+
References
|
| 2372 |
+
[1] F. A. Smirnov and A. B. Zamolodchikov, “On space of integrable quantum field
|
| 2373 |
+
theories,” Nucl. Phys. B 915, 363-383 (2017) [arXiv:1608.05499 [hep-th]].
|
| 2374 |
+
[2] A. Cavagli`a, S. Negro, I. M. Sz´ecs´enyi and R. Tateo, “T ¯T-deformed 2D Quantum Field
|
| 2375 |
+
Theories,” JHEP 10, 112 (2016) [arXiv:1608.05534 [hep-th]].
|
| 2376 |
+
[3] L. McGough, M. Mezei and H. Verlinde, “Moving the CFT into the bulk with TT,”
|
| 2377 |
+
JHEP 04, 010 (2018) [arXiv:1611.03470 [hep-th]].
|
| 2378 |
+
[4] P. Kraus, J. Liu and D. Marolf, “Cutoff AdS3 versus the TT deformation,” JHEP 07,
|
| 2379 |
+
027 (2018) [arXiv:1801.02714 [hep-th]].
|
| 2380 |
+
[5] A. B. Zamolodchikov, “Expectation value of composite field T anti-T in two-
|
| 2381 |
+
dimensional quantum field theory,” [arXiv:hep-th/0401146 [hep-th]].
|
| 2382 |
+
[6] J. Cardy, “The TT deformation of quantum field theory as random geometry,” JHEP
|
| 2383 |
+
10, 186 (2018) [arXiv:1801.06895 [hep-th]].
|
| 2384 |
+
28
|
| 2385 |
+
|
| 2386 |
+
[7] S. Dubovsky, V. Gorbenko and G. Hern´andez-Chifflet, “TT partition function from
|
| 2387 |
+
topological gravity,” JHEP 09, 158 (2018) [arXiv:1805.07386 [hep-th]].
|
| 2388 |
+
[8] S. Datta and Y. Jiang, “T ¯T deformed partition functions,” JHEP 08, 106 (2018)
|
| 2389 |
+
[arXiv:1806.07426 [hep-th]].
|
| 2390 |
+
[9] O. Aharony, S. Datta, A. Giveon, Y. Jiang and D. Kutasov, “Modular invariance and
|
| 2391 |
+
uniqueness of T ¯T deformed CFT,” JHEP 01, 086 (2019) [arXiv:1808.02492 [hep-th]].
|
| 2392 |
+
[10] G. Bonelli, N. Doroud and M. Zhu, “T ¯T-deformations in closed form,” JHEP 06, 149
|
| 2393 |
+
(2018) [arXiv:1804.10967 [hep-th]].
|
| 2394 |
+
[11] G. Jorjadze and S. Theisen, “Canonical maps and integrability in T ¯T deformed 2d
|
| 2395 |
+
CFTs,” [arXiv:2001.03563 [hep-th]].
|
| 2396 |
+
[12] S. Dubovsky, V. Gorbenko and M. Mirbabayi, “Asymptotic fragility, near AdS2
|
| 2397 |
+
holography and TT,” JHEP 09, 136 (2017) [arXiv:1706.06604 [hep-th]].
|
| 2398 |
+
[13] N. Callebaut, J. Kruthoff and H. Verlinde, “TT deformed CFT as a non-critical string,”
|
| 2399 |
+
JHEP 04, 084 (2020) [arXiv:1910.13578 [hep-th]].
|
| 2400 |
+
[14] A. J. Tolley, “TT deformations, massive gravity and non-critical strings,” JHEP 06,
|
| 2401 |
+
050 (2020) [arXiv:1911.06142 [hep-th]].
|
| 2402 |
+
[15] M. Guica and R. Monten, “Infinite pseudo-conformal symmetries of classical T ¯T, J ¯T
|
| 2403 |
+
and JTa - deformed CFTs,” SciPost Phys. 11, 078 (2021) [arXiv:2011.05445 [hep-th]].
|
| 2404 |
+
[16] M. Guica, “J ¯T-deformed CFTs as non-local CFTs,” [arXiv:2110.07614 [hep-th]].
|
| 2405 |
+
[17] W. Donnelly and V. Shyam, “Entanglement entropy and TT deformation,” Phys. Rev.
|
| 2406 |
+
Lett. 121, no.13, 131602 (2018) [arXiv:1806.07444 [hep-th]].
|
| 2407 |
+
[18] S. Chakraborty, A. Giveon, N. Itzhaki and D. Kutasov, “Entanglement beyond AdS,”
|
| 2408 |
+
Nucl. Phys. B 935, 290-309 (2018) [arXiv:1805.06286 [hep-th]].
|
| 2409 |
+
[19] J. Cardy,
|
| 2410 |
+
“T ¯T
|
| 2411 |
+
deformation of correlation functions,”
|
| 2412 |
+
JHEP 12,
|
| 2413 |
+
160 (2019)
|
| 2414 |
+
[arXiv:1907.03394 [hep-th]].
|
| 2415 |
+
[20] J. Kruthoff and O. Parrikar, “On the flow of states under TT,” [arXiv:2006.03054
|
| 2416 |
+
[hep-th]].
|
| 2417 |
+
[21] M. Guica, “On correlation functions in J ¯T-deformed CFTs,” J. Phys. A 52, no.18,
|
| 2418 |
+
184003 (2019) [arXiv:1902.01434 [hep-th]].
|
| 2419 |
+
[22] B. Chen, L. Chen and P. X. Hao, “Entanglement entropy in TT-deformed CFT,” Phys.
|
| 2420 |
+
Rev. D 98, no.8, 086025 (2018) [arXiv:1807.08293 [hep-th]].
|
| 2421 |
+
[23] S. He and H. Shu, “Correlation functions, entanglement and chaos in the TT/JT-
|
| 2422 |
+
deformed CFTs,” JHEP 02, 088 (2020) [arXiv:1907.12603 [hep-th]].
|
| 2423 |
+
29
|
| 2424 |
+
|
| 2425 |
+
[24] H. S. Jeong, K. Y. Kim and M. Nishida, “Entanglement and R´enyi entropy of multiple
|
| 2426 |
+
intervals in TT-deformed CFT and holography,” Phys. Rev. D 100, no.10, 106015
|
| 2427 |
+
(2019) [arXiv:1906.03894 [hep-th]].
|
| 2428 |
+
[25] G. Jafari, A. Naseh and H. Zolfi, “Path Integral Optimization for T ¯T Deformation,”
|
| 2429 |
+
Phys. Rev. D 101, no.2, 026007 (2020) [arXiv:1909.02357 [hep-th]].
|
| 2430 |
+
[26] S. He, J. R. Sun and Y. Sun, “The correlation function of (1,1) and (2,2) supersymmet-
|
| 2431 |
+
ric theories with T ¯T deformation,” JHEP 04, 100 (2020) [arXiv:1912.11461 [hep-th]].
|
| 2432 |
+
[27] S. He and Y. Sun, “Correlation functions of CFTs on a torus with a TT deformation,”
|
| 2433 |
+
Phys. Rev. D 102, no.2, 026023 (2020) [arXiv:2004.07486 [hep-th]].
|
| 2434 |
+
[28] S. Hirano, T. Nakajima and M. Shigemori, “TT Deformation of stress-tensor correlators
|
| 2435 |
+
from random geometry,” JHEP 04, 270 (2021) [arXiv:2012.03972 [hep-th]].
|
| 2436 |
+
[29] S. He, Y. Sun and Y. X. Zhang, “TT-flow effects on torus partition functions,” JHEP
|
| 2437 |
+
09, 061 (2021) [arXiv:2011.02902 [hep-th]].
|
| 2438 |
+
[30] S. He, “Note on higher-point correlation functions of the T ¯T or J ¯T deformed CFTs,”
|
| 2439 |
+
Sci. China Phys. Mech. Astron. 64, no.9, 291011 (2021) [arXiv:2012.06202 [hep-th]].
|
| 2440 |
+
[31] S. He and Y. Z. Li, “Higher Genus Correlation Functions in CFTs with T ¯T
|
| 2441 |
+
Deformation,” [arXiv:2202.04810 [hep-th]].
|
| 2442 |
+
[32] Y. Jiang, “A pedagogical review on solvable irrelevant deformations of 2D quantum field
|
| 2443 |
+
theory,” Commun. Theor. Phys. 73, no.5, 057201 (2021) [arXiv:1904.13376 [hep-th]].
|
| 2444 |
+
[33] G. Giribet, “T ¯T-deformations, AdS/CFT and correlation functions,” JHEP 02, 114
|
| 2445 |
+
(2018) [arXiv:1711.02716 [hep-th]].
|
| 2446 |
+
[34] W. Donnelly, E. LePage, Y. Y. Li, A. Pereira and V. Shyam, “Quantum corrections to
|
| 2447 |
+
finite radius holography and holographic entanglement entropy,” JHEP 05, 006 (2020)
|
| 2448 |
+
[arXiv:1909.11402 [hep-th]].
|
| 2449 |
+
[35] S. Grieninger, “Entanglement entropy and TT deformations beyond antipodal points
|
| 2450 |
+
from holography,” JHEP 11, 171 (2019) [arXiv:1908.10372 [hep-th]].
|
| 2451 |
+
[36] P. Caputa, P. Caputa, S. Datta, S. Datta, Y. Jiang, Y. Jiang, P. Kraus and
|
| 2452 |
+
P. Kraus, “Geometrizing TT,” JHEP 03, 140 (2021) [erratum: JHEP 09, 110 (2022)]
|
| 2453 |
+
[arXiv:2011.04664 [hep-th]].
|
| 2454 |
+
[37] E. A. Mazenc, V. Shyam and R. M. Soni, “A T ¯T Deformation for Curved Spacetimes
|
| 2455 |
+
-tE3T4oBgHgl3EQfSwk7/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.
.gitattributes CHANGED
@@ -3502,3 +3502,69 @@ i9A0T4oBgHgl3EQfIf_T/content/2301.02077v1.pdf filter=lfs diff=lfs merge=lfs -tex
 2dE1T4oBgHgl3EQfAAKW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 FNE3T4oBgHgl3EQfVgrM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 E9E4T4oBgHgl3EQf6w7l/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CdE0T4oBgHgl3EQfgQEc/content/2301.02414v1.pdf filter=lfs diff=lfs merge=lfs -text
+INE5T4oBgHgl3EQfWw9u/content/2301.05561v1.pdf filter=lfs diff=lfs merge=lfs -text
+ldE3T4oBgHgl3EQfKAlg/content/2301.04349v1.pdf filter=lfs diff=lfs merge=lfs -text
+ldE3T4oBgHgl3EQfKAlg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+N9E3T4oBgHgl3EQfxAtK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+29AzT4oBgHgl3EQfuf0F/content/2301.01690v1.pdf filter=lfs diff=lfs merge=lfs -text
+a9E0T4oBgHgl3EQfngEk/content/2301.02512v1.pdf filter=lfs diff=lfs merge=lfs -text
+cNFST4oBgHgl3EQfDTjC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mNFLT4oBgHgl3EQfei-P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+W9AyT4oBgHgl3EQf9Pog/content/2301.00869v1.pdf filter=lfs diff=lfs merge=lfs -text
+79AyT4oBgHgl3EQf2_lP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+qtE2T4oBgHgl3EQfKwZ0/content/2301.03706v1.pdf filter=lfs diff=lfs merge=lfs -text
+CdE0T4oBgHgl3EQfgQEc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+U9E1T4oBgHgl3EQfuwXx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+a9E0T4oBgHgl3EQfngEk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+YNAzT4oBgHgl3EQf1_6A/content/2301.01808v1.pdf filter=lfs diff=lfs merge=lfs -text
+wtE5T4oBgHgl3EQfMg6G/content/2301.05482v1.pdf filter=lfs diff=lfs merge=lfs -text
+NtAyT4oBgHgl3EQftPkw/content/2301.00590v1.pdf filter=lfs diff=lfs merge=lfs -text
+FNE3T4oBgHgl3EQfVgrM/content/2301.04461v1.pdf filter=lfs diff=lfs merge=lfs -text
+PdFAT4oBgHgl3EQfzh4f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NtAyT4oBgHgl3EQftPkw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+7dE5T4oBgHgl3EQfQA7X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+nNFKT4oBgHgl3EQfFC0t/content/2301.11718v1.pdf filter=lfs diff=lfs merge=lfs -text
+7dE5T4oBgHgl3EQfQA7X/content/2301.05510v1.pdf filter=lfs diff=lfs merge=lfs -text
+SdAyT4oBgHgl3EQfuPmB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+YNAzT4oBgHgl3EQf1_6A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+QNFRT4oBgHgl3EQf6zgQ/content/2301.13677v1.pdf filter=lfs diff=lfs merge=lfs -text
+rtFKT4oBgHgl3EQf0y6v/content/2301.11917v1.pdf filter=lfs diff=lfs merge=lfs -text
+8dE0T4oBgHgl3EQffgB5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+I9E3T4oBgHgl3EQfXAr0/content/2301.04476v1.pdf filter=lfs diff=lfs merge=lfs -text
+k9E4T4oBgHgl3EQfTwx8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wtE5T4oBgHgl3EQfMg6G/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KtAyT4oBgHgl3EQff_jL/content/2301.00352v1.pdf filter=lfs diff=lfs merge=lfs -text
+QNFRT4oBgHgl3EQf6zgQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+OtE0T4oBgHgl3EQfjwFX/content/2301.02463v1.pdf filter=lfs diff=lfs merge=lfs -text
+49AyT4oBgHgl3EQfpPiQ/content/2301.00522v1.pdf filter=lfs diff=lfs merge=lfs -text
+ONE0T4oBgHgl3EQf0gLr/content/2301.02688v1.pdf filter=lfs diff=lfs merge=lfs -text
+qtE2T4oBgHgl3EQfKwZ0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8NE1T4oBgHgl3EQfngRF/content/2301.03309v1.pdf filter=lfs diff=lfs merge=lfs -text
+I9E3T4oBgHgl3EQfXAr0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ftAyT4oBgHgl3EQfjviX/content/2301.00421v1.pdf filter=lfs diff=lfs merge=lfs -text
+ftAyT4oBgHgl3EQfjviX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+OtE0T4oBgHgl3EQfjwFX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+YdE2T4oBgHgl3EQfvAhk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+29AzT4oBgHgl3EQfuf0F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+t9FAT4oBgHgl3EQfhR3i/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+xNAyT4oBgHgl3EQfnfi-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+D9E5T4oBgHgl3EQfUg90/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+gdE3T4oBgHgl3EQffwpl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ONE0T4oBgHgl3EQf0gLr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+RdAzT4oBgHgl3EQfXPxJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+49AyT4oBgHgl3EQfpPiQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+k9E4T4oBgHgl3EQfTwx8/content/2301.05010v1.pdf filter=lfs diff=lfs merge=lfs -text
+F9E1T4oBgHgl3EQfFAMn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+D9E5T4oBgHgl3EQfUg90/content/2301.05544v1.pdf filter=lfs diff=lfs merge=lfs -text
+09E4T4oBgHgl3EQfzA3P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+htFAT4oBgHgl3EQf9h4V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+e9AyT4oBgHgl3EQfw_mi/content/2301.00659v1.pdf filter=lfs diff=lfs merge=lfs -text
+WtFJT4oBgHgl3EQf4y1Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9A0T4oBgHgl3EQfIf_T/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jNFIT4oBgHgl3EQfqCsi/content/2301.11325v1.pdf filter=lfs diff=lfs merge=lfs -text
+8dE0T4oBgHgl3EQffgB5/content/2301.02405v1.pdf filter=lfs diff=lfs merge=lfs -text
+htFAT4oBgHgl3EQf9h4V/content/2301.08756v1.pdf filter=lfs diff=lfs merge=lfs -text
+cNAyT4oBgHgl3EQfXPfd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+rtA0T4oBgHgl3EQfK_-E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+P9E3T4oBgHgl3EQfyAsA/content/2301.04715v1.pdf filter=lfs diff=lfs merge=lfs -text
09E4T4oBgHgl3EQfzA3P/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8e5d567be869ac3f92430e6b03f6cccb05d659e130ee5b470625f1009d2f3fc
+size 5505069
19E2T4oBgHgl3EQfiwfE/content/tmp_files/2301.03962v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff.
19E2T4oBgHgl3EQfiwfE/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff.
1NAzT4oBgHgl3EQfRftD/content/tmp_files/2301.01216v1.pdf.txt ADDED
@@ -0,0 +1,1690 @@
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

An End-to-End Multi-Scale Network for Action Prediction in Videos

Xiaofa Liu, Jianqin Yin, Member, IEEE, Yuan Sun, Zhicheng Zhang, Jin Tang

Abstract—In this paper, we develop an efficient multi-scale network to predict action classes in partial videos in an end-to-end manner. Unlike most existing methods, which rely on offline feature generation, our method directly takes frames as input and models motion evolution on two different temporal scales. This avoids the complexity of two-stage modeling and the insufficient temporal and spatial information of a single scale. Our proposed End-to-End Multi-Scale Network (E2EMSNet) is composed of two scales, named the segment scale and the observed global scale. The segment scale leverages temporal differences over consecutive frames to supply 2D convolutions with finer motion patterns. For the observed global scale, a Long Short-Term Memory (LSTM) is incorporated to capture motion features across all observed frames. Our model provides a simple and efficient modeling framework with a small computational cost. E2EMSNet is evaluated on three challenging datasets: BIT, HMDB51, and UCF101. Extensive experiments demonstrate the effectiveness of our method for action prediction in videos.

Index terms: action prediction, multi-scale network, end-to-end method.
I. INTRODUCTION

The goal of action prediction in videos is to predict the class label of an ongoing action from the part of it observed so far along the temporal axis [1]. It is a subset of the broader research domain of human activity analysis. Unlike conventional action recognition on fully executed actions [2][3][4], predicting the label of an ongoing action is more challenging, because the action is incomplete and continuously evolving. The task has attracted considerable research attention because of its wide application in scenarios with strict real-time requirements, such as human-machine interaction and security surveillance.
+
▪ This work was supported partly by the National Natural Science
|
| 47 |
+
Foundation of China (Grant No. 62173045, 61673192), partly by the
|
| 48 |
+
Fundamental Research Funds for the Central Universities (Grant No. 2020XD-
|
| 49 |
+
A04-3), and the Natural Science Foundation of Hainan Province (Grant No.
|
| 50 |
+
622RC675). (Corresponding author: Jianqin Yin).
|
| 51 |
+
▪ Xiaofa Liu is with the School of Modern Post, Beijing University of Posts
|
| 52 |
+
and
|
| 53 |
+
Telecom-munications,
|
| 54 |
+
Beijing
|
| 55 |
+
100876,
|
| 56 |
+
China
|
| 57 |
+
(e-mail:
|
| 58 |
+
liuxiaofamail@163.com )
|
| 59 |
+
▪ Jianqin Yin, Zhicheng Zhang, and Jin Tang are with the school of Artificial
|
| 60 |
+
Intelligence, Beijing University of Posts and Telecommunications, Beijing
|
| 61 |
+
100876,
|
| 62 |
+
China
|
| 63 |
+
(e-mail:
|
| 64 |
+
jqyin@bupt.edu.cn,
|
| 65 |
+
zczhang@bupt.edu.cn,
|
| 66 |
+
tangjin@bupt.edu.cn ).
|
| 67 |
+
▪ Yuan Sun is with Electronic Engineering School, Beijing University of
|
| 68 |
+
Posts
|
| 69 |
+
and
|
| 70 |
+
Telecommunications,
|
| 71 |
+
Beijing
|
| 72 |
+
100876,
|
| 73 |
+
China
|
| 74 |
+
(e-mail:
|
| 75 |
+
sunyuan@bupt.edu.cn ).
|
| 76 |
+
by adopting a two-stage approach, there generally had
|
| 77 |
+
problems of complex modeling and feature redundancy. The
|
| 78 |
+
previous method separated feature extraction from predictive
|
| 79 |
+
modeling[5][6][7][8][9][10][11][12]. This separation operati-
|
| 80 |
+
on makes the spatio-temporal representation obtained may
|
| 81 |
+
deviate from the action prediction. Moreover, it complicates
|
| 82 |
+
the model design. Secondly, because the feature is generated
|
| 83 |
+
offline, the complete action must be divided into fixed
|
| 84 |
+
segments in advance, which not only results in the redundancy
|
| 85 |
+
of the feature in the time dimension, but also is not applicable
|
| 86 |
+
to the evolving action.
|
| 87 |
+
Therefore, in this paper, we propose an end-to-end method,
|
| 88 |
+
which effectively reduces the complexity of the model and
|
| 89 |
+
introduces more fine-grained spatio-temporal information. We
|
| 90 |
+
designed the end-to-end network from three aspects, sampling
|
| 91 |
+
method, local spatio-temporal information representation, and
|
| 92 |
+
long-term time sequence fusion. In order to adapt the end-to-
|
| 93 |
+
end structure to the evolving motion, we first changed the
|
| 94 |
+
preprocessing and feature generation method, which will be
|
| 95 |
+
described in Part 3. Second, to reduce computational
|
| 96 |
+
consumption to achieve end-to-end structure, we use 2D
|
| 97 |
+
convolution instead of two-stream networks or 3D
|
| 98 |
+
convolutions to extract local spatio-temporal features. Finally,
|
| 99 |
+
to enhance the temporal information of action evolution, we
|
| 100 |
+
present an observed global scale to fuse the historical evolution
|
| 101 |
+
information of actions.
|
| 102 |
+
Similar to the application of spatial multi-scale in image
|
| 103 |
+
field, multi-scale research in the temporal dimension is also
|
| 104 |
+
increasing in video analytics. Compared to images, the
|
| 105 |
+
variation of temporal scales in videos poses additional
|
| 106 |
+
challenges. How to effectively utilize the motion evolution
|
| 107 |
+
information at different time scales has gradually gained
|
| 108 |
+
attention in video motion analysis. Feichtenhofer[4] et al.
|
| 109 |
+
proposed SlowFast network for video recognition. Their
|
| 110 |
+
method utilizes two branches, a slow pathway with low frame
|
| 111 |
+
rate and a fast pathway with high frame rate, to capture spatial
|
| 112 |
+
semantics and motion at fine temporal resolution. Wang[13] et
|
| 113 |
+
al. proposed an efficient multi-scale model for action
|
| 114 |
+
recognition, which utilizes short-term and long-term temporal
|
| 115 |
+
difference modules to capture both short-term and long-term
|
| 116 |
+
motion information better.
|
| 117 |
+
Most of the existing action prediction methods are
|
| 118 |
+
insufficient to focus on multi-scale temporal, making them fail
|
| 119 |
+
to capture fine-grained temporal information. They use a fixed
|
| 120 |
+
frame rate to sample each partial video, and use a fixed
|
| 121 |
+
temporal scale for feature generation and modeling[1][5]
|
| 122 |
+
[6][7][8][9][11]. Although these methods simplify the
|
| 123 |
+
T
|
| 124 |
+
|
| 125 |
+
2
|
| 126 |
+
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
|
| 127 |
+
processing of the input of feature generation and reduce the
|
| 128 |
+
computation to a certain extent, they ignore the evolution of
|
| 129 |
+
action. Too much fine-grained information will be lost, and the
|
| 130 |
+
spatio-temporal information in the video cannot be fully
|
| 131 |
+
utilized.
|
| 132 |
+
Our method takes both the local evolution information
|
| 133 |
+
between adjacent frames and the global evolution information
|
| 134 |
+
of the entire observed video sequence into account. Therefore,
|
| 135 |
+
we design two temporal scales to increase fine-grained timing
|
| 136 |
+
information. Firstly, the segment scale uses RGB frames with
|
| 137 |
+
temporal difference to capture temporal information in each
|
| 138 |
+
segment. Secondly, the observed global scale uses LSTM
|
| 139 |
+
module to fuse all the observed action evolution information.
|
| 140 |
+
Through modeling in short-term and long-term time scales, our
|
| 141 |
+
method can be mining more fine-grained temporal information
|
| 142 |
+
without increasing the computational load.
|
| 143 |
+
Our E2EMSNet provides a simple yet effective framework
|
| 144 |
+
for the problem of ongoing action prediction in videos. In
|
| 145 |
+
summary, our main contributions lie in the following three
|
| 146 |
+
aspects:
|
| 147 |
+
● We propose a simple end-to-end approach for action
|
| 148 |
+
prediction in videos. To the best of our knowledge, this is the
|
| 149 |
+
first work focusing on this problem.
|
| 150 |
+
● We investigate two scales in the temporal dimension to
|
| 151 |
+
model the evolution of actions, and propose a segment
|
| 152 |
+
summarization and propagation framework. The segment scale
|
| 153 |
+
is used to model the local evolution of the action, and the
|
| 154 |
+
observed global scale is used to model the global evolution of
|
| 155 |
+
the action.
|
| 156 |
+
● We achieve a trade-off of efficiency and effectiveness.
|
| 157 |
+
We achieve state-of-the-art performance on several datasets
|
| 158 |
+
while using only 2D convolutions framework and RGB format
|
| 159 |
+
of features.
|
| 160 |
+
|
| 161 |
+
II. RELATED WORK
|
| 162 |
+
A. Action Recognition
|
| 163 |
+
Action recognition methods take fully observed videos as
|
| 164 |
+
input and output labels of human actions. Action recognition
|
| 165 |
+
has been extensively studied in past few years[2][3][4][13][14].
|
| 166 |
+
These studies can be roughly divided into two categories.
|
| 167 |
+
Methods in the first category are two-stream CNNs, which was
|
| 168 |
+
first proposed in[15]. It used two inputs of RGB and optical
|
| 169 |
+
flow to model appearance and motion information separately
|
| 170 |
+
in videos with a late fusion. In addition, follow-up research has
|
| 171 |
+
adopted two RGB inputs sampled at different FPS or carefully
|
| 172 |
+
designed temporal modules for efficiency, including Non-local
|
| 173 |
+
Net[16], STM[17], SlowFast[4], and Correlation Net[18]. The
|
| 174 |
+
second method is to use 3D CNNs[19][20]. It proposed 3D
|
| 175 |
+
convolution and pooling to learn spatiotemporal features from
|
| 176 |
+
videos directly. Several variants adopted a 2D + 1D paradigm
|
| 177 |
+
to reduce the computation cost of 3D convolution, which
|
| 178 |
+
implement by decomposing 3D CNNs into a 2D convolution
|
| 179 |
+
and a 1D temporal convolution[21][22][23]. Several works
|
| 180 |
+
focused on designing more powerful and efficient temporal
|
| 181 |
+
modules, such as TSM[14], TAM[24], TEA[25], and TDN[13].
|
| 182 |
+
More recent works tried clip-based architecture search for
|
| 183 |
+
video recognition, focusing on capturing appearance and
|
| 184 |
+
motion or context information in a more fine-grained and
|
| 185 |
+
efficient manner[13][26]. Although these methods mainly
|
| 186 |
+
learned features for the videos with full action executions, their
|
| 187 |
+
core ideas have certain reference significance for ongoing
|
| 188 |
+
action prediction in videos.
|
| 189 |
+
|
| 190 |
+
B. Action Prediction
|
| 191 |
+
Action prediction methods were proposed to predict the
|
| 192 |
+
action given a partially observed video. [9] was the first work
|
| 193 |
+
along
|
| 194 |
+
these
|
| 195 |
+
lines,
|
| 196 |
+
they
|
| 197 |
+
formulated
|
| 198 |
+
the
|
| 199 |
+
problem
|
| 200 |
+
probabilistically and proposed a dynamic bag-of-words
|
| 201 |
+
approach, modeling how feature distributions of activities
|
| 202 |
+
change as observations increase. In the last decade, researchers
|
| 203 |
+
approach this task from various perspectives and can be
|
| 204 |
+
grouped into three major divisions[27]. The first method can
|
| 205 |
+
be formulated as one-shot mappings from partial observations
|
| 206 |
+
to groundtruth labels of full observations. The basic
|
| 207 |
+
assumption underlying these methods is that a partial
|
| 208 |
+
observation of an action video provides sufficient information
|
| 209 |
+
to define the appropriate overall action class regardless of the
|
| 210 |
+
unobserved part. Follow-up research work[28][29][6][30]
|
| 211 |
+
adopted more robust features, hierarchical extractions, and
|
| 212 |
+
learning-based classifiers to perform more fine-grained
|
| 213 |
+
analysis of an initial partial observation for better performance.
|
| 214 |
+
The second division is knowledge distillation-based methods.
|
| 215 |
+
These methods distill the information from the full
|
| 216 |
+
observations into partial observations[31][5][11][32]. These
|
| 217 |
+
methods attempted to lend power from unobserved data in
|
| 218 |
+
training to either enrich the feature representation of partial
|
| 219 |
+
data or encourage the classifiers to easily recognize partial data.
|
| 220 |
+
Another way to exploit future information is by propagating
|
| 221 |
+
the partial observation into the future in a temporal
|
| 222 |
+
extrapolation fashion[33][34] [12][35][36]. For example, [12]
|
| 223 |
+
learned to propagate frame-wise residuals in feature space to
|
| 224 |
+
complete partial observation.
|
| 225 |
+
|
| 226 |
+
|
| 227 |
+
Fig. 1. Relevant definitions in action prediction in videos: full video, partial video, segments, and observation ratio.
|
| 228 |
+
|
| 229 |
+
Full video
|
| 230 |
+
X[1:T]
|
| 231 |
+
Segments
|
| 232 |
+
(K-10)
|
| 233 |
+
Partial video x[1:t]
|
| 234 |
+
k=2,observationratio:r=k/K
|
| 235 |
+
=2/10=0.23
|
| 236 |
+
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
|
| 237 |
+
C. Multiple temporal scales for action analysis in videos
|
| 238 |
+
Temporal sequence forecasting usually faces the following
|
| 239 |
+
situations for scenarios with insignificant periodic motion:
|
| 240 |
+
long-term forecasts need to consider trend information (long-
|
| 241 |
+
term dependencies), and short-term forecasts need to consider
|
| 242 |
+
fine-grained volatility (short-term dependencies). The current
|
| 243 |
+
difficulty is how to model long-term dynamic dependencies
|
| 244 |
+
and consider long-term and short-term dependencies. There
|
| 245 |
+
are two methods currently. The main existing method is
|
| 246 |
+
hierarchical modeling, which is achieved by establishing
|
| 247 |
+
hidden layers of different granularities[37][38][39][40][41] or
|
| 248 |
+
decomposing the original data to obtain data of different
|
| 249 |
+
granularities[42][43]. The second method is designing the gate
|
| 250 |
+
mechanism, which achieved by modifying the internal
|
| 251 |
+
structure of RNN[44]. We inherit this idea that both long-term
|
| 252 |
+
and short-term dependencies in video must be carefully
|
| 253 |
+
considered, and a trade-off approach is adopted.
|
| 254 |
+
III. OUR METHOD
|
| 255 |
+
In this section, we detail our approach to mining ongoing
|
| 256 |
+
action evolution information in videos using multiple scales in
|
| 257 |
+
an end-to-end fashion. Specifically, we first describe the
|
| 258 |
+
problem formulation. Then, we elaborate on our end-to-end
|
| 259 |
+
framework and method for multi-scale modeling of ongoing
|
| 260 |
+
action sequences.
|
| 261 |
+
|
| 262 |
+
A. Problem formulation
|
| 263 |
+
Given a video containing human motion (the video may
|
| 264 |
+
contain arbitrary incomplete motion), the goal is to predict the
|
| 265 |
+
class label. We follow the problem formulation in the[31],
|
| 266 |
+
which has been widely adopted in subsequent work[5][7][11].
|
| 267 |
+
As shown in Fig. 1, Given a full video
|
| 268 |
+
[1: ]
|
| 269 |
+
X
|
| 270 |
+
T with complete
|
| 271 |
+
action execution, 1 represents the first frame of the video, and
|
| 272 |
+
T represents the last frame. We use
|
| 273 |
+
[1, ],
|
| 274 |
+
[1, ]
|
| 275 |
+
x
|
| 276 |
+
t t
|
| 277 |
+
T
|
| 278 |
+
|
| 279 |
+
to
|
| 280 |
+
simulate the action execution in video from 1 to t , defined as
|
| 281 |
+
partial video. In order to facilitate quantitative experiments,
|
| 282 |
+
we usually divide a full video into K segments, each
|
| 283 |
+
containing (
|
| 284 |
+
/
|
| 285 |
+
)
|
| 286 |
+
T
|
| 287 |
+
K frames. Assuming that the action is
|
| 288 |
+
executed to the
|
| 289 |
+
,
|
| 290 |
+
[1,2,...,
|
| 291 |
+
]
|
| 292 |
+
kth k
|
| 293 |
+
K
|
| 294 |
+
=
|
| 295 |
+
segment, the observation
|
| 296 |
+
ratio is defined as
|
| 297 |
+
/
|
| 298 |
+
r
|
| 299 |
+
k
|
| 300 |
+
K
|
| 301 |
+
=
|
| 302 |
+
. As defined above, as shown in
|
| 303 |
+
Fig.1, the full video X , is divided into K segments. Among
|
| 304 |
+
them, the partial video marked with green has an observation
|
| 305 |
+
ratio
|
| 306 |
+
/
|
| 307 |
+
2 /10
|
| 308 |
+
0.2
|
| 309 |
+
r
|
| 310 |
+
k
|
| 311 |
+
K
|
| 312 |
+
=
|
| 313 |
+
=
|
| 314 |
+
=
|
| 315 |
+
, and it can be considered that its
|
| 316 |
+
action has been executed 20%.
|
| 317 |
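To make the notation concrete, the following minimal Python sketch splits a sequence of T frames into K segments and computes the observation ratio r = k/K for a partial observation. The function and variable names are illustrative, not taken from the authors' code.

```python
# Minimal sketch of the segment split and observation ratio; all names
# here are illustrative stand-ins, not from the paper's released code.
from typing import List, Sequence, Tuple

def split_into_segments(frames: Sequence, num_segments: int = 10) -> List[Sequence]:
    """Divide a full video X[1:T] into K segments of T/K frames each."""
    seg_len = len(frames) // num_segments
    return [frames[i * seg_len:(i + 1) * seg_len] for i in range(num_segments)]

def partial_video(segments: List[Sequence], k: int) -> Tuple[List[Sequence], float]:
    """Return the first k segments x[1:t] and the observation ratio r = k/K."""
    return segments[:k], k / len(segments)

frames = list(range(100))                       # stand-in for T = 100 decoded frames
segments = split_into_segments(frames, num_segments=10)
observed, r = partial_video(segments, k=2)
print(len(observed), r)                         # 2 segments observed, r = 0.2
```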
+
|
| 318 |
+
B. Data processing
|
| 319 |
+
We adopt a data processing method different from the
|
| 320 |
+
previous method. As shown in Fig. 2, the upper part is the data
|
| 321 |
+
processing method used in the previous method. They first
|
| 322 |
+
divided a complete video X into K segments, and combined
|
| 323 |
+
segments into partial videos to simulate action evolution. Then
|
| 324 |
+
the partial video is sampled to extract the spatio-temporal
|
| 325 |
+
representation. The problem caused by this is that each partial
|
| 326 |
+
video needs to be separately extracted for spatio-temporal
|
| 327 |
+
representation, which divides the continuous evolution of
|
| 328 |
+
action. The feature extraction of partial videos with higher
|
| 329 |
+
observation rates cannot use the previous partial videos with
|
| 330 |
+
lower observation rates. It will cause redundancy in the time
|
| 331 |
+
dimension. At the same time, with the increase in the
|
| 332 |
+
observation rate, the temporal information will become more
|
| 333 |
+
and more sparse. Compared with them, we directly extract the
|
| 334 |
+
local spatio-temporal representations of each segment. In this
|
| 335 |
+
way, the previous spatio-temporal information can be
|
| 336 |
+
continuously used with the evolution of actions. This makes
|
| 337 |
+
our model more robust to action duration, and more abundant
|
| 338 |
+
spatio-temporal information can be obtained.
|
| 339 |
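The contrast between the two pipelines in Fig. 2 can be sketched as follows. Here `extract_segment_feature` is a hypothetical placeholder for the segment-scale network, and the caching scheme is our reading of the paper's strategy rather than released code.

```python
# Sketch of the incremental data processing: each segment's local feature
# is computed once and reused as the observation ratio grows.
# `extract_segment_feature` is a hypothetical stand-in for the segment-scale CNN.

def extract_segment_feature(segment):
    return sum(segment)  # placeholder computation

class IncrementalFeatureBank:
    def __init__(self):
        self.cache = []  # local features of the segments observed so far

    def observe(self, segment):
        """Append the new segment's feature; earlier features are reused as-is."""
        self.cache.append(extract_segment_feature(segment))
        return list(self.cache)  # features representing the current partial video

bank = IncrementalFeatureBank()
segments = [[1, 2], [3, 4], [5, 6]]
for k, seg in enumerate(segments, start=1):
    feats = bank.observe(seg)                   # k features after k observed segments
    print(f"r = {k / len(segments):.2f}, features = {feats}")
```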
+
|
| 340 |
+
|
| 341 |
+
|
| 342 |
+
Fig. 2. Differences in data processing between our method and previous methods. The upper is the data processing method used
|
| 343 |
+
in the previous method, and the lower is the data processing strategy used in our method.
|
| 344 |
+
|
| 345 |
+
LOTTE
|
| 346 |
+
Full
|
| 347 |
+
video X
|
| 348 |
+
Segments
|
| 349 |
+
artial video
|
| 350 |
+
Observation ratio=0.1
|
| 351 |
+
Partial video I
|
| 352 |
+
Observation rafio=0.2
|
| 353 |
+
Partial video k
|
| 354 |
+
Observation ratio=k/K
|
| 355 |
+
Partial video
|
| 356 |
+
Sampling and feature extraction
|
| 357 |
+
Feature of partial video
|
| 358 |
+
OTT
|
| 359 |
+
Full
|
| 360 |
+
video X
|
| 361 |
+
Segments
|
| 362 |
+
Sampling and feature extraction
|
| 363 |
+
Localfeature
|
| 364 |
+
Observed global feature
|
| 365 |
+
Obserred global feature 11
|
| 366 |
+
Obserred global feature m
|
| 367 |
+
Feature of partial video
|
| 368 |
+
Observation ratio=0.1
|
| 369 |
+
Obserration ratio=0.2
|
| 370 |
+
Obserration ratio=0.34
|
| 371 |
+
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
|
| 372 |
+
C. Network architectures
|
| 373 |
+
In this subsection, we elaborate on our network structure.
|
| 374 |
+
Due to the data processing method mentioned in the previous
|
| 375 |
+
section and the design of network structure, we can model
|
| 376 |
+
action evolution in a finer-grained manner without increasing
|
| 377 |
+
the computational load. First, we introduce how to extract
|
| 378 |
+
short-term features for short time windows, which we call the
|
| 379 |
+
segment scale. Then, we introduce how to fuse the segment
|
| 380 |
+
scale to generate observed global features for the observed
|
| 381 |
+
local videos.
|
| 382 |
+
Segment scale. Compared with images, video is a dynamic
|
| 383 |
+
sequence of pictures arranged in time, so the temporal context
|
| 384 |
+
relationship of frames and the spatial relationship organization
|
| 385 |
+
of a single frame need to be considered simultaneously. For
|
| 386 |
+
extracting and fusion of two kinds of relations in local time
|
| 387 |
+
windows, directly stacking frames as input will bring a lot of
|
| 388 |
+
redundant information. This method is inefficient. Moreover,
|
| 389 |
+
it will introduce too much noise and reduce the robustness of
|
| 390 |
+
the model. If only a single image frame is used as input, the
|
| 391 |
+
dynamic information of the temporal window will be lost.
|
| 392 |
+
RGB temporal difference turned out to be an efficient
|
| 393 |
+
alternative modality to optical flow as motion representation
|
| 394 |
+
[45][13]. To extract the spatio-temporal features of each local
|
| 395 |
+
temporal window, we adopt the idea in[13] as a short-term
|
| 396 |
+
feature extraction module. Different from action recognition,
|
| 397 |
+
in the action prediction problem, we cannot get the spatio-
|
| 398 |
+
temporal information after the current frame, so we only keep
|
| 399 |
+
the short-term TDM (temporal difference module) in[13].
|
| 400 |
+
Specifically, for each segment, we randomly sample 5 frames
|
| 401 |
+
2
|
| 402 |
+
1
|
| 403 |
+
1
|
| 404 |
+
2
|
| 405 |
+
[
|
| 406 |
+
,
|
| 407 |
+
,
|
| 408 |
+
,
|
| 409 |
+
,
|
| 410 |
+
]
|
| 411 |
+
t
|
| 412 |
+
t
|
| 413 |
+
t
|
| 414 |
+
t
|
| 415 |
+
t
|
| 416 |
+
I
|
| 417 |
+
I
|
| 418 |
+
I
|
| 419 |
+
I I
|
| 420 |
+
I
|
| 421 |
+
−
|
| 422 |
+
−
|
| 423 |
+
+
|
| 424 |
+
+
|
| 425 |
+
=
|
| 426 |
+
, then the RGB difference information of
|
| 427 |
+
these frames is down-sampled, and the 2D convolutions
|
| 428 |
+
network is used to obtain the depth feature
|
| 429 |
+
( )
|
| 430 |
+
i
|
| 431 |
+
S I
|
| 432 |
+
, as
|
| 433 |
+
expressed in Equation (1).
|
| 434 |
+
( )
|
| 435 |
+
(
|
| 436 |
+
(
|
| 437 |
+
(
|
| 438 |
+
( ))))
|
| 439 |
+
i
|
| 440 |
+
i
|
| 441 |
+
S I
|
| 442 |
+
Upsample CNN Downsample D I
|
| 443 |
+
=
|
| 444 |
+
|
| 445 |
+
(1)
|
| 446 |
+
At the same time, to preserve the original frame-level
|
| 447 |
+
representation as much as possible, we fuse the original
|
| 448 |
+
features
|
| 449 |
+
tI with
|
| 450 |
+
( )
|
| 451 |
+
i
|
| 452 |
+
S I
|
| 453 |
+
after convolutions (in our actual
|
| 454 |
+
experiment, the original feature passes through a layer of 2D
|
| 455 |
+
CNN, as shown in Equation (2)).
|
| 456 |
+
(
|
| 457 |
+
)
|
| 458 |
+
( )
|
| 459 |
+
( )
|
| 460 |
+
i
|
| 461 |
+
t
|
| 462 |
+
S fuse
|
| 463 |
+
S I
|
| 464 |
+
CNN I
|
| 465 |
+
=
|
| 466 |
+
+
|
| 467 |
+
|
| 468 |
+
(2)
|
| 469 |
+
The fused feature is fused again with the feature from RGB
|
| 470 |
+
difference (Equation (3)). Finally, the feature of each segment
|
| 471 |
+
is obtained, which is the representation of segment scale.
|
| 472 |
+
(
|
| 473 |
+
)
|
| 474 |
+
( (
|
| 475 |
+
))
|
| 476 |
+
(
|
| 477 |
+
( ( )))
|
| 478 |
+
i
|
| 479 |
+
S out
|
| 480 |
+
CNN S fuse
|
| 481 |
+
CNN Downsample D I
|
| 482 |
+
=
|
| 483 |
+
+
|
| 484 |
+
|
| 485 |
+
(3)
|
| 486 |
+
Observed global scale. In action prediction, the action
|
| 487 |
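A minimal PyTorch sketch of Equations (1)-(3) is given below. The channel widths, the use of average pooling for Downsample, bilinear interpolation for Upsample, and averaging the frame differences are our assumptions for a self-contained example; the paper builds this module on a ResNet50 backbone following [13].

```python
# Minimal sketch of the segment scale (Equations (1)-(3)); channel sizes,
# pooling, and interpolation choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentScale(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.diff_cnn = nn.Conv2d(3, channels, 3, padding=1)   # CNN on RGB differences
        self.frame_cnn = nn.Conv2d(3, channels, 3, padding=1)  # one 2D conv on I_t
        self.fuse_cnn = nn.Conv2d(channels, channels, 3, padding=1)
        self.res_cnn = nn.Conv2d(3, channels, 3, padding=1)    # difference path of Eq. (3)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 5, 3, H, W), the 5 sampled frames [I_{t-2}, ..., I_{t+2}]
        center = frames[:, 2]                                  # I_t
        diffs = (frames[:, 1:] - frames[:, :-1]).mean(dim=1)   # D(I_i), averaged here
        h, w = diffs.shape[-2:]
        down = F.avg_pool2d(diffs, 2)                          # Downsample(D(I_i))
        s = F.interpolate(self.diff_cnn(down), size=(h, w),
                          mode="bilinear", align_corners=False)  # Eq. (1): S(I_i)
        s_fuse = s + self.frame_cnn(center)                    # Eq. (2)
        s_out = self.fuse_cnn(s_fuse) + F.interpolate(         # Eq. (3)
            self.res_cnn(down), size=(h, w), mode="bilinear", align_corners=False)
        return s_out

feat = SegmentScale()(torch.randn(2, 5, 3, 56, 56))
print(feat.shape)  # torch.Size([2, 64, 56, 56])
```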
+
evolution of the human body is an ongoing sequence of
|
| 488 |
+
information, and we use the observation rate to simulate its
|
| 489 |
+
progress. Therefore, the segments are temporally sequential,
|
| 490 |
+
and the representative actions can only evolve from front to
|
| 491 |
+
back. In the previous section, we model the local spatio-
|
| 492 |
+
temporal action of each segment. More logically, as time
|
| 493 |
+
progresses, each segment’s local temporal window is added to
|
| 494 |
+
the historical sequence before it. Therefore, the crux of the
|
| 495 |
+
problem is how to effectively utilize all observed segments to
|
| 496 |
+
reconstruct the historical global evolution.
|
| 497 |
+
|
| 498 |
+
|
| 499 |
+
Fig. 3. Overview of End-to-End Multi-scale Network. Given a full video, split it into K segments. For each segment, a CNN-based
|
| 500 |
+
module extracts the local motion evolution to achieve more fine-grained modeling, which we call the segment scale. Then, temporal
|
| 501 |
+
modeling is performed on each segment in chronological order to model the observed global action evolution, which we call the
|
| 502 |
+
observed global scale.
|
| 503 |
+
|
| 504 |
+
Full
|
| 505 |
+
video
|
| 506 |
+
ISegments
|
| 507 |
+
CNN-BasedArchitecture
|
| 508 |
+
Local
|
| 509 |
+
feature
|
| 510 |
+
X1
|
| 511 |
+
+X2
|
| 512 |
+
1x3
|
| 513 |
+
Xi
|
| 514 |
+
X10
|
| 515 |
+
RNN-Based
|
| 516 |
+
h1
|
| 517 |
+
h2
|
| 518 |
+
h3
|
| 519 |
+
hi
|
| 520 |
+
Architecture
|
| 521 |
+
★Y1
|
| 522 |
+
Y2
|
| 523 |
+
y3
|
| 524 |
+
vyi
|
| 525 |
+
VY10
|
| 526 |
+
Global
|
| 527 |
+
feature
|
| 528 |
+
Action classification
|
| 529 |
+
Baslketball5
|
| 530 |
+
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
|
| 531 |
+
Moreover, in the actual scene, the evolution of the action
|
| 532 |
+
cannot know its end time and duration, which means that the
|
| 533 |
+
overall length of the history is uncertain. Therefore, it is natural
|
| 534 |
+
to use the variable-length input characteristics of LSTM to
|
| 535 |
+
model the global spatiotemporal characteristics of historical
|
| 536 |
+
observations, as shown in formula (4).
|
| 537 |
+
( )
|
| 538 |
+
( (
|
| 539 |
+
))
|
| 540 |
+
Y i
|
| 541 |
+
L S out
|
| 542 |
+
=
|
| 543 |
+
|
| 544 |
+
(4)
|
| 545 |
+
As shown in Fig. 3, when the action evolves to the third
|
| 546 |
+
segment, the LSTM adds the short-term time window of the
|
| 547 |
+
third segment to the historical observation in the time
|
| 548 |
+
dimension. Implemented the observed global evolution to
|
| 549 |
+
model the first three segments progressively. In this way, the
|
| 550 |
+
spatiotemporal relationship in each segment can be modeled in
|
| 551 |
+
a more fine-grained manner, and the subsequent segments are
|
| 552 |
+
modeled in a progressive manner to model the historical global
|
| 553 |
+
history without additional computational consumption.
|
| 554 |
+
IV. EXPERIMENTS
|
| 555 |
+
In this section, we present the experiment results of our
|
| 556 |
+
framework. First, we describe the evaluation datasets and
|
| 557 |
+
implementation details. Then, we compare our E2EMSNet
|
| 558 |
+
with state-of-the-art methods.
|
| 559 |
+
|
| 560 |
+
A. Datasets
|
| 561 |
+
We evaluate our method on three video datasets: BIT[46],
|
| 562 |
+
HMDB51[47] and UCF101[48]. BIT consists of 8 classes of
|
| 563 |
+
human interactions (bow, boxing, handshake, high-five, hug,
|
| 564 |
+
kick, pat, push), with 50 videos per class. Videos are captured
|
| 565 |
+
in realistic scenes with cluttered backgrounds, partially
|
| 566 |
+
occluded body parts, moving objects, and variations in subject
|
| 567 |
+
appearance, scale, illumination condition, and viewpoint. Even
|
| 568 |
+
though BIT has a limited number of classes and videos, it is a
|
| 569 |
+
complex dataset because of their backgrounds and the
|
| 570 |
+
similarity of the beginning and ending scenes. The ratio of
|
| 571 |
+
videos between training and testing is 17:8. HMDB51 is a
|
| 572 |
+
large-scale human action recognition dataset that comprises 51
|
| 573 |
+
daily action categories. It contains some fine-grained human
|
| 574 |
+
facial motions, such as smiling, laughing, etc, in static
|
| 575 |
+
background windows, which are not seen in other comparable
|
| 576 |
+
datasets, and challenges the spatiotemporal modeling of
|
| 577 |
+
actions. There are 6766 video clips with at least 102 videos for
|
| 578 |
+
each class. There are three official data splits. UCF101 is a
|
| 579 |
+
dataset collected from Youtube and trimmed for action
|
| 580 |
+
recognition (each video contains exactly one action). It
|
| 581 |
+
includes 101 distinct action classes and 13320 overall video
|
| 582 |
+
clips with at least 100 videos for each category. All videos are
|
| 583 |
+
divided into 25 groups and updated with the setup of Three
|
| 584 |
+
Train/Test Splits.
|
| 585 |
+
|
| 586 |
+
B. Implementation details
|
| 587 |
+
Thanks to our end-to-end network structure design, we can
|
| 588 |
+
easily generalize to various video datasets. In experiments, we
|
| 589 |
+
use ResNet50 with the short-term module in [13] to build
|
| 590 |
+
segment scale. On the three datasets, we simulated the action
|
| 591 |
+
evolution with the observation rate from 0.1 to 1, with a step
|
| 592 |
+
size of 0.1, to obtain ten segments, and use each segment as a
|
| 593 |
+
segment scale. Our network structure can use any length and
|
| 594 |
+
number of segments as the segment scale. For each segment,
|
| 595 |
+
we randomly sample 5 frames for computing RGB differential
|
| 596 |
+
information. We employ convolutional layers pre-trained on
|
| 597 |
+
kinetics400, and set dropout to reduce overfitting. We first
|
| 598 |
+
convert the video into video frames, and each video frame is
|
| 599 |
+
resized to have shorter side in [256, 320] and a crop of
|
| 600 |
+
224×224 is randomly cropped. We use two NVIDIA GeForce
|
| 601 |
+
RTX 3090s to train our model. On the BIT dataset, we follow
|
| 602 |
+
the official settings to divide the training set and test set.
|
| 603 |
+
Specifically, in each category, 34 videos are used as the
|
| 604 |
+
training set, and 16 videos are used as the test set. On the
|
| 605 |
+
HMDB51 dataset, we follow the standard evaluation protocol
|
| 606 |
+
using three training/testing splits, and report the average
|
| 607 |
+
accuracy over three splits. On the UCF101 dataset, we use the
|
| 608 |
+
first 15 groups of videos for model training, the following 3
|
| 609 |
+
groups for model validation, and the remaining 7 groups for
|
| 610 |
+
testing.
|
| 611 |
+
|
| 612 |
+
C. Comparison with the state of the art
|
| 613 |
+
In this subsection, we compare out E2EMSNet with those
|
| 614 |
+
state-of-the-art methods, including DBoW[9], MTSSVM[28],
|
| 615 |
+
MMAPM[31], Deep-SCN[5], AAPNet [49], RGN-KF[12],
|
| 616 |
+
RSPG + AS-GCN[8], AORAP[50], and AASE +JOLO-
|
| 617 |
+
GCN[51] on the BIT dataset, MTSSVM[28], Global-local[52],
|
| 618 |
+
AKT[7], STRR[30] on the HMDB51 dataset, MTSSVM[28],
|
| 619 |
+
DeepSCN[5], AAPNet[49], Teacher-Student[11], RGN-KF
|
| 620 |
+
[12], RSPG + AS-GCN[8], SPR-Net[53], JVS + JCC +
|
| 621 |
+
JFIP[32], STRR (ResNet18) [30], and Xinxiao Wu et al.[54]
|
| 622 |
+
on the UCF101 dataset. We reported the results of these
|
| 623 |
+
compared methods provided by authors.
|
| 624 |
+
TableⅠillustrates the accuracy of action prediction and
|
| 625 |
+
compares our method with several state-of-the-art methods on
|
| 626 |
+
the BIT dataset. As seen from the results, our method achieves
|
| 627 |
+
significant improvements in observation rates from 0.1 to 1.
|
| 628 |
+
This can be explained by the fact that our method can make
|
| 629 |
+
reliable predictions on actions as the actions evolve.
|
| 630 |
+
|
| 631 |
+
TABLE I
|
| 632 |
+
THE ACCURACY (%) OF DIFFERENT ACTION PREDICTION METHODS ON BIT DATASET AT DIFFERENT
|
| 633 |
+
OBSERVATION RATIOS FROM 0.1 TO 1. NOTE THAT THE MISSING VALUE IS BECAUSE THE EXPERIMENTAL
|
| 634 |
+
RESULTS OF THE CORRESPONDING OBSERVATION RATE ARE NOT PROVIDED IN THE ORIGINAL PAPER.
|
| 635 |
+
Method
|
| 636 |
+
Input
|
| 637 |
+
Feature-dim
|
| 638 |
+
Observation Ratio
|
| 639 |
+
0.1
|
| 640 |
+
0.2
|
| 641 |
+
0.3
|
| 642 |
+
0.4
|
| 643 |
+
0.5
|
| 644 |
+
0.6
|
| 645 |
+
0.7
|
| 646 |
+
0.8
|
| 647 |
+
0.9
|
| 648 |
+
1.0
|
| 649 |
+
Avg.
|
| 650 |
+
DBoW[9]
|
| 651 |
+
|
| 652 |
+
Hand-crafted
|
| 653 |
+
22.66
|
| 654 |
+
25.78
|
| 655 |
+
40.63
|
| 656 |
+
43.75
|
| 657 |
+
46.88
|
| 658 |
+
54.69
|
| 659 |
+
55.47
|
| 660 |
+
54.69
|
| 661 |
+
55.47
|
| 662 |
+
53.13
|
| 663 |
+
45.31
|
| 664 |
+
|
| 665 |
+
6
|
| 666 |
+
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
|
| 667 |
+
MTSSVM[28]
|
| 668 |
+
|
| 669 |
+
Hand-crafted
|
| 670 |
+
28.12
|
| 671 |
+
32.81
|
| 672 |
+
45.31
|
| 673 |
+
55.45
|
| 674 |
+
60.00
|
| 675 |
+
61.72
|
| 676 |
+
67.19
|
| 677 |
+
70.31
|
| 678 |
+
71.09
|
| 679 |
+
76.56
|
| 680 |
+
56.85
|
| 681 |
+
MMAPM[31]
|
| 682 |
+
|
| 683 |
+
Hand-crafted
|
| 684 |
+
32.81
|
| 685 |
+
36.72
|
| 686 |
+
53.90
|
| 687 |
+
59.38
|
| 688 |
+
67.97
|
| 689 |
+
63.28
|
| 690 |
+
68.75
|
| 691 |
+
75.00
|
| 692 |
+
75.78
|
| 693 |
+
79.90
|
| 694 |
+
61.32
|
| 695 |
+
DeepSCN[5]
|
| 696 |
+
RGB
|
| 697 |
+
3D-CNN +
|
| 698 |
+
Hand-crafted
|
| 699 |
+
37.50
|
| 700 |
+
44.53
|
| 701 |
+
59.83
|
| 702 |
+
71.88
|
| 703 |
+
78.13
|
| 704 |
+
85.16
|
| 705 |
+
86.72
|
| 706 |
+
87.50
|
| 707 |
+
88.28
|
| 708 |
+
90.63
|
| 709 |
+
73.01
|
| 710 |
+
AAPNet[49]
|
| 711 |
+
RGB
|
| 712 |
+
3D-CNN +
|
| 713 |
+
Hand-crafted
|
| 714 |
+
38.84
|
| 715 |
+
45.31
|
| 716 |
+
64.84
|
| 717 |
+
73.40
|
| 718 |
+
80.47
|
| 719 |
+
88.28
|
| 720 |
+
88.28
|
| 721 |
+
89.06
|
| 722 |
+
89.84
|
| 723 |
+
91.40
|
| 724 |
+
74.97
|
| 725 |
+
RGN-KF[12]
|
| 726 |
+
RGB + Flow
|
| 727 |
+
2D-CNN
|
| 728 |
+
35.16
|
| 729 |
+
46.09
|
| 730 |
+
67.97
|
| 731 |
+
75.78
|
| 732 |
+
82.03
|
| 733 |
+
88.28
|
| 734 |
+
92.19
|
| 735 |
+
92.28
|
| 736 |
+
92.16
|
| 737 |
+
92.16
|
| 738 |
+
76.41
|
| 739 |
+
RSPG+AS-GCN[8]
|
| 740 |
+
Skeleton
|
| 741 |
+
LSTM
|
| 742 |
+
55.70
|
| 743 |
+
|
| 744 |
+
77.30
|
| 745 |
+
|
| 746 |
+
91.00
|
| 747 |
+
|
| 748 |
+
93.00
|
| 749 |
+
|
| 750 |
+
93.00
|
| 751 |
+
94.00
|
| 752 |
+
|
| 753 |
+
AORAP[50]
|
| 754 |
+
RGB + Flow
|
| 755 |
+
2D-CNN
|
| 756 |
+
40.16
|
| 757 |
+
|
| 758 |
+
71.48
|
| 759 |
+
|
| 760 |
+
92.89
|
| 761 |
+
|
| 762 |
+
96.8
|
| 763 |
+
|
| 764 |
+
|
| 765 |
+
96.48
|
| 766 |
+
79.56
|
| 767 |
+
AASE + JOLO-GCN[51]
|
| 768 |
+
Skeleton
|
| 769 |
+
LSTM
|
| 770 |
+
|
| 771 |
+
|
| 772 |
+
80.20
|
| 773 |
+
|
| 774 |
+
92.40
|
| 775 |
+
|
| 776 |
+
|
| 777 |
+
|
| 778 |
+
|
| 779 |
+
|
| 780 |
+
|
| 781 |
+
OCRL [6]
|
| 782 |
+
RGB
|
| 783 |
+
3D-CNN
|
| 784 |
+
|
| 785 |
+
|
| 786 |
+
65.6
|
| 787 |
+
|
| 788 |
+
84.4
|
| 789 |
+
|
| 790 |
+
90.6
|
| 791 |
+
|
| 792 |
+
89.1
|
| 793 |
+
|
| 794 |
+
|
| 795 |
+
E2EMSNet (Ours)
|
| 796 |
+
RGB
|
| 797 |
+
2D-CNN + LSTM
|
| 798 |
+
82.81
|
| 799 |
+
89.06
|
| 800 |
+
96.88
|
| 801 |
+
98.43
|
| 802 |
+
98.43
|
| 803 |
+
96.88
|
| 804 |
+
100
|
| 805 |
+
100
|
| 806 |
+
100
|
| 807 |
+
100
|
| 808 |
+
96.25
|
| 809 |
+
Table II shows the experimental results on the HMDB51 dataset, and Table III shows the experimental results on the UCF101 dataset. Thanks to the design of our segment scale, action evolution can be modeled in a more fine-grained way. As shown in the tables, at an observation ratio of 0.2 the accuracy on the HMDB51 dataset is improved by more than 10%, and the accuracy on UCF101 is improved by more than 3% over all compared methods except [32]. This means that our method can better predict the action class in the early stages of an action. As the observation ratio increases, our method remains competitive, although the performance gain becomes limited.
At the same time, we have to admit that although our method achieves relatively good performance on the HMDB51 and UCF101 datasets when the observation ratio is low, our model is limited at the later observation ratios as the action continues to evolve and the temporal scale keeps growing. We believe the modeling capacity of the observed global scale is insufficient for long time windows.
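To make the evaluation protocol behind Tables I-III concrete, the sketch below shows how the accuracy at each observation ratio and the "Avg." column can be computed. This is an illustrative reconstruction rather than the released implementation; the classify() interface is a hypothetical stand-in for any of the compared models.

    import numpy as np

    def evaluate_early_prediction(classify, videos, labels,
                                  ratios=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
        # Accuracy (%) at each observation ratio, plus their mean (the "Avg." column).
        accs = []
        for r in ratios:
            correct = 0
            for frames, y in zip(videos, labels):
                n_obs = max(1, int(round(r * len(frames))))  # observe only the first r of the clip
                correct += int(classify(frames[:n_obs]) == y)
            accs.append(100.0 * correct / len(videos))
        return accs, float(np.mean(accs))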

TABLE II
THE ACCURACY (%) OF DIFFERENT ACTION PREDICTION METHODS ON HMDB51 DATASET AT DIFFERENT OBSERVATION RATIOS FROM 0.1 TO 1. NOTE THAT THE MISSING VALUES ARE BECAUSE THE EXPERIMENTAL RESULTS AT THE CORRESPONDING OBSERVATION RATIOS ARE NOT PROVIDED IN THE ORIGINAL PAPER.

                                                            Observation Ratio
Method              Input        Feature-dim          0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0    Avg.
MTSSVM [28]         -            Hand-crafted         13.60  -      26.70  -      33.80  -      37.80  -      38.80  -      -
Global-local [52]   -            Hand-crafted         38.80  43.80  49.10  50.40  52.60  54.70  56.30  56.90  57.30  57.30  51.72
AKT [7]             RGB          3D-CNN               43.50  48.40  51.20  54.20  56.40  58.40  59.60  60.20  61.10  61.80  55.48
STRR [30]           RGB          3D-CNN               45.10  -      52.35  -      56.73  -      59.41  -      61.11  -      -
E2EMSNet (Ours)     RGB          2D-CNN + LSTM        59.21  60.52  62.23  64.47  64.73  64.86  64.86  65.26  65.13  65.39  63.67

TABLE III
THE ACCURACY (%) OF DIFFERENT ACTION PREDICTION METHODS ON UCF101 DATASET AT DIFFERENT OBSERVATION RATIOS FROM 0.1 TO 1. NOTE THAT THE MISSING VALUES ARE BECAUSE THE EXPERIMENTAL RESULTS AT THE CORRESPONDING OBSERVATION RATIOS ARE NOT PROVIDED IN THE ORIGINAL PAPER.

                                                                  Observation Ratio
Method                  Input        Feature-dim             0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0    Avg.
MTSSVM [28]             -            Hand-crafted            40.05  72.83  80.02  82.18  82.39  83.12  83.37  83.51  83.69  82.82  77.39
DeepSCN [5]             RGB          3D-CNN + Hand-crafted   45.02  77.64  82.95  85.36  85.75  86.70  87.10  87.42  87.50  87.63  81.30
AAPNet [49]             RGB          3D-CNN + Hand-crafted   59.85  80.85  86.78  86.47  86.94  88.34  88.34  89.85  90.85  91.99  85.02
Teacher-Student [11]    RGB          3D-CNN                  83.32  87.13  88.92  89.82  90.85  91.04  91.28  91.23  91.31  91.47  89.63
RGN-KF [12]             RGB + Flow   2D-CNN                  83.12  85.16  88.44  90.78  91.42  92.03  92.00  93.19  93.13  93.13  90.24
RSPG+AS-GCN [8]         Skeleton     LSTM                    -      -      90.30  -      93.10  -      -      -      -      94.70  -
SPR-Net [53]            RGB          3D-CNN                  88.70  -      -      -      91.60  -      -      -      -      91.40  -
JVS+JCC+JFIP [32]       RGB          (2D+1D)-CNN             -      91.70  -      -      -      -      -      -      -      -      -
STRR (ResNet18) [30]    RGB          3D-CNN                  80.86  -      88.61  -      89.31  -      90.31  -      89.82  -      -
Xinxiao Wu et al. [54]  RGB + Flow   2D-CNN                  82.36  85.57  88.97  -      91.32  -      92.41  -      93.02  -      -
E2EMSNet (Ours)         RGB          2D-CNN + LSTM           88.77  90.31  90.94  91.33  91.96  92.73  93.11  92.98  92.98  92.73  91.78

D. Ablation study
Here, we provide more evaluation results on the UCF101 dataset.
Influence of multi-scale architecture. Table IV illustrates the results of the ablation study on the different scale architectures. First, we introduce the details of the ablation study. Then, we analyze the effects of the multi-scale architecture by comparing the results under the different settings.
TABLE IV
THE ACCURACY (%) AT DIFFERENT SCALE SETTINGS ON THE UCF101 DATASET.

Observation ratio                              0.1     0.3     0.5     0.9     Avg.
The segment scale only                         90.56   91.58   91.83   91.45   91.55
The segment scale + observed global scale      90.05   90.82   92.60   92.47   91.78

'The segment scale only' uses the CNN-based module for action prediction. 'The segment scale + observed global scale' uses the CNN-based and LSTM modules to learn information at different scales. In the first setting, for action clips with different observation ratios, we sample 5 frames and use the segment scale alone for prediction. In the second setting, we adopt the complete structure with both the segment scale and the observed global scale. Even though the difference in average accuracy is insignificant, the multi-scale structure is essential for ongoing action prediction. As shown in Fig. 4, the results of 'The segment scale only' show little discrimination across different observation ratios, which indicates that its feature representation is not discriminative enough with respect to the observation ratio. At the same time, due to the sparse sampling over long temporal scales, we believe this manner will perform worse for complex actions and actions with long duration. Conversely, adding the observed global scale and changing the sampling strategy makes the prediction process more cognitively plausible (as the observation ratio increases, the confidence of the prediction should increase). Moreover, owing to the more fine-grained feature extraction, it is more robust to complex and long-duration actions.
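For concreteness, the two settings compared above can be sketched as follows. This is a simplified, PyTorch-style illustration of the two-scale idea (a 2D-CNN over RGB-difference segments, then an LSTM over the per-segment features), not the exact E2EMSNet implementation; the backbone and layer sizes are placeholders.

    import torch
    import torch.nn as nn

    class TwoScaleNet(nn.Module):
        # Segment scale (2D-CNN on RGB differences) + observed global scale (LSTM).
        def __init__(self, feat_dim=512, hidden=512, num_classes=101):
            super().__init__()
            self.segment_cnn = nn.Sequential(               # placeholder 2D-CNN backbone
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, segments):                        # segments: (B, T, 3, H, W) RGB differences
            b, t = segments.shape[:2]
            feats = self.segment_cnn(segments.flatten(0, 1)).view(b, t, -1)
            out, _ = self.lstm(feats)                       # fuse segment features over time
            return self.fc(out[:, -1])                      # classify from the observed global state

In the 'segment scale only' setting, the LSTM is dropped and the pooled CNN features of the sampled frames are classified directly.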

Fig. 4. Prediction accuracy (%) under the two scale settings (segment scale only vs. two scales) on the UCF101 dataset, plotted over observation ratios from 0.1 to 1.0.

Influence of hyperparameters. Finally, we briefly introduce the experimental results on the UCF101 dataset under different hyperparameter settings. To ensure a single variable at a time, we conducted comparative experiments on the following hyperparameters, and the results are shown in Table V.
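As a reference for the step-decay rows of Table V (decay rate 0.1 at the listed epochs), a typical schedule looks like the sketch below. The optimizer choice and the epoch count are assumptions for illustration only; the paper does not specify them here.

    import torch
    import torch.nn as nn

    model = nn.Linear(512, 101)                        # stand-in for the actual network
    optimizer = torch.optim.SGD(model.parameters(), lr=5e-4, momentum=0.9)
    # Step decay as in Table V: multiply the learning rate by 0.1 at the given epochs.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60, 100], gamma=0.1)

    for epoch in range(120):
        # ... one training epoch would run here ...
        scheduler.step()                               # lr: 5e-4 -> 5e-5 after epoch 60 -> 5e-6 after 100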
E. Analysis of the performance of different actions
We follow the grouping of the UCF101 dataset and divide it into five groups: Human-Object Interaction, Body-Motion Only, Human-Human Interaction, Playing Musical Instruments, and Sports. We selected three action categories under each group, for a total of fifteen action categories, to visually analyze their classification results. The selected categories are: Blowing Candles, Blow Dry Hair, Cutting In Kitchen, Apply Eye Makeup, Baby Crawling, Pull Ups, Haircut, Head Massage, Punch, Playing Guitar, Playing Piano, Playing Violin, Basketball, Basketball Dunk, and Biking. We keep the two modules, segment scale and observed global scale, and only modify and retrain the last classification layer. The confusion matrix of the results for the 15 actions at a progress level of 20% is shown in Fig. 5. It can be seen intuitively from the figure that our model still has stable prediction performance across different scenarios, even in the very early stage of actions. Only a few actions with very similar appearance (Haircut, Blow Dry Hair, and Head Massage) were mispredicted. Fig. 6 shows an appearance comparison of Haircut, Blow Dry Hair, and Head Massage; the three actions are difficult to distinguish, which leads to the mispredictions.
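The row-normalized confusion matrix plotted in Fig. 5 can be produced along the following lines. This is a minimal sketch that assumes the predictions at the 20% progress level have already been collected.

    import numpy as np

    def confusion_matrix(y_true, y_pred, num_classes=15):
        # cm[i, j]: fraction of samples of true class i predicted as class j.
        cm = np.zeros((num_classes, num_classes), dtype=np.float64)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1
        row_sums = cm.sum(axis=1, keepdims=True)
        return cm / np.maximum(row_sums, 1)            # each row sums to 1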
TABLE V
THE ACCURACY (%) ON UCF101 DATASET UNDER SEVERAL HYPERPARAMETERS. (NOTE: LIMITED BY RESOURCES AND TIME, OUR EXPERIMENTAL RESULTS DO NOT GUARANTEE THAT ALL HYPERPARAMETERS HAVE BEEN ADJUSTED TO THE OPTIMUM.)

                                                      Observation Ratios
Hyperparameter variables                         0.1     0.3     0.5     0.7     0.9     Avg.
Hidden size of LSTM           512                90.05   90.82   92.60   92.22   92.48   91.78
                              1024               88.77   90.05   90.82   91.07   91.20   90.60
                              2048               88.23   88.93   89.95   91.03   91.73   90.14
Learning rate                 0.0001             82.14   84.06   85.97   87.12   88.01   85.85
                              0.0005             90.05   90.82   92.60   92.22   92.48   91.78
                              0.001              89.41   90.31   91.07   90.82   90.56   90.57
Decay step (decay rate=0.1)   20, 80             89.41   90.05   91.58   91.84   91.96   91.09
                              40, 100            90.31   90.18   91.45   92.09   92.09   91.28
                              60, 100            90.18   91.07   91.71   92.35   92.61   91.78

Fig. 5. Confusion matrix of the results of 15 classes at a progress level of 20% on the UCF101 dataset.

Fig. 6. Appearance comparison of Haircut, Blow Dry Hair, and Head Massage.

V. CONCLUSION
In this paper, we have proposed a network model, E2EMSNet, for action prediction in videos. We propose two temporal scales, the segment scale and the observed global scale, to model the evolution of actions, and fuse the two scales into an end-to-end framework. A stack of 2D convolutional layers with RGB difference as input is introduced to model the local evolution of actions in a more fine-grained way. Next, the LSTM layer fuses the segment scales along the temporal dimension into an observed global scale to model the long-term evolution of actions. Experimental validation and analysis show that our method possesses a powerful local-scale modeling capability for ongoing actions. However, due to the growth of the temporal scale and the increasing noise, our observed global scale does not achieve the global modeling ability we expected for evolving actions, which will be the focus of our future work.
References
[1] Liu J, Shahroudy A, Wang G, et al. Skeleton-based online action prediction using scale selection network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 42(6): 1453-1467.
[2] Hou Y, Li Z, Wang P, et al. Skeleton optical spectra-based action recognition using convolutional neural networks[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2016, 28(3): 807-811.
[3] Luo H, Lin G, Yao Y, et al. Dense semantics-assisted networks for video action recognition[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(5): 3073-3084.
[4] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 6202-6211.
[5] Kong Y, Tao Z, Fu Y. Deep sequential context networks for action prediction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1473-1481.
[6] Li M, Chen L, Lu J, et al. Order-constrained representation learning for instructional video prediction[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(8): 5438-5452.
[7] Cai Y, Li H, Hu J F, et al. Action knowledge transfer for action prediction with partial videos[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33(01): 8118-8125.
[8] Chen L, Lu J, Song Z, et al. Recurrent semantic preserving generation for action prediction[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(1): 231-245.
[9] Ryoo M S. Human activity prediction: Early recognition of ongoing activities from streaming videos[C]//2011 International Conference on Computer Vision. IEEE, 2011: 1036-1043.
[10] Kong Y, Gao S, Sun B, et al. Action prediction from videos via memorizing hard-to-predict samples[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32(1).
[11] Wang X, Hu J F, Lai J H, et al. Progressive teacher-student learning for early action prediction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 3556-3565.
[12] Zhao H, Wildes R P. Spatiotemporal feature residual propagation for action prediction[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 7003-7012.
[13] Wang L, Tong Z, Ji B, et al. TDN: Temporal difference networks for efficient action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 1895-1904.
[14] Lin J, Gan C, Han S. TSM: Temporal shift module for efficient video understanding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 7083-7093.
[15] Simonyan K, Zisserman A. Two-stream convolutional networks for action recognition in videos[J]. Advances in Neural Information Processing Systems, 2014, 27.
[16] Wang X, Girshick R, Gupta A, et al. Non-local neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7794-7803.
[17] Jiang B, Wang M M, Gan W, et al. STM: Spatiotemporal and motion encoding for action recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 2000-2009.
[18] Wang H, Tran D, Torresani L, et al. Video modeling with correlation networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 352-361.
[19] Ji S, Xu W, Yang M, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(1): 221-231.
[20] Tran D, Bourdev L, Fergus R, et al. Learning spatiotemporal features with 3D convolutional networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 4489-4497.
[21] Tran D, Wang H, Torresani L, et al. A closer look at spatiotemporal convolutions for action recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6450-6459.
[22] Xie S, Sun C, Huang J, et al. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 305-321.
[23] Li K, Li X, Wang Y, et al. CT-Net: Channel tensorization network for video classification[J]. arXiv preprint arXiv:2106.01603, 2021.
[24] Liu Z, Wang L, Wu W, et al. TAM: Temporal adaptive module for video recognition[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 13708-13718.
[25] Li Y, Ji B, Shi X, et al. TEA: Temporal excitation and aggregation for action recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 909-918.
[26] Feichtenhofer C. X3D: Expanding architectures for efficient video recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 203-213.
[27] Zhao H, Wildes R P. Review of video predictive understanding: Early action recognition and future action prediction[J]. arXiv preprint arXiv:2107.05140, 2021.
[28] Kong Y, Kit D, Fu Y. A discriminative model with multiple temporal scales for action prediction[C]//European Conference on Computer Vision. Springer, Cham, 2014: 596-611.
[29] Singh G, Saha S, Sapienza M, et al. Online real-time multiple spatiotemporal action localisation and prediction[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 3637-3646.
[30] Wu X, Wang R, Hou J, et al. Spatial–temporal relation reasoning for action prediction in videos[J]. International Journal of Computer Vision, 2021, 129(5): 1484-1505.
[31] Kong Y, Fu Y. Max-margin action prediction machine[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 38(9): 1844-1858.
[32] Fernando B, Herath S. Anticipating human actions by correlating past with the future with Jaccard similarity measures[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 13224-13233.
[33] Vondrick C, Pirsiavash H, Torralba A. Anticipating visual representations from unlabeled video[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 98-106.
[34] Shi Y, Fernando B, Hartley R. Action anticipation with RBF kernelized feature mapping RNN[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 301-317.
[35] Gammulle H, Denman S, Sridharan S, et al. Predicting the future: A jointly learnt model for action anticipation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 5562-5571.
[36] Chen J, Bao W, Kong Y. Group activity prediction with sequential relational anticipation model[C]//European Conference on Computer Vision. Springer, Cham, 2020: 581-597.
[37] Oord A, Dieleman S, Zen H, et al. WaveNet: A generative model for raw audio[J]. arXiv preprint arXiv:1609.03499, 2016.
[38] Wang H, Kläser A, Schmid C, et al. Action recognition by dense trajectories[C]//CVPR 2011 - IEEE Conference on Computer Vision and Pattern Recognition. 2011: 3169-3176.
[39] Zheng Z, An G, Ruan Q. Multi-level recurrent residual networks for action recognition[J]. arXiv preprint arXiv:1711.08238, 2017.
[40] Zhao Y, Xiong Y, Wang L, et al. Temporal action detection with structured segment networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2914-2923.
[41] Chung J, Ahn S, Bengio Y. Hierarchical multiscale recurrent neural networks[J]. arXiv preprint arXiv:1609.01704, 2016.
[42] Wang J, Wang Z, Li J, et al. Multilevel wavelet decomposition network for interpretable time series analysis[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 2437-2446.
[43] Hu H, Wang L, Qi G J. Learning to adaptively scale recurrent neural networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019, 33(01): 3822-3829.
[44] Campos V, Jou B, Giró-i-Nieto X, et al. Skip RNN: Learning to skip state updates in recurrent neural networks[J]. arXiv preprint arXiv:1708.06834, 2017.
[45] Zhao Y, Xiong Y, Lin D. Recognize actions by disentangling components of dynamics[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6566-6575.
[46] Kong Y, Jia Y, Fu Y. Learning human interaction by interactive phrases[C]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2012: 300-313.
[47] Kuehne H, Jhuang H, Garrote E, et al. HMDB: A large video database for human motion recognition[C]//2011 International Conference on Computer Vision. IEEE, 2011: 2556-2563.
[48] Soomro K, Zamir A R, Shah M. UCF101: A dataset of 101 human actions classes from videos in the wild[J]. arXiv preprint arXiv:1212.0402, 2012.
[49] Kong Y, Tao Z, Fu Y. Adversarial action prediction networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 42(3): 539-553.
[50] Liu C, Gao Y, Li Z, et al. Action prediction network with auxiliary observation ratio regression[C]//2021 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2021: 1-6.
[51] Chen L, Lu J, Song Z, et al. Ambiguousness-aware state evolution for action prediction[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(9): 6058-6072.
[52] Lai S, Zheng W S, Hu J F, et al. Global-local temporal saliency action prediction[J]. IEEE Transactions on Image Processing, 2017, 27(5): 2272-2285.
[53] Hou J, Wu X, Wang R, et al. Confidence-guided self refinement for action prediction in untrimmed videos[J]. IEEE Transactions on Image Processing, 2020, 29: 6017-6031.
[54] Wu X, Zhao J, Wang R. Anticipating future relations via graph growing for action prediction[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(4): 2952-2960.

Xiaofa Liu received the B.S. degree from Hohai University, Nanjing, China, in 2017. He is currently pursuing the M.S. degree in mechanical engineering with the School of Modern Post, Beijing University of Posts and Telecommunications, Beijing, China. His research interests include robotics and computer vision.

Jianqin Yin (Member, IEEE) received the Ph.D. degree from Shandong University, Jinan, China, in 2013. She is currently a Professor with the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China. Her research interests include service robots, pattern recognition, machine learning, and image processing.

Yuan Sun received the Ph.D. degree from Beijing University of Aeronautics and Astronautics, Beijing, China, in 2016. She is currently an Assistant Professor with the Electronic Engineering School, Beijing University of Posts and Telecommunications, Beijing, China. Her research interests include satellite navigation technology and satellite autonomous integrity.

Zhicheng Zhang received the Ph.D. degree from Jilin University, Changchun, China, in 2011. He is currently an Associate Professor with the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China. His research interests include intelligent optimization and its applications, signal detection and estimation, and machine learning.

Jin Tang received the Ph.D. degree from Beijing Institute of Technology, Beijing, China, in 2007. She is currently an Assistant Professor with the Artificial Intelligence School, Beijing University of Posts and Telecommunications, Beijing, China. Her research interests include signal processing, pattern recognition, and deep learning.
1NAzT4oBgHgl3EQfRftD/content/tmp_files/load_file.txt
ADDED
The diff for this file is too large to render. See raw diff
29AzT4oBgHgl3EQfuf0F/content/2301.01690v1.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5542bad206b71ce3a93e10f0f19d3aa166d1be2b0e784b2dc6e117e6228f5085
+size 315162

29AzT4oBgHgl3EQfuf0F/vector_store/index.faiss
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96a500235116c85f79d3d79044691778219a7dae16028f7d33968fe8c2685aaa
+size 4128813

29AzT4oBgHgl3EQfuf0F/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b3b2873db8c4bc59007048cefaaf7fb0e8d2c7402ad725817c68934a98a128a
+size 173651

29FQT4oBgHgl3EQfGTVE/vector_store/index.pkl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5020d0bb4c1e9b1ee7fe4158d3430a4888ccaf5752a76aed0eb9e2c2a1286ab3
+size 97841
3tAzT4oBgHgl3EQfD_po/content/tmp_files/2301.00985v1.pdf.txt
ADDED
@@ -0,0 +1,2547 @@
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2022
|
| 2 |
+
1
|
| 3 |
+
More is Better: A Database for Spontaneous
|
| 4 |
+
Micro-Expression with High Frame Rates
|
| 5 |
+
Sirui Zhao, Huaying Tang, Xinglong Mao, Shifeng Liu, Hanqing Tao, Hao Wang, Tong Xu, Member, IEEE,
|
| 6 |
+
and Enhong Chen, Senior Member, IEEE,
|
| 7 |
+
Abstract—As one of the most important psychic stress reactions, micro-expressions (MEs), are spontaneous and transient facial
|
| 8 |
+
expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming
|
| 9 |
+
increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis
|
| 10 |
+
and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models.
|
| 11 |
+
Despite the recent efforts of several spontaneous ME datasets to alleviate this problem, it is still a tiny amount of work. To solve the
|
| 12 |
+
problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called
|
| 13 |
+
DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced by 671 participants and annotated
|
| 14 |
+
by more than 20 annotators throughout three years. Afterwards, we adopt four classical spatiotemporal feature learning models on
|
| 15 |
+
DFME to perform MER experiments to objectively verify the validity of DFME dataset. In addition, we explore different solutions to the
|
| 16 |
+
class imbalance and key-frame sequence sampling problems in dynamic MER respectively on DFME, so as to provide a valuable
|
| 17 |
+
reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of
|
| 18 |
+
automatic MER, and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
|
| 19 |
+
Index Terms—Emotion recognition, facial micro-expression, micro-expression recognition, datasets
|
| 20 |
+
!
|
| 21 |
+
1
|
| 22 |
+
INTRODUCTION
|
| 23 |
+
F
|
| 24 |
+
ACIAL expression is essential for humans to transmit
|
| 25 |
+
emotional information, accounting for 55% of our daily
|
| 26 |
+
communication [1]. As a particular facial expression, micro-
|
| 27 |
+
expression (ME) usually refers to the spontaneous and
|
| 28 |
+
subtle facial movements that appear instantaneously when
|
| 29 |
+
an individual tries to hide or suppress real emotions un-
|
| 30 |
+
der pressure. The concept of ME was first proposed in
|
| 31 |
+
1966 [2]. Subsequently, Ekman et al. [3] discovered a ME
|
| 32 |
+
case in a video of a psychiatrist and depressed patient
|
| 33 |
+
conversation in 1969. Concretely, throughout the pleasant
|
| 34 |
+
conversation, when the psychiatrist asked the patient about
|
| 35 |
+
her plans, a distressed expression quickly flashed across the
|
| 36 |
+
patient’s face, which was called ME by Ekman. As MEs
|
| 37 |
+
can effectively reveal the genuine emotions of individuals,
|
| 38 |
+
recognizing MEs can provide essential technical support in
|
| 39 |
+
•
|
| 40 |
+
Sirui Zhao is with the School of Computer Science and Technology,
|
| 41 |
+
University of Science and Technology of China, Hefei, Anhui 230027,
|
| 42 |
+
China, and also with the School of Computer Science and Technology,
|
| 43 |
+
Southwest University of Science and Technology, Mianyang 621010,
|
| 44 |
+
China.
|
| 45 |
+
E-mail: sirui@mail.ustc.edu.cn
|
| 46 |
+
•
|
| 47 |
+
Huaying Tang, Hanqing Tao are with the School of Computer Science and
|
| 48 |
+
Technology, University of Science and Technology of China, Hefei, Anhui
|
| 49 |
+
230027, China.
|
| 50 |
+
E-mail: {iamthy, hqtao}@mail.ustc.edu.cn
|
| 51 |
+
•
|
| 52 |
+
Xinglong Mao, Shifeng Liu, Hao Wang, Tong Xu and Enhong Chen are
|
| 53 |
+
with School of Data Science, University of Science and Technology of
|
| 54 |
+
China, Hefei, Anhui 230027, China.
|
| 55 |
+
E-mail: {maoxl, lsf0619}@mail.ustc.edu.cn,
|
| 56 |
+
{wanghao3, tongxu, cheneh}@ustc.edu.cn
|
| 57 |
+
This work has been submitted to the IEEE for possible publication. Copyright
|
| 58 |
+
may be transferred without notice, after which this version may no longer be
|
| 59 |
+
accessible.
|
| 60 |
+
Sirui Zhao, Huaying Tang, Xinglong Mao and Shifeng Liu contributed
|
| 61 |
+
equally. Corresponding authors: Enhong Chen and Tong Xu.
|
| 62 |
+
Manuscript received December xx, xx; revised xx xx, xx.
|
| 63 |
+
lie detection, psychological healing, and public safety [4],
|
| 64 |
+
[5], [6], [7].
|
| 65 |
+
In essence, ME is a kind of psychic stress reaction. Com-
|
| 66 |
+
pared with ordinary facial expression (also called macro-
|
| 67 |
+
expression, MaE), ME has the characteristics of short dura-
|
| 68 |
+
tion (less than 0.5s), partial movement, and low movement
|
| 69 |
+
intensity, so it is challenging to recognize MEs accurately.
|
| 70 |
+
Figure 1 illustrates the comparison between a ME and a
|
| 71 |
+
MaE with the same emotion category. It shows vividly that
|
| 72 |
+
the MaE is obvious enough to be distinguished easily by
|
| 73 |
+
a single image, while the ME is subtle and can only be
|
| 74 |
+
observed through an image sequence.
|
| 75 |
+
The early research on ME recognition (MER) was mainly
|
| 76 |
+
based on manual analysis in the field of psychology. How-
|
| 77 |
+
ever, the manual analysis relies on expert experience, which
|
| 78 |
+
is time-consuming and labor-intensive, and has low recog-
|
| 79 |
+
nition accuracy. Therefore, it is urgent to use computers’
|
| 80 |
+
powerful perception and computing power for automatic
|
| 81 |
+
MER. In recent years, lots of efforts in the fields of com-
|
| 82 |
+
puter vision and affective computing have been devoted
|
| 83 |
+
to automatic MER. For example, in order to extract the
|
| 84 |
+
spatial-temporal MEs, Pfister et al. [8] introduced a local
|
| 85 |
+
binary pattern from three orthogonal planes (LBP-TOP) [9]
|
| 86 |
+
for MER. Liu et al. [10] proposed Mian Directional Mean Op-
|
| 87 |
+
tical Flow (MDMO). Wang et al. [11] proposed Transferring
|
| 88 |
+
Long-term Convolutional Nerual Network (TLCNN). Zhao
|
| 89 |
+
et al. [12] proposed a novel two-stage learning (i.e., prior
|
| 90 |
+
learning and target learning) method based on a siamese 3D
|
| 91 |
+
convolutional neural network for MER. However, due to the
|
| 92 |
+
lack of support for a large number of well-labeled ME data,
|
| 93 |
+
the recognition accuracy and robustness of these methods
|
| 94 |
+
are challenging to meet the needs of actual scenarios. There-
|
| 95 |
+
fore, it is urgent to build a large-scale ME dataset.
|
| 96 |
+
arXiv:2301.00985v1 [cs.CV] 3 Jan 2023
|
| 97 |
+
|
| 98 |
+
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2022
|
| 99 |
+
2
|
| 100 |
+
···
|
| 101 |
+
···
|
| 102 |
+
···
|
| 103 |
+
···
|
| 104 |
+
onset
|
| 105 |
+
0
|
| 106 |
+
apex
|
| 107 |
+
1.25
|
| 108 |
+
offset
|
| 109 |
+
2.08
|
| 110 |
+
second
|
| 111 |
+
(a) An example of MaE with ”Happiness” emotion.
|
| 112 |
+
···
|
| 113 |
+
···
|
| 114 |
+
···
|
| 115 |
+
···
|
| 116 |
+
onset
|
| 117 |
+
0
|
| 118 |
+
apex
|
| 119 |
+
0.19
|
| 120 |
+
offset
|
| 121 |
+
0.36
|
| 122 |
+
second
|
| 123 |
+
(b) An example of ME with “Happiness” emotion.
|
| 124 |
+
Fig. 1: Examples of MaE and ME from the same person with a timeline in seconds, both belong to the ”Happiness” emotion
|
| 125 |
+
category. Noteworthy, the onset frame and the offset frame denote the start and end time of an expression respectively,
|
| 126 |
+
and the apex frame represents the moment when an expression changes most dramatically. White arrows on the face of
|
| 127 |
+
the apex frame indicate the general directions of facial movements, and the longer and thicker the arrows, the greater the
|
| 128 |
+
intensity of facial movements.
|
| 129 |
+
Over the past decade, although researchers have published several spontaneous ME datasets, such as SMIC [13], CASME II [14], SAMM [15], MMEW [16] and CAS(ME)3 [17], these datasets have small sample sizes and still cannot fully satisfy the need of MER models for large-scale ME samples. In fact, building a large-scale spontaneous ME dataset is full of challenges, mainly in three respects. First, it is difficult to induce MEs, because they are facial movements that leak out after an individual attempts to suppress them. Second, it is difficult to label and distinguish ME fragments, because the movement of a ME is weak and fast and thus hard for the naked eye to perceive. Third, due to the short duration of MEs, high-speed cameras are usually needed to record them; the data collected by high-speed cameras are highly redundant, so labeling ME clips is extremely time-consuming and labor-intensive.
In order to solve the challenge of ME data shortage, this paper constructs the current largest ME dataset, called DFME (Dynamic Facial Micro-expressions), to advance the development of MER. Specifically, our DFME includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three consecutive years. Subsequently, four popular spatiotemporal video feature learning models were reproduced on DFME to perform MER, so as to objectively verify the availability of the dataset and provide a benchmark for subsequent research. In addition, aiming at the class imbalance and key-frame sequence sampling problems existing in MER, we explored different solutions on DFME. In general, the contributions of this paper can be summarized as follows:
• This paper focuses on solving the problem of lacking abundant spontaneous ME data and builds a new ME dataset called DFME containing 7,526 ME videos across multiple high frame rates (i.e., 200fps, 300fps, 500fps). To the best of our knowledge, DFME has the largest ME sample size at present.
• We reproduced four spatiotemporal feature learning models to carry out MER tasks on DFME, objectively verifying the reliability of the data quality and providing a benchmark for subsequent MER studies.
• We explored and analyzed different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a reference for future research.
The rest of this paper is organized as follows. First, we summarize currently existing ME datasets and review related work on MER in the next section. In Section 3, we elaborate on the building details and statistical properties of our DFME dataset. The comprehensive dataset evaluation is then developed and discussed in Section 4. Finally, research conclusions and future work are addressed in Section 5.
2 RELATED WORK

In this section, we first review the existing public spontaneous ME datasets related to MER. Then, we summarize some representative MER studies based on deep learning technologies.

2.1 Micro-expression Datasets
The premise of obtaining an automatic MER algorithm with excellent performance is to hold a dataset with sufficient ME samples whose labels are credible and whose visual features are distinguishable. Since ME analysis is an emerging field of affective computing, the number of ME datasets is still relatively limited.
TABLE 1: Statistical Information of Current Spontaneous ME Datasets

| Dataset | Part | Participants | Gender (M/F) | Age | MEs | Frame Rate (fps) | Resolution | Emotion Labels | FACS AU |
| SMIC | HS | 16 | 10/6 | range 22-34, mean 28.1 | 164 | 100 | 640×480 | Pos (51) Neg (70) Sur (43) | No |
| SMIC | VIS | 8 | | | 71 | 25 | 640×480 | Pos (28) Neg (23) Sur (20) | No |
| SMIC | NIR | 8 | | | 71 | 25 | 640×480 | Pos (28) Neg (23) Sur (20) | No |
| CASME | | 35 | 22/13 | mean 22.03 | 195 | 60 | 640×480 / 1280×720 | Amu (5) Dis (88) Fear (2) Con (3) Sad (6) Tense (28) Sur (20) Rep (40) | Yes |
| CASME II | | 35 | / | mean 22.03 | 247 | 200 | 640×480 | Hap (33) Dis (60) Sur (25) Rep (27) Oth (102) | Yes |
| CAS(ME)2 | | 22 | 9/13 | range 19-26, mean 22.59 | 57 | 30 | 640×480 | Pos (8) Neg (21) Sur (9) Oth (19) | Yes |
| SAMM | | 32 | 16/16 | range 19-57, mean 33.24 | 159 | 200 | 2040×1088 | Hap (24) Dis (8) Fear (7) Ang (20) Sur (13) Sad (3) Oth (84) | Yes |
| MEVIEW | | 16 | / | / | 29 | 30 | 1280×720 | Hap (5) Dis (1) Fear (3) Ang (1) Sur (8) Con (4) Unc (7) | Yes |
| MMEW | | 36 | / | mean 22.35 | 300 | 90 | 1920×1080 | Hap (36) Dis (72) Fear (16) Ang (8) Sur (89) Sad (13) Oth (66) | Yes |
| CAS(ME)3 | PART A | 100 | 50/50 | / | 943 | 30 | 1280×720 | Hap (64) Dis (281) Fear (93) Ang (70) Sur (201) Sad (64) Oth (170) | Yes |
| CAS(ME)3 | PART C | 31 | 9/22 | mean 23.5 | 166 | 30 | 1280×720 | Pos (16) Neg (99) Sur (30) Oth (20) | Yes |
| 4DME | DI4D | 65 | 38/27 | range 22-57, mean 27.8 | 267 | 60 | 1200×1600 | Pos (34) Neg (127) Sur (30) Rep (6) PosSur (13) NegSur (8) RepSur (3) PosRep (8) NegRep (7) Oth (31) | Yes |
| 4DME | Grayscale | | | | 267 | 60 | 640×480 | | Yes |
| 4DME | RGB | | | | 267 | 30 | 640×480 | | Yes |
| 4DME | Depth | | | | 267 | 30 | 640×480 | | Yes |
| DFME | PART A | 72 | 31/41 | range 17-40, mean 22.43 (all DFME) | 1118 | 500 | 1024×768 | Hap (111) Dis (321) Fear (143) Ang (97) Con (77) Sur (187) Sad (142) Oth (40) | Yes |
| DFME | PART B | 92 | 61/31 | | 969 | 300 | 1024×768 | Hap (78) Dis (406) Fear (115) Ang (56) Con (45) Sur (143) Sad (119) Oth (7) | Yes |
| DFME | PART C | 492 | 282/210 | | 5439 | 200 | 1024×768 | Hap (803) Dis (1801) Fear (634) Ang (466) Con (279) Sur (878) Sad (374) Oth (204) | Yes |

1 Some datasets contain not only MEs but also MaEs, as well as long video clips for the detection task; here we only show the information about the ME data. All statistics come from the corresponding original paper or the downloaded dataset.
2 The number of participants is counted from the data given in the corresponding original paper; some participants were not successfully induced to produce MEs.
3 Pos: Positive; Neg: Negative; Sur: Surprise; Amu: Amusement; Hap: Happiness; Dis: Disgust; Rep: Repression; Ang: Anger; Sad: Sadness; Con: Contempt; Unc: Unclear; Oth: Others; PosSur: Positively surprise; NegSur: Negatively surprise; RepSur: Repressively surprise; PosRep: Positively repression; NegRep: Negatively repression.
Nevertheless, since more and more researchers have begun to pay attention to ME analysis, some high-quality datasets are gradually emerging. Table 1 summarizes the characteristics of these datasets.

The two earliest ME datasets, USF-HD [18] and Polikovsky [19], contain only posed MEs: the participants were first required to watch video clips containing ME samples and then reproduce them by imitation. However, naturally generated MEs strongly correlate with emotions, whereas posed ones are deliberately displayed and have nothing to do with the participants' current emotional state. Consequently, these two datasets are rarely used by researchers for ME analysis.
Subsequent researchers proposed to induce spontaneous MEs with the neutralization paradigm. Under this paradigm, several strong emotional stimuli are used to elicit expressions, during which participants are instructed to keep a neutral face as much as possible, with a certain high-pressure mechanism imposed on them. Datasets adopting the neutralization paradigm include SMIC [13], CASME [20], CASME II [14], CAS(ME)2 [21], SAMM [15], MMEW [16], and 4DME [22], which will be introduced in turn below.
The SMIC dataset [13] is the first published spontaneous ME dataset and consists of three parts: HS, VIS, and NIR. The HS part includes 164 ME samples from 16 participants, recorded by a high-speed camera with a frame rate of 100 frames per second (fps) and a resolution of 640×480. Both the VIS and NIR parts contain 71 ME samples from 8 individuals; the former was recorded using a standard visual camera and the latter using a near-infrared camera. Two annotators classified each ME into three emotion categories (positive, negative, and surprise) based on the participants' self-reports about the elicitation videos. Facial action units (AUs) were not annotated in SMIC.
The CASME series datasets are released by the Institute of Psychology, Chinese Academy of Sciences. As the earliest dataset in this series, CASME [20] contains a total of 195 ME samples from 19 participants, with a frame rate of 60fps. Two annotators labeled the facial AUs, together with the corresponding onset, apex, and offset frames of each ME sample, frame by frame. According to the facial AUs, the participants' self-reports, and the relevant video content, MEs were divided into eight emotion categories: amusement, sadness, disgust, surprise, contempt, fear, repression, and tense. CASME II [14] is an advanced version of CASME. First, the number of ME samples has been expanded to 247, from 26 participants. Besides, CASME II provides a higher frame rate of 200fps and a facial-area resolution of 280×340 to capture more subtle changes in expressions. Five emotion categories were labeled in CASME II: happiness, disgust, surprise, repression, and others. The CAS(ME)2 dataset [21] embodies two parts, both collected at 30fps and 640×480 pixels. Different from all the other datasets above, the first part of CAS(ME)2 contains 87 long video clips with both MaEs and MEs, which can be used to promote research on ME detection. The other part consists of 300 MaEs and 57 MEs, labeled with four emotion tags: positive, negative, surprise, and others.
The SAMM dataset [15] has the highest resolution of all published spontaneous ME datasets; it includes 159 ME samples generated by 32 participants, with a frame rate of 200fps and a resolution of 2040×1088. To achieve a better elicitation effect, before the formal start of the collection, participants were asked to fill in a scale, and a series of stimulus videos was then customized for each participant according to the scale; this is how SAMM differs from other datasets. SAMM contains seven emotion categories: happiness, disgust, surprise, fear, anger, sadness, and others. Three coders annotated the AUs and key-frames in detail for each ME sample.
The MMEW dataset [16] consists of 300 ME and 900 MaE samples from 36 participants, collected at 90fps and 1920×1080 resolution. Each expression sample is marked with one of seven emotion labels (the same as SAMM), AUs, and three key-frames. Compared with the previous datasets, MMEW is more conducive to models that use MaE samples collected under the same parameter settings and elicitation environment to assist in learning ME features.
To capture the movement information of MEs in all directions as comprehensively as possible, the 4DME dataset [22] made significant innovations in the recording method. Each ME sample in this dataset has multi-modality video data, including 4D facial data reconstructed from 3D facial mesh sequences, and traditional 2D frontal facial grayscale, RGB, and depth videos. 4DME contains 267 MEs and 123 MaEs from 41 participants, thus 1,068 ME videos of four forms and 492 MaE videos in total. In addition, five emotion labels (positive, negative, surprise, repression, and others) were annotated based on facial AUs only, noting that each sample may have multiple emotion labels (up to two).
Unlike the datasets built with the neutralization paradigm, the MEVIEW dataset [23] consists of video clips of two real high-pressure scenes downloaded from the Internet. There are 29 ME samples in total, with a frame rate of 30fps and a resolution of 1280×720, divided into seven manually annotated emotion categories (the same as SAMM). Although these samples come from real-life scenarios and have high ecological validity, there are many uncontrollable factors, such as frequent camera shot switching, which results in few segments containing full human faces.
The CAS(ME)3 dataset [17] adopted the mock crime paradigm to elicit MEs with high ecological validity. However, unlike MEVIEW, the collection was still controlled in a laboratory environment, yielding 166 MEs and 347 MaEs. CAS(ME)3 also contains two other parts: one consists of 943 MEs and 3,143 MaEs collected using the neutralization paradigm, each sample marked with AUs, key-frames, and one of seven emotion labels (the same as SAMM); the other part contains 1,508 unlabeled long video clips, which can be used for self-supervised learning of ME detection and recognition. This dataset was collected at a frame rate of 30fps with a resolution of 1280×720.
Despite more and more datasets striving to record the movement characteristics of MEs in greater detail and more comprehensively through various methods, these datasets are still small-scale. In automatic ME analysis, models based on deep learning have become mainstream in practice; however, with insufficient sample sizes, complex models easily overfit during training. Although this problem can be alleviated by using data augmentation to increase the number of samples, augmentation may introduce many uncontrollable noises. Some work has proposed training on composite datasets, but different datasets have different parameter settings, so such a simple fusion is not reasonable. In addition, due to the short duration and low intensity of MEs, a higher frame rate may help capture more details; nevertheless, the highest frame rate among all the above datasets is only 200fps, and most are below 100fps. Therefore, it is necessary to establish a larger-scale ME dataset with a higher frame rate.
2.2 Micro-expression Recognition Approaches
In the past decade, MER has attracted more and more attention from scholars in affective computing and computer vision. The first attempt at automatic, spontaneous MER dates back to 2011, when Pfister et al. [8] utilized the local binary pattern from three orthogonal planes (LBP-TOP) to explore MER on the first spontaneous ME dataset, SMIC. Since then, more and more efforts have been devoted to automatic MER. In general, current MER methods can be roughly divided into hand-crafted feature based and deep learning based methods. Typical hand-crafted ME features include LBP-TOP [9], HOOF [24], 3DHOG [19], and their variants [25], [26], [27]. However, the hand-crafted feature based methods heavily rely on complex expert knowledge, and the extracted ME features have limited discrimination. Current MER methods mainly use deep neural networks for high-level expression feature learning and emotion classification, and focus on the twin challenges that MEs are subtle and that ME data are scarce for model training. Further, according to whether the MER model considers ME temporal information or not, we divide the current deep learning based MER methods into single frame based MER and video sequence based MER. In the following subsections, we categorize and summarize these two types of MER methods.
2.2.1 Single frame based MER methods
The single frame based MER method usually uses only the highest-intensity frame, i.e., the apex frame in RGB or optical-flow format from the ME video, as the input of neural networks to learn spatial ME features. Considering the challenge of lacking sufficient ME samples, Peng et al. [28] first selected ResNet-10 [29] pre-trained on a large-scale image dataset as the backbone and then fine-tuned the classification network on large MaE samples for MER using apex frames; encouragingly, the recognition accuracy exceeds that of the hand-crafted methods based on LBP-TOP, HOOF, and 3DHOG. Inspired by the success of capsule models in image recognition, Quang et al. [30] proposed a CapsuleNet for MER using only apex frames. Recently, Wang et al. [31] proposed an expression-identity disentanglement network for MER by leveraging MaE databases as guidance. Li et al. [32] first spotted the apex frame by estimating pixel-level change rates in the frequency domain, then proposed a joint feature learning architecture coupling local and global information from the detected apex frames to recognize MEs. Meanwhile, Liong et al. [33] explored the effectiveness and superiority of using the optical flow of the apex frame in the ME video. Inspired by this work, Liu et al. [34] first calculated the optical-flow image between the onset frame and the apex frame of each ME clip and then used a pre-trained ResNet-18 network to encode the optical-flow image for MER. In particular, they introduced domain adversarial training strategies to address the challenge of lacking large-scale ME data for training and won first place in MEGC2019. Furthermore, Zhou et al. [35] proposed a novel Feature Refinement (FR) approach with expression-specific feature learning and fusion for MER based on the optical-flow information of apex frames. Gong et al. [36] proposed a meta-learning-based multi-model fusion network for MER.

Overall, the single frame based MER investigations are conducted on apex frames of ME videos without temporal information, which reduces the complexity of the deep neural networks used. In addition, the single frame based MER method has the advantage that large-scale image datasets are available for transfer learning, which effectively mitigates model overfitting under insufficient ME data. Nevertheless, single frame based MER discards the temporal information in the ME video, which contains rich ME clues and is an important feature distinguishing MEs from MaEs.
2.2.2 Video sequence based MER methods
Unlike single frame based MER, video sequence based MER can learn spatiotemporal ME features from the whole ME video or a sub-sequence of it, and is thus preferable in that it preserves motion details. Fully considering the important expression states in the ME video, Kim et al. [37] first used a CNN to encode the spatial feature of each expression state (i.e., onset, onset-to-apex transition, apex, apex-to-offset transition, and offset), then adopted an LSTM to learn temporal features on top of the encoded spatial ME features. Wang et al. [11] proposed the Transferring Long-term Convolutional Neural Network (TLCNN) to address spatial-temporal ME feature learning under small-sample ME data; TLCNN is also based on the CNN-LSTM structure and transfers knowledge from large-scale expression data and single frames of ME video clips. Khor et al. [38] proposed an Enriched Long-term Recurrent Convolutional Network (ELRCN) that performs spatial and temporal enrichment by stacking different input data and features. Unlike the CNN-LSTM architecture, a 3D convolutional network (3DCNN) [39] can learn spatial and temporal ME features simultaneously. Based on 3DCNN, Peng et al. [40] proposed a Dual Temporal Scale Convolutional Neural Network (DTSCNN), which uses the optical-flow sequences of ME videos as model input to obtain high-level ME features and can adapt to different frame rates of ME video clips. Wang et al. [41] proposed a MER framework based on Eulerian motion based 3DCNN (EM-CED), which uses pre-extracted Eulerian motion feature maps as input together with a global attention module to encode rich spatiotemporal information. Xia et al. [42] proposed a deep recurrent convolutional network based MER approach, which models the spatiotemporal ME deformations in views of facial appearance and geometry separately. To address the challenge of extracting high-level ME features when training lacks sufficient and class-balanced ME samples, Zhao et al. [12] represented the original ME video by its optical-flow sequence and proposed a novel two-stage learning (i.e., prior learning and target learning) method based on a siamese 3D convolutional neural network for MER. Sun et al. [43] proposed a knowledge transfer technique that distills and transfers knowledge from action units for MER based on crucial temporal sequences, where knowledge from a pre-trained deep teacher neural network is distilled and transferred to a shallow student neural network. Zhao et al. [44] proposed ME-PLAN, a deep prototypical learning framework on RGB key-frame sequences based on a 3D residual prototypical network and a local-wise attention module for MER. Recently, with the advancement of deep learning technology, other powerful neural networks, such as GCNs [45] and transformers, have also been used for MER.

Although video sequence based MER makes full use of the spatial-temporal information of MEs, the corresponding models have higher structural complexity and face serious over-fitting problems on the current small-scale ME datasets. Therefore, building a large-scale ME dataset remains the primary, pivotal task in developing an automatic MER system.
[Figure 2 depicts the collection setup: three LED lights with reflector umbrellas; the participant with a monitor playing elicitation videos; a high-speed camera (1024×768, freely configurable frame rate) connected through a 10 Gigabit optical fiber transmission line to a 4T-sized high-speed acquisition memory; and the collector with one monitor for recording MEs and another for playing videos.]
Fig. 2: Experimental environment for eliciting MEs
3 DFME
As the old saying goes, "one cannot make bricks without straw". Similarly, it is difficult to design an automatic MER model with a high recognition rate and reliability without sufficient training and testing ME samples. However, due to the short-duration, low-intensity, and local-movement characteristics of MEs, constructing large-scale ME datasets is extremely challenging. To solve the problem of ME data hunger, we construct DFME, a spontaneous ME dataset with the largest sample size at present. In the following subsections, we elaborate on the building details and statistical properties of our DFME dataset.
3.1 Participant and Equipment
For DFME, 671 participants were recruited (381 males and 290 females), mainly college students and teaching staff. Participants' ages ranged from 17 to 40 years, with a mean of 22.43 years (standard deviation = 2.54), and all were from China. Before the formal experiment, the participants were informed about the purpose, experimental procedure, and possible benefits and risks of our research. On confirming their voluntary participation, participants signed an informed consent form and chose whether to allow their facial images and videos to be used in academic papers.
Considering the low intensity and short duration of MEs, the recording process is easily disturbed by other factors, so it was carried out in a well-controlled laboratory environment, as shown in Fig. 2. In this environment, we set up three LED lights with reflector umbrellas to ensure a bright and stable light source on the participants' faces during the experiments. In addition, we used a self-developed high-speed camera (1024×768, freely configurable frame rates) to capture MEs, with a 10 Gigabit optical fiber transmission line connecting the camera to a 4T-sized high-speed acquisition memory that stores the collected ME video clips in real time.
3.2 Elicitation Material and Procedure
At present, there are three generations of ME-eliciting paradigms. Although the third generation has the highest ecological validity, simulating natural scenes inevitably involves interaction and conversation with the participants, and the irrelevant body and mouth movements caused by speaking are themselves a kind of noise for MEs. Therefore, we still use the neutralization paradigm to elicit MEs, so as to avoid noise as much as possible, focus on the movement characteristics of MEs, and facilitate operation, control, and implementation. The specific details of the elicitation process are introduced below.

TABLE 2: Video clips for eliciting MEs

| Video ID | Duration | Emotion Category | Mean Score (0-5) |
| 02sa | 3'44" | Sadness | 4 |
| 03sa | 4'18" | Sadness | 3.36 |
| 06c | 2'01" | Contempt | 2.83 |
| 07a | 1'26" | Anger | 3.49 |
| 08su | 1'26" | Surprise | 2.16 |
| 09f | 2'22" | Fear | 3.72 |
| 10a | 2'58" | Anger | 4.33 |
| 11d | 1'24" | Disgust | 3.95 |
| 13f | 2'14" | Fear | 3.36 |
| 14d | 1'22" | Disgust | 3.23 |
| 17h | 1'17" | Happiness | 2.81 |
| 18h | 1'58" | Happiness | 3.08 |
| 20d | 0'46" | Disgust | 2.87 |
| 21c | 1'44" | Contempt | 2.11 |
| 23sa | 1'44" | Sadness | 3.25 |
The effectiveness of the elicitation materials determines the quantity and quality of MEs, so selecting materials with high emotional valence is crucial [14]. The stimuli we used were all video clips from the Internet, ranging in length from 46 to 258 seconds. To find more effective stimulus materials, we recruited 50 volunteers to evaluate 30 previously collected video clips. The evaluation process was as follows: after watching each video, volunteers were asked to choose exactly one emotion from happiness, contempt, disgust, sadness, fear, surprise, and anger as the main emotion evoked by the video, and to score the stimulus level on a scale of 1 to 5, corresponding to intensity from weakest to strongest. Finally, we took the emotion selected by more than half of the volunteers as the emotional class of each video, and by ranking the average stimulus intensity values, we obtained the optimal 15 video clips as the elicitation materials adopted in our experiment. Specific statistical details are shown in Table 2.
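This selection rule is easy to reproduce. Below is a minimal sketch (hypothetical names; ratings is a list of (emotion, score) pairs, one per volunteer) of how a clip's class and mean intensity could be derived:

from collections import Counter

def rate_clip(ratings):
    # Majority emotion across volunteers plus the mean 1-5 intensity;
    # clips without a strict majority emotion are discarded.
    emotion, votes = Counter(e for e, _ in ratings).most_common(1)[0]
    if votes <= len(ratings) / 2:
        return None
    return emotion, sum(s for _, s in ratings) / len(ratings)

# Clips are then ranked by mean intensity and the top 15 retained.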
The collection took place in the configured laboratory environment. Before the start, each participant was led to a specific seat. By adjusting the height of the seat, the focal length of the camera, and the brightness of the LED lamps, we ensured that the participant's face appeared completely, clearly, and brightly in the centre of the screen. The monitor in front of the participant then played, in turn, ten randomly selected elicitation videos covering all seven basic emotion types that had previously been verified as effective. While watching the videos, participants were required to keep a neutral face as far as possible and to control the occurrence of their facial expressions; if they failed and repeatedly showed obvious expressions, they had to complete an extraordinarily long and boring questionnaire as punishment. In addition, they were asked to keep their sitting posture upright, without excessive head movements, and to devote their full attention to the video being played. After watching each video, participants had a period of rest to ease their emotions. During this procedure, they were also asked to fill in an affective grade scale according to the emotional experience just generated, and to form a self-report including the timestamp where the expression occurred, along with its emotion category and intensity, based on the video sequences recorded by the high-speed camera, which would help the subsequent annotators understand their MEs. Owing to cognitive differences, the emotional orientation of the elicitation materials and the internal emotional experience of participants are sometimes not exactly consistent; moreover, external expressions of the same emotion also vary across individuals. It is therefore necessary to require participants to clarify their true inner emotions in their self-reports whenever expressions appear.
3.3 ME Annotation
Building the DFME dataset required a two-stage annotation: the sample selection stage, and the coding and categories labeling stage. In the first stage, we clipped short fragments containing valid expression samples from the collected long video sequences. The second stage included three rounds of fine-grained annotation, through which we confirmed all MEs and labeled their key-frames, facial muscle action units (AUs), and emotion categories. Furthermore, we performed an annotation agreement test to verify the reliability of the emotion labels.
3.3.1 Sample Selection
In the sample selection stage, through rough manual segmentation, the collected video sequences containing participants' facial information were divided into several shorter video fragments containing one or more MaEs or MEs. Using self-developed video annotation software, an experienced annotator checked the collected original video sequences frame by frame to locate fragments with facial muscle movements. Guided by the participants' self-reports, the annotator could effectively distinguish whether the facial movements were expressions definitely related to emotion or interference data unrelated to emotion (such as intense blinking caused by dry eyes, habitual mouth opening, etc.); the former were retained while the latter were discarded. Besides, we also kept some fragments with blinking or eye movements if they contained MaE or ME data.
3.3.2 Coding and Categories Labeling
After the sample selection stage, three rounds of fine-grained annotation were carried out successively to determine the MEs together with their three key-frames (i.e., onset frame, apex frame, and offset frame), facial muscle action unit (AU) labels, and emotion category labels.

The apex frame is the frame at which the facial expression changes most dramatically. In the first round of fine-grained annotation, five annotators independently marked the onset, apex, and offset frames of each expression clip, and the median of their annotations was taken as the final result for each of the three key-frames. We then kept as ME samples those expressions whose duration from onset to offset frame was less than 500 ms or whose duration from onset to apex frame was less than 250 ms; those beyond the time limit were regarded as MaE samples. For instance, MEs collected at a frame rate of 500fps should satisfy either f_{offset} - f_{onset} + 1 \le 250 or f_{apex} - f_{onset} + 1 \le 125, where f_k denotes the frame index corresponding to key-frame k.
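Both rules are mechanical and easy to apply in code. The following minimal sketch (hypothetical function names; the thresholds follow the 500 ms / 250 ms criteria above) aggregates the five annotators' marks by the median and then applies the duration filter:

import statistics

def final_keyframes(onsets, apexes, offsets):
    # The final onset/apex/offset indices are the medians of the five
    # annotators' independent marks.
    return (int(statistics.median(onsets)),
            int(statistics.median(apexes)),
            int(statistics.median(offsets)))

def is_micro_expression(f_onset, f_apex, f_offset, fps):
    # Keep an expression as a ME if onset-to-offset lasts at most 500 ms
    # or onset-to-apex lasts at most 250 ms; otherwise treat it as a MaE.
    # At 500 fps these bounds are 250 and 125 frames, respectively.
    return (f_offset - f_onset + 1 <= 0.5 * fps) or \
           (f_apex - f_onset + 1 <= 0.25 * fps)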
In the second round of fine-grained annotation, we annotated the AUs occurring in the MEs using the Facial Action Coding System (FACS) [46]. A ME may exhibit a single AU (such as AU4) or a combination of different AUs (for example, AU6+AU12), and when multiple AU categories appear, obscure ones are easily overlooked. To enhance the reliability and integrity of the AU labels, two experienced annotators independently labeled the AUs for all the MEs identified previously. According to what was actually induced from the participants during the experiments, and also referring to the AUs mainly involved in previously published ME datasets, we included a total of 24 AU categories for annotation: six appear in the upper face, 13 in the lower face, and the other five belong to miscellaneous actions. Table 3 lists the specific AU numbers and their corresponding facial actions. Since manually annotated AU intensity is highly subjective, annotators merely indicated whether each AU appeared, rather than rating its intensity.
After labeling the AUs, the two annotators determined the final AU labels through crosscheck and discussion. The reliability between the two annotators was 0.83, calculated as

R = \frac{2 \times |AU(A_1) \cap AU(A_2)|}{AllAU},   (1)

where |AU(A_1) \cap AU(A_2)| is the number of AUs on which both annotators agreed, and AllAU is the total number of AUs labeled in a ME by the two annotators.
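Interpreting AllAU in Eq. (1) as the sum of the two annotators' label counts (which keeps R within [0, 1]), the measure reduces to a Dice coefficient over AU sets; a minimal sketch:

def au_reliability(aus_a1, aus_a2):
    # aus_a1, aus_a2: sets of AU labels given by annotators A1 and A2
    # for one ME, e.g., {"AU6", "AU12"} and {"AU12"}.
    agreed = len(aus_a1 & aus_a2)
    total = len(aus_a1) + len(aus_a2)
    return 2 * agreed / total if total else 1.0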
TABLE 3: Key AUs Included in DFME

Upper face action units: AU1 Inner Brow Raiser; AU2 Outer Brow Raiser; AU4 Brow Lowerer; AU5 Upper Lid Raiser; AU6 Cheek Raiser; AU7 Lid Tightener.
Lower face action units: AU9 Nose Wrinkler; AU10 Upper Lip Raiser; AU12 Lip Corner Puller; AU14 Dimpler; AU15 Lip Corner Depressor; AU16 Lower Lip Depressor; AU17 Chin Raiser; AU18 Lip Pucker; AU20 Lip Stretcher; AU23 Lip Tightener; AU24 Lip Presser; AU25 Lips Part; AU28 Lip Suck.
Miscellaneous actions: AU31 Jaw Clencher; AU38 Nostril Dilator; AU39 Nostril Compressor; M57 Head Forward; M58 Head Back.

[Figure 3 shows one apex frame per emotion, with the annotated AUs: (a) Anger (AU4+AU5); (b) Contempt (Left-AU6+Left-AU12); (c) Disgust (AU4+AU7+AU10); (d) Fear (AU1+AU4+AU7+AU20); (e) Happiness (AU6+AU12); (f) Sadness (AU17); (g) Surprise (AU1+AU2+AU5).]
Fig. 3: Representative ME Samples of Seven Basic Emotion Categories in DFME

In the third round of fine-grained labeling, we performed the emotion labeling of the MEs, taking eight categories into account: anger, contempt, disgust, fear, happiness, sadness, surprise, and others, where 'others' denotes MEs that are difficult to assign to the former seven prototypical emotion categories. Seven annotators independently gave emotion labels for all MEs, and the emotion category agreed on by more than half of them was taken as the final label.
In previous spontaneous ME datasets, the reference basis for emotion labeling was not exactly the same. In some datasets, as represented by SMIC, emotion labels were determined based on the self-reports provided by participants. Other studies held that seeing is believing, so their annotation was based on the correspondence between AUs and emotions. However, on the one hand, unlike in MaEs, only part of the AUs can appear simultaneously in MEs due to their low intensity, and some AUs are shared by different emotion categories, which may lead to category confusion. On the other hand, we should not ignore the differences in self-emotional cognition among participants, which means that self-reports given for a whole piece of elicitation material may be rough and inaccurate. Therefore, in DFME, the emotion labels were determined through a comprehensive analysis of facial AUs, participants' self-reports, and the elicitation material contents, consistent with the method adopted by the CASME series. It is worth mentioning that we obtained the participants' fine-grained self-reports during data collection, and this is the information we recommend consulting first when determining emotion labels. We matched the corresponding timestamps of MEs and elicitation materials through playback, enabling participants to report their emotions for each successful ME induction, which significantly improved the confidence of self-reports in emotion labeling. Fig. 3 shows some representative ME samples of the seven basic emotion categories in DFME.
3.3.3 Annotation Agreement
Having reliable emotion categories for MEs is of vital significance for a dataset. In this section, encouraged by [48], we utilize Fleiss's Kappa test [47] to evaluate the quality of our emotion annotation. Fleiss's Kappa measures the agreement among three or more annotators, testing the consistency of annotation results; we therefore consider it an excellent indicator of the reliability of our emotion annotation.
In DFME, seven annotators independently labeled each ME sample based on its facial AUs, an accurate self-report, and the corresponding elicitation material content. The samples were divided into eight emotion categories: {1: anger, 2: contempt, 3: disgust, 4: fear, 5: happiness, 6: sadness, 7: surprise, 8: others}. Let n = 7 denote the total number of annotators, N the total number of ME video clips, and K = 8 the number of emotion categories. With n_{ij} the number of annotators who assigned the i-th ME video clip to the j-th category, we can calculate p_j, the proportion of all assignments made to the j-th emotion:

p_j = \frac{1}{N \times n} \sum_{i=1}^{N} n_{ij},   (2)

\sum_{j=1}^{K} p_j = 1.   (3)
Then, the extent of agreement among the n annotators on the i-th ME video clip, denoted P_i, is calculated; in other words, it is the proportion of agreeing pairs among all n(n-1) possible pairs of assignments for the i-th ME:

P_i = \frac{1}{n \times (n-1)} \left[ \left( \sum_{j=1}^{K} n_{ij}^2 \right) - n \right].   (4)

The mean of the P_i is therefore

\bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i.   (5)
TABLE 4: AUs of High Occurrence in MEs of Seven Basic Emotion Categories

Each cell gives an AU and its pct(%)^1 of occurrence:

| Anger | Contempt | Disgust | Fear | Happiness | Sadness | Surprise |
| AU4 (72.5) | L/R-AU12^2 (78.7) | AU4 (73.6) | AU4 (54.1) | AU12 (79.8) | AU4 (42.2) | AU1 (65.6) |
| AU7 (29.1) | AU6 (19.2) | AU7 (40.4) | AU7 (35.3) | AU6 (61.6) | AU14 (26.1) | AU5 (60.2) |
| AU24 (16.3) | L/R-AU10 (10.6) | AU10 (11.8) | AU5 (16.2) | AU24 (12.1) | AU24 (19.2) | AU2 (60.0) |
| AU5 (7.6) | AU7 (7.8) | AU24 (8.4) | AU24 (14.5) | L/R-AU12 (10.1) | AU7 (16.5) | L/R-AU2 (25.6) |
| AU23 (5.6) | L/R-AU2 (5.7) | AU14 (6.7) | AU1 (11.1) | AU10 (6.2) | AU17 (10.8) | L/R-AU1 (17.8) |
| AU14 (5.6) | AU14 (5.7) | AU14 (8.8) | AU15 (6.9) | L/R-AU5 (10.7) | AU10 (5.2) | AU17 (6.0) |
| AU23 (5.1) | AU17 (4.8) | AU10 (4.8) | AU1 (4.8) | | | |

1 pct (percentage): the statistical range is all MEs from the first 300 participants.
2 L/R denotes the left/right half of an AU.
We also have

P_e = \sum_{j=1}^{K} p_j^2.   (6)

Finally, we can calculate \kappa by

\kappa = \frac{\bar{P} - P_e}{1 - P_e}.   (7)
Performing Fleiss's Kappa test on DFME, we obtained κ = 0.72. According to Table 5, our emotion annotators thus achieve substantial agreement, meaning that our emotion labels are quite reliable.
TABLE 5: Interpretation of κ for Fleiss's Kappa Test

| κ | Interpretation |
| ≤ 0 | Poor agreement |
| 0.01-0.20 | Slight agreement |
| 0.21-0.40 | Fair agreement |
| 0.41-0.60 | Moderate agreement |
| 0.61-0.80 | Substantial agreement |
| 0.81-1.00 | Almost perfect agreement |
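For reference, equations (2)-(7) can be computed directly from the N×K assignment counts; a minimal sketch (hypothetical name; counts[i][j] holds n_ij and each row sums to n):

def fleiss_kappa(counts):
    N, K = len(counts), len(counts[0])
    n = sum(counts[0])  # annotators per clip, 7 in DFME
    # Eq. (2): proportion of all assignments falling in category j.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(K)]
    # Eq. (4): pairwise agreement on each clip i.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N              # Eq. (5): mean observed agreement
    P_e = sum(x * x for x in p)       # Eq. (6): expected chance agreement
    return (P_bar - P_e) / (1 - P_e)  # Eq. (7)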
3.4 Statistical Properties of DFME
The DFME dataset consists of three parts: PART A, PART B, and PART C. The only difference among the three parts is the frame rate setting of the high-speed camera during the experiment. In PART A, all 1,118 ME samples from 72 participants have a frame rate of 500fps. PART B was recorded at 300fps and holds 969 ME samples from 92 participants. PART C is the largest, with 5,439 ME samples from 492 participants at a frame rate of 200fps. Although we recruited a total of 671 participants, 15 of them had such strong control over their facial expressions that we could not collect any ME sample from them. Therefore, the final DFME dataset contains 7,526 ME samples from 656 participants, and each sample carries an emotion category label as well as AU labels annotated according to FACS. Fig. 4 details the distribution of the ME samples.
Given that we collected fine-grained self-reports and AU labels of considerable reliability, DFME is conducive to uncovering the emotion-AU correspondence rules in MEs. We therefore counted the proportion of high-occurrence AUs in each emotion (Table 4), which reflects the preference for each AU in MEs of different emotions and is not affected by the emotion category imbalance in the dataset. We also matched emotions with AU combinations according to the statistical results; the conclusions are shown in Table 6.
TABLE 6: Matching Emotion and AU Combinations in MEs

| Emotion Category | AU Combinations |
| Anger | AU4+AU5, AU23 |
| Contempt | L/R-AU12, AU6+L/R-AU12 |
| Disgust | AU4+AU7+AU10, AU14 |
| Fear | AU14+AU24, AU1+AU4, AU4+AU5 |
| Happiness | AU6+AU12, AU12 |
| Sadness | AU14, AU17, AU15, AU14+AU24 |
| Surprise | AU1+AU2+AU5, AU1+AU2, AU5 |
| Shared^1 | AU4, AU4+AU7, AU7, AU24 |

1 Shared: AU combinations commonly appearing with high frequency in Anger, Disgust, Fear, and Sadness.
Based on the statistical results presented in Table 4, we have some findings to discuss:
• In MaEs, AU9 (nose wrinkler) is highly associated with disgust, and AU20 (lip stretcher) is related to fear. These two AUs frequently appear in MaEs but are not easily induced in MEs. We ought not to conclude that these AUs' association with their corresponding emotions no longer exists in MEs; rather, when participants tried to restrain their emotions, it was easier for them to control certain facial muscles, such as AU9 and AU20, than others.
• AU4 (brow lowerer), AU7 (lid tightener), and AU24 (lip presser) all occur at high frequency across different negative emotions (disgust, anger, fear, sadness, etc.). Without the assistance of participants' fine-grained self-reports, it is definitely challenging to distinguish MEs of negative emotions merely by relying on these common AUs, which is also one of the reasons why some models excessively confuse disgust MEs with those of other negative emotions in the seven-class automatic MER task.
• In the positive emotion (i.e., happiness), some AUs related to negative emotions can occur together with AU6 or AU12, specifically AU10 (associated with disgust), AU24 (associated with negative emotions), and Left/Right-AU12 (associated with contempt). The appearance of these extra AUs is a sign of participants trying to suppress their positive feelings, hide their smiles, and twist their expressions.
[Figure 4 is a stacked bar chart; the underlying ME sample counts per emotion and part are:]

| | Disgust | Surprise | Happiness | Fear | Sadness | Anger | Contempt | Others |
| PART A | 321 | 187 | 111 | 143 | 142 | 97 | 77 | 40 |
| PART B | 406 | 143 | 78 | 115 | 119 | 56 | 45 | 7 |
| PART C | 1801 | 878 | 803 | 634 | 374 | 466 | 279 | 204 |
| Combined | 2528 | 1208 | 992 | 892 | 635 | 619 | 401 | 251 |

Fig. 4: Distribution of ME Samples in DFME. Each column represents the total sample number of an emotion category, and the three pieces colored from light to deep show the proportion of samples in PART A, PART B, and PART C, respectively.
4 DATASET EVALUATION
In this section, we conduct comprehensive experiments to verify the effectiveness of our DFME dataset for the automatic MER task, based on influential spatiotemporal feature learning models. In addition, we specifically analyze the class imbalance problem in ME datasets and explore two kinds of strategies for alleviating it on our DFME. Furthermore, we explore the influence of different ME key-frame sequence sampling strategies on MER. These experiments can provide a reference for future MER research using the DFME dataset.
4.1
|
| 1419 |
+
Evaluation Dataset
|
| 1420 |
+
The DFME dataset is described in detail in Section 3. For the
|
| 1421 |
+
subsequent MER verification, we combined 7, 275 samples
|
| 1422 |
+
with clear emotion labels in PART A, B and C of DFME
|
| 1423 |
+
as our experimental dataset. The emotion labels include
|
| 1424 |
+
disgust, surprise, happiness, fear, sadness, anger and contempt.
|
| 1425 |
+
4.2
|
| 1426 |
+
Data Preprocessing
|
| 1427 |
+
In facial expression recognition, many variables, such as
|
| 1428 |
+
backgrounds, head poses and unequal video lengths, can
|
| 1429 |
+
affect the final recognition results. Therefore, before formally
|
| 1430 |
+
conducting automatic MER experiments, we need to prepro-
|
| 1431 |
+
cess all ME videos in the following steps to minimize the
|
| 1432 |
+
influence of irrelevant variables.
|
| 1433 |
+
4.2.1
|
| 1434 |
+
Face Alignment
|
| 1435 |
+
To eliminate the differences in pose and angle among all ME
|
| 1436 |
+
samples, we need to perform face alignment. In this step,
|
| 1437 |
+
we took the following operations for each ME sample. We
|
| 1438 |
+
first selected a frontal face image as a reference and adopted
|
| 1439 |
+
Style Aggregated Network (SAN) [49] to extract its facial
|
| 1440 |
+
landmarks. Afterwards, we used Procrustes analysis [50] to
|
| 1441 |
+
compute an affine transformation based on landmarks of
|
| 1442 |
+
the onset frame and landmarks of the reference image. The
|
| 1443 |
+
reason why we did not use landmarks of all frames in the
|
| 1444 |
+
ME video is to avoid errors introduced by the calculation of
|
| 1445 |
+
landmarks and transformations having a significant impact
|
| 1446 |
+
on real MEs. Finally, the transformation was operated for
|
| 1447 |
+
each frame to align the faces. Besides, some landmarks are
|
| 1448 |
+
located in regions where MEs may appear, which may not
|
| 1449 |
+
be stable enough for alignment. Thus, we excluded such
|
| 1450 |
+
landmarks when performing the alignment.
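To make the procedure concrete, the following is a minimal sketch of onset-frame-based alignment in Python with OpenCV. Here extract_landmarks stands in for the SAN detector [49], ref_landmarks for the reference-image landmarks, and STABLE_IDS is a hypothetical choice of the landmark subset kept for alignment (the text above only states that landmarks in ME-prone regions are excluded).

import cv2
import numpy as np

STABLE_IDS = np.arange(0, 27)  # illustrative subset, e.g. contour and brow points

def align_clip(frames, ref_landmarks, extract_landmarks):
    # Estimate one Procrustes-style affine fit from the onset frame only,
    # then warp every frame with it, so the real ME motion is left untouched.
    onset_lm = extract_landmarks(frames[0]).astype(np.float32)
    M, _ = cv2.estimateAffinePartial2D(onset_lm[STABLE_IDS],
                                       ref_landmarks[STABLE_IDS].astype(np.float32))
    h, w = frames[0].shape[:2]
    return [cv2.warpAffine(f, M, (w, h)) for f in frames]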
4.2.2 Face Cropping
Since the movement of MEs is mainly in the facial area, face cropping is also a necessary step to eliminate the bias caused by different backgrounds. After face alignment, we chose RetinaFace [51] to crop the faces. For reasons similar to face alignment, face cropping was based on the onset frame instead of each frame of a sample.
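A sketch of the corresponding cropping step is shown below; detect_face stands in for a RetinaFace [51] detector returning one (x1, y1, x2, y2) face box in pixels, and the box computed on the onset frame is reused for every frame of the clip.

def crop_clip(frames, detect_face):
    # One detection on the onset frame; the same box crops the whole clip.
    x1, y1, x2, y2 = detect_face(frames[0])
    return [f[max(y1, 0):y2, max(x1, 0):x2] for f in frames]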
4.2.3 ME Key-Frame Sequence Sampling
Different ME videos have different lengths, while deep learning models usually require a fixed input size, which is shorter than the ME sample lengths. Before inputting them into the model, we need to normalize the temporal length of all ME videos. In general, video classification models adopt uniform sampling to unify the video length. However, this processing strategy is too coarse-grained for recognizing MEs with local and subtle movements. Following previous studies [12], [44] and to be compatible with popular video classification models, this work extracts 16 key-frames from each ME video based on the three annotated ME key-frames (i.e., onset frame, apex frame, and offset frame) and the temporal adaptive sampling strategy [44], as sketched below.
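The sketch below shows one plausible reading of this strategy (the exact allocation rule in [44] may differ): the 16 indices are split between the onset-to-apex and apex-to-offset segments in proportion to their lengths, so sampled frames concentrate where the expression actually evolves.

import numpy as np

def adaptive_keyframes(onset, apex, offset, n_frames=16):
    rise = max(apex - onset, 1)   # onset -> apex segment length
    fall = max(offset - apex, 1)  # apex -> offset segment length
    n_rise = min(max(1, round(n_frames * rise / (rise + fall))), n_frames - 1)
    idx = np.concatenate([
        np.linspace(onset, apex, n_rise, endpoint=False),  # rising phase
        np.linspace(apex, offset, n_frames - n_rise),      # falling phase, keeps apex
    ])
    return np.round(idx).astype(int)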
4.3 Evaluation Protocols and Metrics
Due to the small sample size of previous datasets such as CASME II [14], SAMM [15], and SMIC [13], most MER studies adopted the leave-one-subject-out strategy when evaluating on them. Nevertheless, considering that the number of ME clips in DFME is relatively large, this paper uses a simpler and more efficient 10-fold cross-validation strategy. For each fold, 10% of the data were sampled as the test set and the remaining 90% as the training set. In addition, three commonly used ME classification indicators, namely Accuracy, Unweighted F1-Score and Unweighted Average Recall, were used to evaluate the MER performance. Specifically, before calculating them, we need to obtain the True Positive ($TP_i$), False Positive ($FP_i$), and False Negative ($FN_i$) counts for each class $i$ ($K$ classes in total, and $K = 7$ in DFME). In the end, we took the average results of the ten experiments as the final result.
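A minimal sketch of this protocol is given below; the random seed is illustrative, and the fold sizes follow from splitting the 7,275 labeled clips of Section 4.1 into ten parts.

import numpy as np
from sklearn.model_selection import KFold

sample_ids = np.arange(7275)  # the labeled DFME clips of Sec. 4.1
kf = KFold(n_splits=10, shuffle=True, random_state=0)  # seed is illustrative
for fold, (train_idx, test_idx) in enumerate(kf.split(sample_ids)):
    # each fold: train on the 90% split, evaluate ACC/UF1/UAR on the 10% split,
    # then average the three metrics over the ten folds for the final result
    print(fold, len(train_idx), len(test_idx))  # -> 6547-6548 train / 727-728 test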
4.3.1 Accuracy (ACC)
Accuracy is one of the most common metrics, which evaluates the overall performance of the recognition method on the dataset. It is calculated as follows:

$$ACC = \frac{\sum_{i=1}^{K} TP_i}{\sum_{i=1}^{K} N_i}, \quad (8)$$

where $N_i$ is the number of samples of the $i$-th class.
4.3.2 Unweighted F1-score (UF1)
Unweighted F1-score (UF1), also known as macro-averaged F1-score, is defined as shown below:

$$UF1 = \frac{1}{K} \sum_{i=1}^{K} UF1_i, \quad (9)$$

where we have:

$$UF1_i = \frac{2 \cdot TP_i}{2 \cdot TP_i + FP_i + FN_i}. \quad (10)$$

Class imbalance is an intractable problem in the MER task, so introducing UF1 as an evaluation metric can better measure a method's performance across all classes rather than only in the major classes.

4.3.3 Unweighted Average Recall (UAR)
Unweighted Average Recall (UAR) is also a more reasonable metric than accuracy in the case of class imbalance:

$$UAR = \frac{1}{K} \sum_{i=1}^{K} \frac{TP_i}{N_i}. \quad (11)$$

Both UF1 and UAR can effectively evaluate whether MER methods give correct predictions in all classes.
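As a compact reference, the sketch below computes all three metrics of Eqs. (8)-(11) from a K x K confusion matrix C, where C[i, j] counts class-i samples predicted as class j (K = 7 for DFME; classes absent from a test split would need a zero-division guard).

import numpy as np

def mer_metrics(C):
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)           # true positives per class
    fp = C.sum(axis=0) - tp   # false positives per class
    fn = C.sum(axis=1) - tp   # false negatives per class
    n = C.sum(axis=1)         # per-class sample counts N_i
    acc = tp.sum() / C.sum()                    # Eq. (8)
    uf1 = np.mean(2 * tp / (2 * tp + fp + fn))  # Eqs. (9)-(10)
    uar = np.mean(tp / n)                       # Eq. (11)
    return acc, uf1, uar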
4.4 Evaluation Baseline Models
Although spatiotemporal convolution models with deeper layers and more parameters have achieved impressive performance in video classification tasks, previous MER studies rarely used such heavily parameterized models due to the scarcity of ME data. In fact, both the temporal and the spatial dimension contain unique features of MEs, and MER should take both into account. To verify the feasibility of applying large 3D models to our large-scale dataset and to provide a reference for the backbone selection of MER methods based on extensive data, we selected the following standard backbone networks based on 3D convolution architectures for the validation experiments.

4.4.1 3D-ResNet (R3D)
Hara et al. proposed 3D-ResNet (R3D) [52] for tasks such as video classification and recognition. Since then, R3D has often been used as the backbone in approaches to video-related tasks. The basic idea of this model is to replace the 2D convolutional kernels with spatiotemporal 3D kernels following the 2D-ResNet [29] network structure.
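R3D variants ship with common libraries, so wiring one up for seven-way MER is short; the sketch below uses torchvision's 18-layer variant, which is an assumption here, since the text does not state the exact depth used.

import torch
from torchvision.models.video import r3d_18

model = r3d_18(num_classes=7)           # 7 DFME emotion classes
clip = torch.randn(2, 3, 16, 224, 224)  # (batch, channels, frames, H, W)
logits = model(clip)                    # -> shape (2, 7)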
4.4.2 Pseudo-3D ResNet (P3D)
Pseudo-3D ResNet (P3D) [53] is another 3D backbone that has achieved good results in video tasks and can be considered an improved version of R3D. The key point of this model is the simulation of the 3×3×3 convolution filter by a 1×3×3 spatial-domain convolution filter and a 3×1×1 temporal-domain convolution filter, hence the name Pseudo-3D ResNet. This factorization controls the model size and improves training efficiency and experimental performance.
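The factorization reads as two stacked convolutions; below is a sketch of the serial variant only ([53] also defines parallel and mixed block layouts).

import torch.nn as nn

def pseudo3d_block(c_in, c_out):
    # 1x3x3 spatial filtering followed by 3x1x1 temporal filtering,
    # approximating a full 3x3x3 kernel with fewer parameters.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
    )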
4.4.3 3D-DenseNet (D3D)
DenseNet [54] has achieved excellent performance in image tasks. It expanded the residual connection of ResNet: all layers in DenseNet connect directly with each other. 3D-DenseNet (D3D) has also been widely used in the video field. In the field of MER, Cai et al. [55] proposed a 3D-DenseNet-based method.

4.4.4 Inflated 3D ConvNet (I3D)
Inflated 3D ConvNet (I3D) [56] is based on inflating a 2D ConvNet. Its model size increased significantly compared to the 2D model, and therefore so did its data requirements. For this reason, the authors simultaneously published the large-scale video dataset Kinetics [56]. The results on Kinetics demonstrate the excellent performance of I3D when the amount of data is sufficient.
4.5 Evaluation Implementation Settings
Our MER experiments were all conducted on 2 NVIDIA GeForce RTX 3090 GPUs or a single NVIDIA A100-PCIE-40GB GPU. Following the original settings, the length of the ME clips for all models was 16 frames, and for R3D, P3D, D3D and I3D, the sizes of each input image were 224×224, 160×160, 224×224 and 224×224, respectively.

During training, cross-entropy loss and stochastic gradient descent (SGD) with a momentum of 0.9 were used to optimize the model parameters, and the batch size was set to 32 for all four models. For R3D, P3D, D3D, and I3D, the initial learning rates were set to 0.005, 0.01, 0.05, and 0.005, respectively, and the learning rates were divided by 10 every 10 epochs.
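In PyTorch terms, this recipe amounts to the following sketch (reusing the model variable from the R3D example above; lr=0.005 applies to R3D and I3D, with 0.01 and 0.05 for P3D and D3D respectively).

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()
# per epoch: iterate over batches of 32 clips, then call scheduler.step(),
# which divides the learning rate by 10 every 10 epochs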
4.6 Evaluation Baseline Results
To demonstrate the effectiveness of our DFME dataset for automatic MER tasks, we conducted a comprehensive MER experiment based on the above four baseline models. The evaluation baseline results are shown in Table 7, and the recognition confusion matrix of each baseline model is shown in Figure 5.

Fig. 5: Confusion matrices of the R3D, P3D, D3D and I3D baseline models. [Four panels, (a) R3D, (b) P3D, (c) D3D and (d) I3D, each a 7×7 true-label × predicted-label matrix over anger, contempt, disgust, fear, happiness, sadness and surprise; the per-cell percentages are omitted here.]
From Table 7, we can easily find that the I3D model achieved the best performance among the four backbone models, with an average accuracy of 55.24%, an average UF1 of 0.4576 and an average UAR of 0.4526; its accuracy is higher than the 47% achieved by the naked eye [57]. Besides, the other three models were approximately as accurate as the naked eye on DFME. These experimental results demonstrate the reliability of DFME and provide a reference for the selection of backbone models in future works. Meanwhile, by observing the recognition confusion matrices shown in Figure 5, we also find that all baseline models present the same phenomenon: they are more inclined to predict the categories with more samples. Obviously, this is mainly caused by the class imbalance problem in DFME. Therefore, how to learn more distinguishable spatiotemporal ME features from class-imbalanced ME data is a vital exploration direction for MER. Besides, the confusion matrices in Figure 5 illustrate that, for all four backbone models, the disgust and fear samples are the most difficult to distinguish. This result is consistent with the statistics of the AU frequencies in Table 4: in both disgust and fear samples, the most frequent AUs are AU4 and AU7, and AU10, AU14, and AU24 are also found in both classes of samples.
TABLE 7: ME recognition performance of the various baseline models

Models      ACC      UF1     UAR
R3D [52]    46.54%   0.3817  0.3827
P3D [53]    45.77%   0.3830  0.3801
D3D [55]    52.26%   0.4070  0.4107
I3D [56]    55.24%   0.4576  0.4526
4.7 Evaluation Discussion
This section focuses on two key problems that particularly need to be considered when using our DFME for MER: the class imbalance problem and the choice of key-frame sequence sampling strategy.

4.7.1 Class Imbalance in DFME
Owing to individual differences among subjects and the different degrees to which each category of ME can be induced, a collected spontaneous ME dataset can hardly avoid the problem of class imbalance. This is directly reflected in the three previous datasets widely used in MER, SMIC, CASME II and SAMM, whose ratios of the largest category to the smallest are 1.63, 3.52 and 6.13 [58], respectively. Inevitably, the class imbalance problem still exists in our DFME dataset.

The statistics of the emotion categories in DFME are shown in Table 3, from which we can find that the number of disgust samples is the largest among all emotion categories, accounting for about 1/3 of the total, while the negative samples (including disgust, fear, sadness, anger and contempt) account for about 2/3. Moreover, the confusion matrices in Figure 5 indicate the negative impact of class imbalance on the models: all four backbone models tended to predict samples as the disgust class more than the others.

To solve the class imbalance problem, introducing a class rebalancing strategy is an effective solution. In general, class rebalancing methods can be roughly divided into two major categories: resampling and cost-sensitive reweighting.
TABLE 8: MER performance with and without resampling.

Model  Resampling¹  ACC      UF1     UAR
R3D    w/o          46.54%   0.3817  0.3827
R3D    w            47.05%   0.3823  0.3659
P3D    w/o          45.77%   0.3830  0.3801
P3D    w            42.02%   0.3949  0.4078
D3D    w/o          52.26%   0.4070  0.4107
D3D    w            48.37%   0.4489  0.4656
I3D    w/o          55.24%   0.4576  0.4526
I3D    w            53.91%   0.4902  0.4924
¹ w/o: without resampling; w: with resampling
Resampling is one of the most widely used class rebalancing methods, and uniform resampling is a fairly common resampling strategy; it is also the one used in our experiments. Its main idea is to select each class of samples with equal probability when training models, rather than sampling all samples uniformly, as sketched below.
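In PyTorch this can be realized with a WeightedRandomSampler whose per-sample weight is the inverse of the sample's class size; the labels array below is illustrative, and in practice it holds one class id per DFME training clip.

import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

labels = np.array([0, 0, 0, 1, 2, 2])        # illustrative class ids
weights = 1.0 / np.bincount(labels)[labels]  # weight = 1 / size of the sample's class
sampler = WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                num_samples=len(labels), replacement=True)
# DataLoader(train_set, batch_size=32, sampler=sampler) then draws each
# class with (roughly) equal probability.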
Table 8 and Figure 6 show the comparison of the results with and without uniform resampling. The resampling strategy improved UAR and UF1 on the three models other than R3D, but the accuracy decreased. With the introduction of the uniform resampling strategy, the models could better learn the features of the minor classes, but at the cost of weakening their ability to predict the major classes correctly. How to reduce this information loss on the major classes in MER is a problem that needs to be addressed in future works.

Fig. 6: Comparison of MER results with and without resampling. [Three bar charts, (a) ACC, (b) UF1 and (c) UAR, comparing the four baseline models with and without resampling; the values are those of Table 8.]
Reweighting approaches attempt to rebalance the different classes by reweighting their losses during training. Class-Balanced Loss (CBLoss) [59] is a representative reweighting loss, which is simple and effective and therefore used extensively in different tasks. CBLoss proposed the concept of the effective number of samples to estimate the actual impact of the samples of each class on the model. It can also be combined with other losses, including Focal Loss [60], which reweights samples in different classes according to how difficult they are to predict; this further enhances the adaptability of CBLoss to different domains. The losses we calculated in our experiments are shown in Table 10.
The results of CBLoss are shown in Table 9. Similar to uniform resampling, CBLoss also improved the UAR and UF1 for all four models at the cost of ACC in our experiments. This result demonstrates that CBLoss is compatible with various models and suffers from problems similar to those of resampling. Besides, CBLoss can easily be used for different tasks with different models, but it should be carefully fine-tuned under various conditions to achieve better results. In particular, the choice of β, which controls the relationship between the effective number and the actual number of samples, may need further study.
TABLE 9: MER performance with different losses

Model  Loss                  ACC      UF1     UAR
R3D    Cross Entropy Loss    46.54%   0.3817  0.3827
R3D    Class-Balanced Loss   46.61%   0.3951  0.3914
P3D    Cross Entropy Loss    45.77%   0.3830  0.3801
P3D    Class-Balanced Loss   43.23%   0.3921  0.3955
D3D    Cross Entropy Loss    52.26%   0.4070  0.4107
D3D    Class-Balanced Loss   48.25%   0.4219  0.4302
I3D    Cross Entropy Loss    55.24%   0.4576  0.4526
I3D    Class-Balanced Loss   54.56%   0.4789  0.4777

TABLE 10: Cost-sensitive reweighting losses. In this table, $p_y$ and $n_y$ are the softmax probability and the sample number of class $y$, and $\beta$ is the hyperparameter of the Class-Balanced Loss ($\beta = 0.999$ in our experiments).

Loss                      Equation
Cross Entropy Loss        $L_{ce} = -\log(p_y)$
Class-Balanced Loss [59]  $L_{cb} = -\frac{1-\beta}{1-\beta^{n_y}} \log(p_y)$
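One simple way to realize the Class-Balanced Loss of Table 10 is as per-class weights on the standard cross entropy; the sketch below uses β = 0.999 and the seven labeled class sizes from Figure 4, and the weight normalization is a common convention rather than part of [59]'s definition.

import numpy as np
import torch

def cb_weights(class_counts, beta=0.999):
    eff_num = (1.0 - np.power(beta, class_counts)) / (1.0 - beta)  # effective numbers
    w = 1.0 / eff_num  # weight_y proportional to (1 - beta) / (1 - beta^{n_y})
    return torch.tensor(w / w.sum() * len(class_counts), dtype=torch.float32)

counts = np.array([2528, 1208, 992, 892, 635, 619, 401])  # DFME class sizes (Fig. 4)
criterion = torch.nn.CrossEntropyLoss(weight=cb_weights(counts))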
4.8 ME Key-Frame Sequence Sampling Strategies
The key-frame sequence is a concise description of the original video, which generally contains the key information about its content. How to sample an effective ME key-frame sequence from the raw video is therefore also an important factor for the accurate recognition of MEs.
Video-related recognition tasks usually adopt uniform sampling to obtain a fixed-length key-frame sequence as the model input, but the instantaneously changing ME movements are often not uniformly distributed in spatio-temporal space. Previous studies [12], [44] have shown the superiority of key-frame temporal adaptive sampling based on the three key moments of an ME video, namely onset, apex and offset. Therefore, we hereby compare and analyze the recognition performance of these two sampling strategies (i.e., uniform sampling and temporal adaptive sampling) on DFME using the baseline models.
TABLE 11: Comparison of MER performance with different key-frame sequence sampling strategies.

Model  Sampling¹  ACC      UF1     UAR
R3D    adaptive   46.54%   0.3817  0.3827
R3D    uniform    46.49%   0.3710  0.3715
P3D    adaptive   45.77%   0.3830  0.3801
P3D    uniform    45.31%   0.3671  0.3656
D3D    adaptive   52.26%   0.4070  0.4107
D3D    uniform    52.62%   0.4124  0.4203
I3D    adaptive   55.24%   0.4576  0.4526
I3D    uniform    55.21%   0.4621  0.4576
¹ adaptive: temporal adaptive sampling in [44]; uniform: uniform sampling
Table 11 and Figure 7 show the recognition performance of uniform sampling and temporal adaptive sampling [44]. The temporal adaptive sampling strategy achieved better results on the R3D and P3D models while performing worse on D3D; for I3D, the recognition performance of the two sampling strategies is comparable. This result suggests that different baseline models may require different sampling approaches.
Fig. 7: Comparison of MER results of adaptive key-frame sampling and uniform key-frame sampling. [Three bar charts, (a) ACC, (b) UF1 and (c) UAR, comparing the four baseline models under the two sampling strategies; the values are those of Table 11.]
5 CONCLUSION AND FUTURE WORK
In this work, we focused on solving the problem of the lack of abundant spontaneous ME data for MER. To this end, we built a new ME dataset called DFME containing 7,526 ME videos across multiple frame rates; to the best of our knowledge, DFME currently has the largest ME sample size. Furthermore, to verify the feasibility and validity of the DFME dataset for the MER task, we reproduced four spatiotemporal visual feature learning models to carry out the MER task on DFME, objectively verifying the reliability of the data quality and providing a benchmark for subsequent MER studies. In particular, we explored and analyzed two key problems in using DFME for MER, class imbalance and key-frame sequence sampling, so as to provide directions for future MER studies using DFME.

In the future, we will strive to expand the DFME dataset to provide more abundant ME data for automatic ME analysis research, including the collection of multimodal ME data in multiple natural scenes. On this basis, we will also study highly accurate and robust MER models, such as self-supervised MER combining more samples with uncertain labels, and apply them to real-world scenes.
ACKNOWLEDGMENTS
This work received a great deal of guidance and help from the teachers of the Micro-expression Laboratory of the Institute of Psychology, Chinese Academy of Sciences. We would like to express our special thanks to them.
REFERENCES
[1] A. Mehrabian, "Communication without words," in Communication Theory. Routledge, 2017, pp. 193–200.
[2] E. A. Haggard and K. S. Isaacs, "Micromomentary facial expressions as indicators of ego mechanisms in psychotherapy," in Methods of Research in Psychotherapy. Springer, 1966, pp. 154–165.
[3] P. Ekman and W. V. Friesen, "Nonverbal leakage and clues to deception," Psychiatry, vol. 32, no. 1, pp. 88–106, 1969.
[4] S. Porter and L. Ten Brinke, "Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions," Psychological Science, vol. 19, no. 5, pp. 508–514, 2008.
[5] P. Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (revised edition). WW Norton & Company, 2009.
[6] S. Weinberger, "Intent to deceive? Can the science of deception detection help to catch terrorists? Sharon Weinberger takes a close look at the evidence for it," Nature, vol. 465, no. 7297, pp. 412–416, 2010.
[7] L. Hunter, L. Roland, and A. Ferozpuri, "Emotional expression processing and depressive symptomatology: Eye-tracking reveals differential importance of lower and middle facial areas of interest," Depression Research and Treatment, vol. 2020, 2020.
[8] T. Pfister, X. Li, G. Zhao, and M. Pietikäinen, "Recognising spontaneous facial micro-expressions," in 2011 International Conference on Computer Vision. IEEE, 2011, pp. 1449–1456.
[9] G. Zhao and M. Pietikainen, "Dynamic texture recognition using local binary patterns with an application to facial expressions," IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 6, pp. 915–928, 2007.
[10] Y.-J. Liu, J.-K. Zhang, W.-J. Yan, S.-J. Wang, G. Zhao, and X. Fu, "A main directional mean optical flow feature for spontaneous micro-expression recognition," IEEE Transactions on Affective Computing, vol. 7, no. 4, pp. 299–310, 2015.
[11] S.-J. Wang, B.-J. Li, Y.-J. Liu, W.-J. Yan, X. Ou, X. Huang, F. Xu, and X. Fu, "Micro-expression recognition with small sample size by transferring long-term convolutional neural network," Neurocomputing, vol. 312, pp. 251–262, 2018.
[12] S. Zhao, H. Tao, Y. Zhang, T. Xu, K. Zhang, Z. Hao, and E. Chen, "A two-stage 3d cnn based learning method for spontaneous micro-expression recognition," Neurocomputing, vol. 448, pp. 276–289, 2021.
[13] X. Li, T. Pfister, X. Huang, G. Zhao, and M. Pietikäinen, "A spontaneous micro-expression database: Inducement, collection and baseline," in 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2013, pp. 1–6.
[14] W.-J. Yan, X. Li, S.-J. Wang, G. Zhao, Y.-J. Liu, Y.-H. Chen, and X. Fu, "CASME II: An improved spontaneous micro-expression database and the baseline evaluation," PLoS ONE, vol. 9, no. 1, p. e86041, 2014.
[15] A. K. Davison, C. Lansley, N. Costen, K. Tan, and M. H. Yap, "SAMM: A spontaneous micro-facial movement dataset," IEEE Transactions on Affective Computing, vol. 9, no. 1, pp. 116–129, 2016.
[16] X. Ben, Y. Ren, J. Zhang, S.-J. Wang, K. Kpalma, W. Meng, and Y.-J. Liu, "Video-based facial micro-expression analysis: A survey of datasets, features and algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[17] J. Li, Z. Dong, S. Lu, S.-J. Wang, W.-J. Yan, Y. Ma, Y. Liu, C. Huang, and X. Fu, "CAS(ME)3: A third generation facial spontaneous micro-expression database with depth information and high ecological validity," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[18] M. Shreve, S. Godavarthy, D. Goldgof, and S. Sarkar, "Macro- and micro-expression spotting in long videos using spatio-temporal strain," in 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2011, pp. 51–56.
[19] S. Polikovsky, Y. Kameda, and Y. Ohta, "Facial micro-expressions recognition using high speed camera and 3d-gradient descriptor," 2009.
[20] W.-J. Yan, Q. Wu, Y.-J. Liu, S.-J. Wang, and X. Fu, "CASME database: A dataset of spontaneous micro-expressions collected from neutralized faces," in 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2013, pp. 1–7.
[21] F. Qu, S.-J. Wang, W.-J. Yan, H. Li, S. Wu, and X. Fu, "CAS(ME)2: A database for spontaneous macro-expression and micro-expression spotting and recognition," IEEE Transactions on Affective Computing, vol. 9, no. 4, pp. 424–436, 2017.
[22] X. Li, S. Cheng, Y. Li, M. Behzad, J. Shen, S. Zafeiriou, M. Pantic, and G. Zhao, "4DME: A spontaneous 4d micro-expression dataset with multimodalities," IEEE Transactions on Affective Computing, 2022.
[23] P. Husák, J. Cech, and J. Matas, "Spotting facial micro-expressions 'in the wild'," in 22nd Computer Vision Winter Workshop (Retz), 2017, pp. 1–9.
[24] R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal, "Histograms of oriented optical flow and Binet-Cauchy kernels on nonlinear dynamical systems for the recognition of human actions," in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 1932–1939.
[25] X. Huang, S.-J. Wang, X. Liu, G. Zhao, X. Feng, and M. Pietikäinen, "Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition," IEEE Transactions on Affective Computing, vol. 10, no. 1, pp. 32–47, 2017.
[26] X. Li, X. Hong, A. Moilanen, X. Huang, T. Pfister, G. Zhao, and M. Pietikäinen, "Towards reading hidden emotions: A comparative study of spontaneous micro-expression spotting and recognition methods," IEEE Transactions on Affective Computing, vol. 9, no. 4, pp. 563–577, 2017.
[27] F. Xu, J. Zhang, and J. Z. Wang, "Microexpression identification and categorization using a facial dynamics map," IEEE Transactions on Affective Computing, vol. 8, no. 2, pp. 254–267, 2017.
[28] M. Peng, Z. Wu, Z. Zhang, and T. Chen, "From macro to micro expression recognition: Deep learning on small datasets using transfer learning," in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, 2018, pp. 657–661.
[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[30] N. Van Quang, J. Chun, and T. Tokuyama, "CapsuleNet for micro-expression recognition," in 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019). IEEE, 2019, pp. 1–7.
[31] B. Xia, W. Wang, S. Wang, and E. Chen, "Learning from macro-expression: A micro-expression recognition framework," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2936–2944.
[32] Y. Li, X. Huang, and G. Zhao, "Joint local and global information learning with single apex frame detection for micro-expression recognition," IEEE Transactions on Image Processing, vol. 30, pp. 249–263, 2020.
[33] S.-T. Liong, J. See, K. Wong, and R. C.-W. Phan, "Less is more: Micro-expression recognition from video using apex frame," Signal Processing: Image Communication, vol. 62, pp. 82–92, 2018.
[34] Y. Liu, H. Du, L. Zheng, and T. Gedeon, "A neural micro-expression recognizer," in 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019). IEEE, 2019, pp. 1–4.
[35] L. Zhou, Q. Mao, X. Huang, F. Zhang, and Z. Zhang, "Feature refinement: An expression-specific feature learning and fusion method for micro-expression recognition," Pattern Recognition, vol. 122, p. 108275, 2022.
[36] W. Gong, Y. Zhang, W. Wang, P. Cheng, and J. Gonzàlez, "Meta-MMFNet: Meta-learning based multi-model fusion network for micro-expression recognition," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2022.
[37] D. H. Kim, W. J. Baddar, and Y. M. Ro, "Micro-expression recognition with expression-state constrained spatio-temporal feature representations," in Proceedings of the 24th ACM International Conference on Multimedia. ACM, 2016, pp. 382–386.
[38] H.-Q. Khor, J. See, R. C. W. Phan, and W. Lin, "Enriched long-term recurrent convolutional network for facial micro-expression recognition," in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). IEEE, 2018, pp. 667–674.
[39] S. Ji, W. Xu, M. Yang, and K. Yu, "3D convolutional neural networks for human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 221–231, 2012.
[40] M. Peng, C. Wang, T. Chen, G. Liu, and X. Fu, "Dual temporal scale convolutional neural network for micro-expression recognition," Frontiers in Psychology, vol. 8, p. 1745, 2017.
[41] Y. Wang, H. Ma, X. Xing, and Z. Pan, "Eulerian motion based 3DCNN architecture for facial micro-expression recognition," in International Conference on Multimedia Modeling. Springer, 2020, pp. 266–277.
[42] Z. Xia, X. Hong, X. Gao, X. Feng, and G. Zhao, "Spatiotemporal recurrent convolutional networks for recognizing spontaneous micro-expressions," IEEE Transactions on Multimedia, vol. 22, no. 3, pp. 626–640, 2019.
[43] B. Sun, S. Cao, D. Li, J. He, and L. Yu, "Dynamic micro-expression recognition using knowledge distillation," IEEE Transactions on Affective Computing, 2020.
[44] S. Zhao, H. Tang, S. Liu, Y. Zhang, H. Wang, T. Xu, E. Chen, and C. Guan, "ME-PLAN: A deep prototypical learning with local attention network for dynamic micro-expression recognition," Neural Networks, vol. 153, pp. 427–443, 2022.
[45] H.-X. Xie, L. Lo, H.-H. Shuai, and W.-H. Cheng, "AU-assisted graph attention convolutional network for micro-expression recognition," in Proceedings of the 28th ACM International Conference on Multimedia, 2020.
[46] P. Ekman and W. V. Friesen, "Facial action coding system," Environmental Psychology & Nonverbal Behavior, 1978.
[47] J. L. Fleiss, "Measuring nominal scale agreement among many raters," Psychological Bulletin, vol. 76, no. 5, p. 378, 1971.
[48] X. Jiang, Y. Zong, W. Zheng, C. Tang, W. Xia, C. Lu, and J. Liu, "DFEW: A large-scale database for recognizing dynamic facial expressions in the wild," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2881–2889.
[49] X. Dong, Y. Yan, W. Ouyang, and Y. Yang, "Style aggregated network for facial landmark detection," 2018, pp. 379–388.
[50] J. C. Gower, "Generalized procrustes analysis," Psychometrika, vol. 40, pp. 33–51, 1975.
[51] J. Deng, J. Guo, Y. Zhou, J. Yu, I. Kotsia, and S. Zafeiriou, "RetinaFace: Single-stage dense face localisation in the wild," arXiv preprint arXiv:1905.00641, 2019.
[52] K. Hara, H. Kataoka, and Y. Satoh, "Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet?" in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 6546–6555.
[53] Z. Qiu, T. Yao, and T. Mei, "Learning spatio-temporal representation with pseudo-3D residual networks," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5534–5542.
[54] G. Huang, Z. Liu, and K. Q. Weinberger, "Densely connected convolutional networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269.
[55] L. Cai, H. Li, W. Dong, and H. Fang, "Micro-expression recognition using 3D DenseNet fused squeeze-and-excitation networks," Applied Soft Computing, vol. 119, p. 108594, 2022.
[56] J. Carreira and A. Zisserman, "Quo vadis, action recognition? A new model and the Kinetics dataset," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4724–4733.
[57] M. Frank, M. Herbasz, K. Sinuk, A. Keller, and C. Nolan, "I see how you feel: Training laypeople and professionals to recognize fleeting emotions," in The Annual Meeting of the International Communication Association. Sheraton New York, New York City, 2009, pp. 1–35.
[58] J. See, M. H. Yap, J. Li, X. Hong, and S.-J. Wang, "MEGC 2019 – the second facial micro-expressions grand challenge," in 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), 2019, pp. 1–5.
[59] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. J. Belongie, "Class-balanced loss based on effective number of samples," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9260–9269.
[60] T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2999–3007.
Sirui Zhao is currently working toward the PhD degree with the Department of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests include automatic micro-expression analysis, human-computer interaction (HCI) and affective computing. He has published several papers in refereed conferences and journals, including the ACM Multimedia Conference, IEEE Transactions on Affective Computing, ACM TOMM and Neural Networks.

Huaying Tang received the B.S. degree from the School of Computer Science and Technology, University of Science and Technology of China (USTC), Hefei, China, in 2021. He is currently pursuing the M.S. degree in computer science and technology at USTC. His research interests lie in automatic micro-expression analysis and affective computing.

Xinglong Mao received the B.S. degree from the School of Data Science, University of Science and Technology of China (USTC), Hefei, China. He is currently working toward the M.S. degree at the School of Data Science. His research interests include automatic micro-expression analysis and affective computing. He has published several conference papers, including at the ACM Multimedia Conference.

Shifeng Liu received the B.S. degree from the School of the Gifted Young, University of Science and Technology of China (USTC), Hefei, China. She is currently working toward the M.S. degree at the School of Data Science. Her research interests include automatic micro-expression analysis, human-computer interaction (HCI) and affective computing. She has published several papers in refereed conferences and journals, including the ACM Multimedia Conference and Neural Networks.

Hanqing Tao is currently working toward the Ph.D. degree in the Department of Computer Science and Technology, University of Science and Technology of China (USTC). His research interests include data mining, deep learning, natural language processing and representation learning. He has published several papers in refereed journals and conference proceedings, such as IEEE TKDE, IEEE TAC, AAAI, ICDM and ICME.

Hao Wang received the PhD degree in computer science from USTC. He is currently an associate researcher with the School of Computer Science and Technology, USTC. His main research interests include data mining, representation learning, network embedding and recommender systems. He has published several papers in refereed venues, such as TKDE, TOIS, NeurIPS, and AAAI.

Tong Xu received the Ph.D. degree from the University of Science and Technology of China (USTC), Hefei, China, in 2016. He is currently working as an Associate Professor at the Anhui Province Key Laboratory of Big Data Analysis and Application, USTC. He has authored 50+ journal and conference papers in the fields of social network and social media analysis, including in IEEE TKDE, IEEE TMC, IEEE TMM, KDD, AAAI and ICDM.

Enhong Chen (Senior Member, IEEE) received the PhD degree from USTC. He is a professor and vice dean of the School of Computer Science, USTC. His general area of research includes data mining and machine learning, social network analysis, and recommender systems. He has published more than 100 papers in refereed conferences and journals, including IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Mobile Computing, KDD, ICDM, NeurIPS, and CIKM. He has served on the program committees of numerous conferences, including KDD, ICDM, and SDM. His research is supported by the National Science Foundation for Distinguished Young Scholars of China.
3tAzT4oBgHgl3EQfD_po/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

49AyT4oBgHgl3EQfpPiQ/content/2301.00522v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ebd1f65e76542de74aea043e97551a159a006a3c62ebf1c04ee362c0f2a8a2e8
size 231558

49AyT4oBgHgl3EQfpPiQ/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b4c57b3825d10ba6dc6b867f4056323c9413b134ea31b032d5d942b848943616
size 3473453

49AyT4oBgHgl3EQfpPiQ/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84e4beedbd1afb1f3fc57beb238f4058b0e3a47317821c845670ff9a9e8c367a
size 132227
4dAyT4oBgHgl3EQf2Pkf/content/tmp_files/2301.00746v1.pdf.txt ADDED
@@ -0,0 +1,1615 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
NaQ: Leveraging Narrations as Queries to Supervise Episodic Memory

Santhosh Kumar Ramakrishnan1, Ziad Al-Halah1, Kristen Grauman1,2
1UT Austin, 2Meta AI

Abstract

Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature make it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach, such as gains on long-tail object queries and the ability to perform zero-shot and few-shot NLQ.

1. Introduction
Human memory can fail us in day-to-day things in our visual experience. We misplace objects in the house (where is my passport?), we lose track of what tasks we have or have not done (did I add the salt already?), we forget where we did a given activity (where did I buy tickets last time?), we do not notice the state of an object in our environment (did I leave the garage door open?). First-person or "egocentric" perception on a wearable camera could reduce that cognitive overload and provide us with a superhuman personal episodic memory, by seeing what we see and indexing it in meaningful and easy-to-access ways.

This is the vision of the Natural Language Query (NLQ) task in Ego4D's Episodic Memory benchmark [12]. Given a natural language question and a long egocentric video, the NLQ task requires identifying the precise temporal window in the camera wearer's past video that reveals the answer. See Figure 1.

[Figure 1. Episodic memory with natural language queries (NLQ) aims to search long egocentric videos to identify the temporal "response" window revealing the answer to a free-form question about the camera wearer's past visual experience. Example query: "How many eggs did I break into the bowl?"]

Such functionality could transform the everyday experience of an augmented reality user with always-on AR glasses. It could similarly play a role for a mobile household robot, whom a user may wish to query about its own visual history (have you seen my keys?).

The NLQ challenge has attracted substantial attention in the research community over the last year [18, 19, 31], as have related video-language efforts for question answering [23, 26-30]. The technical challenges are striking. Queries are free-form natural language, response windows are tiny slivers (a few seconds or less) within a long stretch of video, and wearable camera video is notoriously noisy with its quick head motions and limited field of view.

Today's most successful methods embrace the visual-language aspect of the problem. In particular, inspired by the growing success of visual-linguistic embeddings [17, 20, 22, 25, 28], top competitors on NLQ perform large-scale pretraining on ⟨video clip, text description⟩ pairs mined from the Ego4D dataset's provided narrations [18], which are timestamped play-by-play descriptions of the camera-wearer's activity (see Figure 2). The result is a video backbone enhanced by the semantics of grounded language.
[Figure 2. Narration examples. "C" refers to the camera-wearer. Example narrations: "C turns on the tap with her right hand", "C opens a drawer", "C cracks an egg into the bowl", "C opens the third refrigerator door".]
While it is important to have strong video and text representations, the downstream query localization models that search the video for a response are also crucial to NLQ, yet relatively starved for data. This is a direct consequence of the difficulty of annotating a query-response pair (which entails posing a creative question and scrolling the long video to mark the temporal response window) versus the relative ease of narrating a video (which entails pausing the video at regular intervals and writing down what happened). For example, whereas Ego4D has 3,670 hours of data annotated with narrations (more than 3.85M sentences in total), it offers only 227 hours of NLQ query examples, for 19k total text queries. Accordingly, existing methods risk failing to learn the task-specific skills that are poorly represented in training, such as responding to queries about objects in the long tail or performing complex reasoning for queries involving interactions between multiple visual entities.

To address this issue, we introduce Narrations-as-Queries (NaQ), a simple but exceptionally effective data augmentation strategy for NLQ. NaQ is a novel strategy that uses timestamped narrations to expand the supervision available for training query-localization modules within an episodic memory architecture. Our hypothesis is that narrations provide descriptive information that is localizable in long videos, and thus can benefit an episodic memory model when used as training queries. Specifically, we derive ⟨video, language query, temporal window response⟩ annotations from timestamped narrations, and augment the conventional query-response data with these pseudo-queries. Importantly, this allows us to influence the localization module, the workhorse responsible for finding a needle in a haystack, with multimodal data, as opposed to just the video and text encoders.

Empirically, our idea has tremendous impact. Demonstrating NaQ on the Ego4D Episodic Memory benchmark, we find our simple augmentation strategy successfully complements multiple existing state-of-the-art episodic memory methods, achieving sizeable improvements (e.g., 32% to 125% relative jumps in accuracy) across query types, metrics, and methods. Notably, our gains hold even compared to existing methods such as EgoVLP [18] that use the same (or even more) narration annotations as our model, meaning that NaQ's success can be attributed to good modeling, not more data. Moreover, to our knowledge, NaQ yields the very best results to date on the NLQ challenge, strongly outperforming all the challenge winners from Ego4D CVPR'22 and Ego4D ECCV'22 by a substantial margin, and topping the current public leaderboard. Beyond achieving state-of-the-art results, we perform a thorough analysis of the strengths and weaknesses of NaQ, and demonstrate useful properties such as benefits on long-tail object queries as well as zero-shot and few-shot NLQ. We are the first to do so.
2. Related work

Egocentric video understanding. Prior work has developed video datasets and methods for egocentric perception [4, 8, 10, 12, 14]. Egocentric video offers a camera wearer's perspective of their activities over a long time horizon and raises challenging research problems such as human-object interactions [3, 5], activity recognition [14, 33], anticipation [1, 11], episodic memory [12], and video summarization [6, 16]. In this work, we tackle the episodic memory task. We leverage the Ego4D dataset [12], which consists of 3,670 hours of video of daily-life activity captured by 931 camera wearers around the world.

Vision-language pretraining. Vision-Language Pretraining (VLP) methods rely on large-scale video-text datasets [2, 21] to learn transferable representations for video-language tasks such as retrieval [7, 13], question answering [23, 27], and video captioning [15, 32]. VideoBert learns joint video-text embeddings by discretizing video frames into tokens and performing BERT-like pretraining [25]. HERO improves over this with a hierarchical encoding of multi-modal inputs to better capture long-term structure [17]. MIL-NCE learns to match clips with temporally close captions to address video-text misalignment in HowTo100M [20, 21]. While these methods primarily focus on third-person videos, EgoVLP [18] adapts the InfoNCE objective to egocentric settings and uses video-narration annotations from Ego4D [12] to learn video-text backbones for egocentric video understanding tasks. Just-Ask [28] proposes a strategy to generate video question-answering data consisting of (short clips, questions, text answers) from narrated YouTube videos.

While we take inspiration from such methods, our idea is very different. Unlike prior work that learns representations or video-QA systems from short video clips and aligned (possibly weak) text, we learn to temporally localize short text queries in long untrimmed egocentric videos. Whereas Just-Ask's data generation procedure [28] outputs questions with text responses for short video clips, ours outputs temporal windows in long videos. Rather than pretraining a video/text backbone [17, 18, 20, 25], our model injects multimodal supervision to train a query-localization module. Overall, our idea is complementary to prior video-text pretraining efforts, as we will demonstrate in the results.

Episodic memory. The episodic memory benchmark's natural language queries (NLQ) task was first introduced in the Ego4D dataset [12]. In NLQ, the goal is to temporally localize the response to a natural language text question. Existing video-language grounding methods like 2D-TAN [30] and VSLNet [29] have been adapted to perform this task. Our goal is to improve such methods via large-scale data augmentation with narration-based queries. More recently, ReLER [19] achieved the state-of-the-art for NLQ by using a multi-scale cross-modal transformer with video-level data augmentation and contrastive losses. Our proposed strategy performs query-level augmentation and is complementary to the video-level data augmentation from [19]. As we will demonstrate in experiments, our approach stacks well when combined with prior NLQ methods [18, 19, 29].
3. Approach

Our key insight is to leverage narrations as an additional data source to improve a model's ability to localize answers in a long video when prompted with a natural language query. To do this, we propose a strategy to convert narrations and their timestamps into episodic memory queries. Our strategy is automatic and simple, which allows us to scale the training data for episodic memory search by two orders of magnitude. Furthermore, we generate the data in a form that is compatible with the manually labeled NLQ annotations, which allows an NLQ model to directly take advantage of this additional data source and achieve significant improvements in performance without any modifications to the model itself.

Next, we define the episodic memory task (Sec. 3.1), then describe our Narrations-as-Queries approach to convert narrations into natural language queries (Sec. 3.2), and finally describe our training strategy (Sec. 3.3).
3.1. Episodic memory with natural language query

The goal of episodic memory is to perform query-driven reasoning about long-form egocentric videos. First introduced in Ego4D [12], it is well-motivated by applications discussed above, such as augmented reality assistants that enable superhuman memory. The NLQ task has attracted significant attention in the research community, with 10+ teams from labs around the world competing on the benchmark over the last year [18, 19, 31], two organized challenges at CVPR'22 and ECCV'22, and an active public leaderboard1.

More formally, given an egocentric video V capturing a camera wearer's past experiences and a natural language query Q in the form of a question, the task requires temporally localizing where the answer can be seen in the video, i.e., a response window R = [t_s, t_e]. For example, the query could be Q = "What vegetables did I put in the soup the last time I made it?", and the model needs to search a given video V to identify the time window [t_s, t_e] that contains the answer, i.e., the type of vegetables in the soup.

A data sample for this task is of the form ⟨video, query, response⟩. The video can be several minutes long, and the response to the query can appear in a time window that is shorter than a second, making this a very challenging task.
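To fix the data layout for what follows, here is a minimal sketch of one such sample in Python; the class and field names are our own choices for illustration and are not part of the benchmark's API.

```python
from dataclasses import dataclass

@dataclass
class NLQSample:
    """Sketch of one <video, query, response> episodic-memory tuple."""
    video_id: str                   # a long egocentric video (minutes)
    query: str                      # free-form natural language question
    response: tuple[float, float]   # (t_s, t_e) in seconds; often very short

# Hypothetical example mirroring the query discussed above.
sample = NLQSample(
    video_id="clip_0042",
    query="What vegetables did I put in the soup the last time I made it?",
    response=(312.4, 318.9),
)
```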
3.2. Narrations-as-Queries

Prior NLQ methods are limited in performance due to the lack of large-scale NLQ annotations of the form ⟨video, query, response⟩. We address this limitation by proposing a method to automatically transform narrations associated with egocentric videos into a form compatible with NLQ. Narrations are free-form sentences describing the current activity performed by the camera-wearer (see Fig. 2). They are timestamped and temporally dense (e.g., there are 13.2 sentences per minute of video on average in Ego4D [12]).

These annotations are substantially cheaper to obtain than NLQ annotations. For narrations, the annotators need to simply describe the activity that is seen in the video; whereas for NLQ, first a meaningful, unambiguous question needs to be formulated, and then the annotator needs to manually search the video back and forth to identify the time window that shows the answer. Hence, narrations can be annotated at a much larger scale compared to NLQ samples (e.g., Ego4D has 3.85M narrations compared to 19K NLQ samples).

Our idea is to leverage this massive data source to aid learning in the NLQ task. We achieve this by first generating a temporal window associated with each narration that approximately captures when the activity described by the narration started and ended. Then, we use these samples (narrations coupled with temporal windows) as additional supervision to train an NLQ localization model to identify where these narrations happen in the video (see Fig. 3). Next, we formally describe our approach in detail.

1. Generating temporal windows for narrations. Each video narration consists of a textual sentence T, and a single timestamp t marking the correspondence to the underlying video (see Fig. 3, left).

1NLQ challenge leaderboard: https://eval.ai/web/challenges/challenge-page/1629/leaderboard/3920
[Figure 3. Narrations-as-Queries: We propose a simple-yet-effective data-augmentation strategy for natural language queries (NLQ). Status-quo NLQ methods train in a supervised fashion on annotated (V: video, Q: query, R: response) tuples, where the response is a (t_s, t_e) temporal window (right). This is severely limiting, since such task-specific data is expensive to obtain and is available only on a small scale. We propose a narrations-as-queries pipeline to tackle this issue (left). Our key idea is to leverage densely annotated video narrations, where each narration Ti for video Vj is a textual description of the camera-wearer's activity at time ti. We propose "temporal response jittering", a technique to convert timestamped narrations into natural language queries with temporal response windows ⟨Vj, Ti, Ri⟩ and obtain the NaQ dataset, which contains 80× more samples than the NLQ dataset. We then train various NLQ models jointly on the NLQ and NaQ datasets to obtain significant gains across query types, architectures, and metrics.]
However, this is incompatible with NLQ task architectures, which require queries and temporal response windows as supervision. To address this, we propose temporal response jittering, a technique to convert narration timestamps to temporal windows conditioned on the video.

Temporal response jittering: Our goal is to convert a narration timestamp $t_i$ from video $V_j$ into a response window $R_i = (t_s, t_e)$. First, we use the "contextual variable-length clip pairing strategy" introduced in EgoVLP [18] to obtain a video-conditioned seed temporal window centered around $t_i$:

$$\bar{R}_i = \left[\, t_i - \tfrac{\beta_i}{2\alpha},\ t_i + \tfrac{\beta_i}{2\alpha} \,\right] \quad (1)$$

where $\beta_i$ captures the average temporal length between consecutive narrations in video $V_j$, and $\alpha$ is the average of all $\beta_i$ across all videos (please see [18] for details). While this offers a good starting point, it fails to address the inherent noise in $\bar{R}_i$ arising from the lack of explicit human annotation. The responses generated are also typically short (less than a second) and do not match the distribution of NLQ response windows, which are 10 seconds long on average. To account for these factors, we further transform $\bar{R}_i = (\bar{t}_s, \bar{t}_e)$ using a randomized expansion and translation of the response window:

$$R_i = \left[\, (\bar{t}_c - \delta t) - s\Delta,\ (\bar{t}_c - \delta t) + s\Delta \,\right] \quad (2)$$

where $\Delta = (\bar{t}_e - \bar{t}_s)/2$ is the half-width of $\bar{R}_i$, $\bar{t}_c = (\bar{t}_s + \bar{t}_e)/2$ is the center of $\bar{R}_i$, $s \sim U[1, S]$ is an expansion factor, and $\delta t \sim U[-T, T]$ is a translation factor. Intuitively, the translation factor $\delta t$ randomly shifts $\bar{R}_i$ to model uncertainty in its estimate, and the scaling factor $s$ randomly expands $\bar{R}_i$ to match the distribution of NLQ response windows. $S$ is a hyperparameter selected through validation, and $T$ is set to $(s - 1)\Delta$ after sampling $s$ to ensure that the seed temporal window $\bar{R}_i$ is contained within $R_i$.
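To make the two-step conversion concrete, the following is a minimal Python sketch of temporal response jittering. The function name, argument layout, and the optional clamping to the video extent are our own illustrative choices; only the window formulas follow Eqs. (1) and (2).

```python
import random

def temporal_response_jitter(t_i, beta_i, alpha, S=2.0, video_len=None):
    """Convert one narration timestamp into a jittered response window.

    t_i: narration timestamp (sec); beta_i: mean gap between consecutive
    narrations in this video; alpha: dataset-wide mean of the betas;
    S: maximum expansion factor (a hyperparameter tuned on validation).
    """
    # Eq. (1): video-conditioned seed window centered on the timestamp.
    half = beta_i / (2.0 * alpha)
    ts_bar, te_bar = t_i - half, t_i + half

    # Eq. (2): randomized expansion and translation of the seed window.
    delta = (te_bar - ts_bar) / 2.0      # half-width of the seed window
    tc_bar = (ts_bar + te_bar) / 2.0     # center of the seed window
    s = random.uniform(1.0, S)           # expansion factor s ~ U[1, S]
    T = (s - 1.0) * delta                # keeps the seed inside the output
    dt = random.uniform(-T, T)           # translation factor dt ~ U[-T, T]
    ts = (tc_bar - dt) - s * delta
    te = (tc_bar - dt) + s * delta

    # Clamping to the video extent is an assumption on our part.
    if video_len is not None:
        ts, te = max(0.0, ts), min(video_len, te)
    return ts, te
```

Note that with T = (s − 1)Δ, the shifted-and-expanded window always contains the seed window, since |δt| + Δ ≤ sΔ holds for any sampled δt.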
Following this strategy, we can extract narrations and their inferred temporal windows for all video clips with available narrations (denoted by $\mathcal{V}$) to obtain a dataset

$$\mathcal{D} = \left\{ (N^v_1, \cdots, N^v_n) \mid \forall v \in \mathcal{V} \right\}, \quad (3)$$

where $N^v_i = \langle T_i, R_i \rangle$ is the transformed sample that consists of a narration and its corresponding response window. We apply this method to the video clips from the train split of the Ego4D Episodic Memory benchmark to create a dataset $\mathcal{D}$ that contains 850k samples of transformed narrations from 4,851 video clips.
2. Generating episodic memory queries. Given the previous dataset of narrations with associated temporal windows $\mathcal{D}$, we now convert these to a dataset of NLQ queries. Specifically, given a video $V_j$, we sample a narration $N_i$ from $V_j$ and obtain the task input $X = (V_j, T_i)$, where $T_i$ is the narration text, and the label $Y = R_i$, which represents the start and end times for the narration as defined in Eq. (2). In other words, the narration $T_i$ becomes the query2 that effectively asks the model to locate in $V_j$ where the activity described by $T_i$ can be found, i.e., the response window $(t_i^{start}, t_i^{end})$. This dataset of $(X, Y)$ pairs is our Narrations-as-Queries (NaQ) dataset. Next, we incorporate this dataset into the NLQ training pipeline as a form of data augmentation.

2We found that simply using the narration text as the query works well. We expect this is due to the use of pretrained BERT query encoders in NLQ models [18, 19, 29], which can effectively adapt to the difference between using a "narrated text" vs. a "natural language question" as the query. However, it would be interesting to study techniques to transform narrations into questions [28], which we reserve for future work.
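Below is a sketch of how the NaQ tuples could be assembled at scale, reusing the temporal_response_jitter sketch from above. The dictionary-based data layout is our assumption for illustration; the actual Ego4D annotation schema differs in its details.

```python
def build_naq_dataset(videos, S=2.0):
    """videos: list of dicts with 'id', 'duration' (sec), and 'narrations',
    where each narration is a (timestamp, text) pair. Returns NaQ tuples
    in the same <video, query, response> schema as NLQ annotations."""
    # Per-video beta: mean gap between consecutive narrations; alpha is
    # the mean of those gaps over the whole dataset (as in EgoVLP [18]).
    gaps, per_video_beta = [], {}
    for v in videos:
        ts = sorted(t for t, _ in v["narrations"])
        v_gaps = [b - a for a, b in zip(ts, ts[1:])]
        if v_gaps:
            per_video_beta[v["id"]] = sum(v_gaps) / len(v_gaps)
            gaps.extend(v_gaps)
    alpha = sum(gaps) / len(gaps)

    naq = []
    for v in videos:
        beta = per_video_beta.get(v["id"], alpha)
        for t_i, text in v["narrations"]:
            ts, te = temporal_response_jitter(t_i, beta, alpha, S,
                                              v["duration"])
            naq.append({"video_id": v["id"],
                        "query": text,           # narration used verbatim
                        "response": (ts, te)})   # jittered temporal window
    return naq
```

Because the output records share the NLQ schema, they can simply be concatenated with the manually annotated NLQ training set.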
3.3. Narrations-as-Queries training for NLQ

Our NaQ is model-agnostic: it stands to benefit any NLQ model out of the box without any model-specific modifications, due to the direct compatibility of NaQ with the NLQ data. We demonstrate the universal advantage of NaQ by benchmarking several baselines with NaQ in experiments.

Specifically, for a given NLQ model M, we train it with NaQ in two stages. Let us denote the NaQ dataset as DNaQ and the NLQ train dataset as DNLQ. First, we jointly train M with both DNaQ and DNLQ, effectively treating NaQ as a query augmentation strategy. Since NaQ expands the training dataset significantly (by two orders of magnitude in size), we rely on large-batch training with a batch size of 2048 and an appropriately large initial learning rate of 0.001 on 4-8 A40 GPUs. We train in this large-batch setting for 200 epochs, with early stopping when the validation performance saturates. We then finetune the model on DNLQ with the default small-batch training used for M, and perform a grid search to determine the learning rate based on M's performance on the validation split.
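The two-stage schedule can be summarized as follows. This is a sketch: the finetuning batch size and learning-rate grid are illustrative stand-ins, since the text only specifies the joint-stage settings and that the finetuning rate is found by grid search.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage of the NaQ training schedule (Sec. 3.3)."""
    data: str
    batch_size: int
    lr: float
    max_epochs: int
    note: str

def naq_schedule(lr_grid=(1e-4, 3e-5, 1e-5)):
    # Stage 1: joint large-batch training on NLQ + NaQ.
    stages = [Stage("NLQ + NaQ (joint)", 2048, 1e-3, 200,
                    "early stop when validation performance saturates")]
    # Stage 2: small-batch finetuning on NLQ only; the learning rate is
    # selected by grid search on the validation split (grid is assumed).
    stages += [Stage("NLQ only (finetune)", 32, lr, 30,
                     "keep the checkpoint with best validation recall")
               for lr in lr_grid]
    return stages

for s in naq_schedule():
    print(s)
```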
4. Experiments

4.1. Experimental setup

We evaluate our approach on the NLQ task from the episodic memory benchmark of Ego4D [12]. This benchmark has gained significant interest and has been the subject of two Ego4D challenges held at CVPR 2022 and ECCV 2022. The NLQ task contains 11.3k/3.9k/4.0k queries annotated over 136/45/46 hours of train/val/test videos. Each video clip is 8.2 minutes on average, and the ground-truth query response is 10.5 seconds on average in the train dataset. That means the response window occupies only 2% of the input video on average.
Evaluation metrics. We measure performance on NLQ using metrics from the video-language grounding literature, as adapted for NLQ in [12]. We report the recall@k, IoU=m metric, where k = {1, 5} and m = {0.3, 0.5}. This measures the percentage of queries for which at least one of the top-k predicted candidates has an intersection-over-union (IoU) of at least m with the ground-truth window.
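For concreteness, here is a small, self-contained sketch of this metric as we read it from the text; the function names are ours.

```python
def iou_1d(a, b):
    """Temporal IoU of two (start, end) windows, in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(predictions, ground_truths, k=1, m=0.5):
    """predictions: per-query ranked lists of candidate windows;
    ground_truths: one (start, end) window per query. Returns percent."""
    hits = sum(
        any(iou_1d(p, gt) >= m for p in preds[:k])
        for preds, gt in zip(predictions, ground_truths)
    )
    return 100.0 * hits / len(ground_truths)

# Example: one query whose top-1 window overlaps the ground truth
# with IoU = 8/12, so it counts as a hit at m = 0.5.
print(recall_at_k([[(10.0, 22.0)]], [(12.0, 20.0)], k=1, m=0.5))
```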
Baselines. We evaluate the impact of our NaQ data augmentation strategy by combining it with 3 existing methods in the literature.

(1) VSLNet treats natural-language grounding as a text-based question answering problem [29]. It represents the input video as a text passage and uses a span-based QA framework [24] to localize responses to text queries. This was adapted to perform the NLQ task in [12] by using SlowFast features pretrained on Kinetics 400 [9].

(2) EgoVLP proposes to pretrain video and text backbones on the EgoNCE pretraining task [18]. By leveraging large-scale video + text narrations from Ego4D, they successfully transfer features to a variety of tasks including NLQ. It was the runner-up entry for the Ego4D NLQ challenge at CVPR 2022. This method replaces the SlowFast features from the VSLNet baseline with the EgoVLP pretrained backbones. This baseline is complementary to our own approach, where we use narrations to augment the localization training for the NLQ task.

(3) ReLER adapts VSLNet to use a multi-scale cross-modal transformer architecture [19]. It also proposes to augment the training data using video-level augmentation strategies, like randomly sampling a subset of the video, to try and mitigate overfitting. This was the winning entry of the Ego4D NLQ challenge at CVPR 2022. We augment this method with EgoVLP pretrained backbones to obtain a stronger 'ReLER∗' baseline. Unlike this method, which augments the data at the video level, we propose to augment the data at the query level. We will demonstrate that NaQ is complementary and boosts the performance of ReLER.

Note that both EgoVLP and ReLER∗ leverage the exact same narration data as NaQ; NaQ requires no greater supervision or data.
Implementation details. For each baseline, we adapt the authors' code bases to train with NaQ data augmentation. For consistency, we report the results of each method as reproduced using the provided code and instructions, in addition to reporting the official paper numbers. We train each method with NaQ augmentation for 200 epochs and stop training early when the validation performance saturates. We found that it was helpful to finetune for up to 30 epochs on only the NLQ dataset. Please see Sec. S1 for details.
4.2. Experimental results

We report results on the NLQ validation set in Tab. 1. The poor performance of the VSLNet baseline on NLQ highlights the difficulty of the task: it requires localizing responses typically shorter than 10 seconds in 8+ minute long egocentric videos. The limited size of the training dataset further exacerbates this problem, since there are only 11.3k training queries.
Table 1. Results on NLQ validation. ∗replaces SlowFast with EgoVLP features. †Results reproduced using the authors' code. (The original table also includes a "Narrations" column marking which methods use Ego4D narration annotations; its check marks were lost in extraction.)

                             IoU=0.3            IoU=0.5
     Method               R@1      R@5       R@1      R@5
 1.  VSLNet [29]          5.45    10.74      3.12     6.63
 2.  VSLNet†              4.78    10.14      2.56     6.12
 3.  VSLNet + NaQ        10.14    19.01      5.78    12.69
     absolute gain       +5.36    +8.87     +3.22    +6.57
 4.  EgoVLP [18]         10.84    18.84      6.81    13.45
 5.  EgoVLP†             10.43    19.75      6.55    13.46
 6.  EgoVLP + NaQ        15.90    26.38      9.46    17.80
     absolute gain       +5.47    +6.63     +2.91    +4.34
 7.  ReLER [19]          10.79    13.19      6.74     8.85
 8.  ReLER†              10.25    12.49      6.27     8.23
 9.  ReLER∗              14.48    17.55      8.52    11.33
10.  ReLER∗ + NaQ        19.31    23.59     11.62    15.51
     absolute gain       +4.83    +6.04     +3.10    +4.18

However, when augmented with NaQ, the performance across all metrics nearly doubles, indicating the effectiveness of NaQ in addressing these challenges. This is a dramatic gain, though it comes at the cost of larger narrations data that is not available to VSLNet.

When VSLNet is augmented with NaQ, it is already competitive with EgoVLP, which pretrains video and text backbones with Ego4D videos + narrations and uses the same VSLNet query-localization architecture (rows 3 vs. 5). When NaQ is combined with EgoVLP, it further improves the performance by 2.9 to 6.6 points across metrics (row 5 vs. row 6). This confirms that NaQ augmentation for query localization training complements the EgoVLP pretraining of video-text backbones. Importantly, our gain here comes at no additional cost in data or annotations.

ReLER [19] uses SlowFast + CLIP video features. For a fair comparison, we replace the SlowFast features with EgoVLP features to obtain ReLER∗. This improves performance by a large margin as expected, and gives us a stronger baseline to compare with (row 8 vs. row 9). Recall that ReLER∗ uses video-level data augmentation via variable-length sliding windows and video splicing [19]. When ReLER∗ is augmented with NaQ, the performance increases by a significant margin. This confirms the complementary nature of the query-level augmentation we propose in NaQ and the video-level augmentation in ReLER.

Overall, we find that NaQ augmentation greatly improves the performance of all methods across all metrics. The absolute gains across metrics are remarkably consistent regardless of the underlying method. When averaged across the methods, NaQ improves the absolute recall@1 performance by 5.22 at IoU=0.3 and 3.07 at IoU=0.5, and the absolute recall@5 performance by 7.18 at IoU=0.3 and 5.03 at IoU=0.5. This confirms the generality and effectiveness of NaQ at expanding the limited NLQ annotations by bootstrapping them with narrations, a relatively cheaper and more abundant data source. More importantly, the insight in NaQ is not simply that large-scale data benefits performance. Rather, we emphasize how to use this data: we leverage narrations as queries for query-localization network training. This is evidenced by our experiments demonstrating major gains on EgoVLP and ReLER∗, methods which also benefit from large-scale pretraining on video-narration data.

Ego4D NLQ challenge. We submitted our best performing method (ReLER∗ + NaQ) to the Ego4D NLQ challenge leaderboard, where the NLQ evaluation is performed on an EvalAI server on a held-out set of test annotations [12]. Note that while the videos are available to participants, the annotations (including narrations) are not accessible. The results are shown in Tab. 2. VSLNet is the baseline provided by the organizers. ReLER and EgoVLP were the winning and runner-up entries from the CVPR 2022 edition of the challenge. Red Panda, Badgers@UW-Madison, and CONE are the top three entries from the ECCV 2022 edition of the challenge.3 As of the time of submission, NaQ is the leading entry among all methods on the leaderboard. Our approach has the best available results on this challenge, by a healthy margin.

Table 2. Results on the Ego4D NLQ challenge. †Primary metric for the challenge. ∗Unpublished work.

    Method                R@1       R@1      Mean      R@5       R@5
                        IoU=0.3   IoU=0.5    R@1†    IoU=0.3   IoU=0.5
    NaQ (ours)           18.46     10.74    14.59     21.50     13.74
    Red Panda∗           16.46     10.06    13.26     22.95     16.11
    Badgers@UW-Mad.∗     15.71      9.57    12.64     28.45     18.03
    CONE∗                15.26      9.24    12.25     26.42     16.51
    ReLER [19]           12.89      8.14    10.51     15.41      9.94
    EgoVLP [18]          10.46      6.24     8.35     16.76     11.29
    VSLNet [29]           5.42      2.75     4.08      8.79      5.07

TRJ ablation. We study the impact of using temporal response jittering (TRJ) (Sec. 3.2) in an ablation study. We observe that using TRJ improves performance consistently across all methods, by up to 0.7 points on the recall@1 metrics and 1.7 points on the recall@5 metrics. Please see Sec. S3 for the complete results.
4.3. Performance analyses

In the previous section, we verified the effectiveness of our approach through a careful comparison with recent state-of-the-art methods. We now ascertain the strengths and weaknesses of our approach through a series of quantitative studies, and discuss qualitative results in Fig. 4. For analysis-specific experiments, we adopt the EgoVLP + NaQ method since it requires lower computational cost and time to train.

3The code and reports for these methods were unavailable at the time of our experiments, so we could not compare with them outside the leaderboard.
[Figure 4. Qualitative analysis. We show three examples of NLQ task predictions (one per column). In each column, the natural language query is displayed at the top, the ground truth response is in the central row, and the model predictions are in the first and last rows; the temporal extents of the video and predicted time windows are shown next to the frames. We compare the ReLER∗ [19] baseline against our NaQ method, which augments the NLQ training for ReLER∗. Example 1 ("How many funnels are on the shelf?"): Our method successfully identifies the response window showing how many funnels are on the shelf, while the baseline fails. The object 'funnel' is a low-shot object with fewer than 10 training queries, supporting our experimental observation that NaQ has a strong advantage on low-shot objects and counting-based queries (see Tabs. 3 and 4). Example 2 ("Where was the brake pad before I took it?"): NaQ successfully recognizes the object 'brake pad' and is able to localize where it was taken; ReLER∗ incorrectly identifies a spanner as the response. Example 3 ("What color bottle is on the sink?"): A failure case for NaQ. While it correctly identifies a sink, this particular sink does not contain the bottle and the model fails to respond.]
Table 3. Performance over NLQ query types (recall@1 at IoU=0.5). We include query types with ≥ 100 val samples; all templates except the last are object/place queries, and "Who did I interact with during Y?" is a people query. In the original table, cases where NaQ improves recall by more than 0.5 points are highlighted.

    Query template                      VSLNet  +NaQ   EgoVLP  +NaQ   ReLER∗  +NaQ
    Where is X before/after Y?            1.86   6.62    5.26  10.70    9.78  13.98
    Where did I put X?                    0.96   3.58    3.22   6.44    6.39  11.34
    Where is X?                           3.13   3.14    3.62   4.83    5.82   6.26
    What did I put in X?                  2.94   5.76   10.37  13.13   10.29  12.61
    How many X's?                         4.67   9.82   14.39  15.79   14.33  20.67
    In what location did I see X?         2.39   2.60    2.23   2.60    4.78   4.78
    What X did I Y?                       3.53   8.61    9.27  11.59   11.54  15.38
    What X is Y?                          1.96   5.86    3.52   7.03    6.54   6.86
    State?                                3.57   8.59    8.59  12.88   10.12  14.29
    Who did I interact with during Y?     2.94   6.52    7.61  13.04    4.90   7.84
[Figure 5. Data scaling analysis. We train EgoVLP + NaQ using all NLQ data and k% of the NaQ dataset (k on the X-axis, for k = 0, 25, 50, 75, 100); the panels plot recall@1 and recall@5 at IoU=0.3 and IoU=0.5. NLQ performance scales roughly linearly with the size of the NaQ dataset.]
(1) How does performance scale with narrations? One of the key benefits of using narrations for pretraining is that they are available on a large scale. We generated 850k narrations-as-queries for the NLQ task, which is two orders of magnitude larger than the NLQ dataset containing 11.3k train queries. We now study performance scaling as a function of the amount of narrations used for training. For this, we additionally trained EgoVLP + NaQ with 10%, 25%, and 50% of the narrations. Fig. 5 shows the results on NLQ (val). The 0% performance represents EgoVLP and the 100% performance represents the full EgoVLP + NaQ reported in Tab. 1. When adding only 10% of our NaQ data, we already observe good improvements on all metrics. The performance continues to scale linearly as we add more narrations for NaQ augmentation, confirming the utility of our paradigm.

(2) What types of queries does NaQ benefit? Next, we break down the NLQ performance across query types, i.e., the form of reasoning required by the query (e.g., where did I put object X? who did I talk to while doing activity Y?). The NLQ dataset was created by providing an initial set of 13 query templates [12]. For reliable evaluation, we select 10 of the 13 templates which contain 100 or more samples in the validation split, and report results in Tab. 3.
Table 4. Performance breakdown across object types. For object-type queries, we categorize objects into low-shot, mid-shot, and high-shot objects based on their frequency of occurrence, and report recall@1 at IoU=0.3 and IoU=0.5. In the original table, cases where NaQ improves recall by over 0.5 points are highlighted.

               High-shot           Mid-shot            Low-shot
    Method   IoU=0.3  IoU=0.5   IoU=0.3  IoU=0.5   IoU=0.3  IoU=0.5
    VSLNet     5.65     2.82      3.71     2.48      3.84     2.30
     +NaQ      9.72     5.53     11.26     7.00     10.14     5.57
    EgoVLP    11.32     5.83     10.96     6.70      9.63     6.42
     +NaQ     16.59     9.27     16.13    10.20     16.05    10.30
    ReLER∗    17.07    10.35     17.74    10.18     13.21     8.29
     +NaQ     21.37    12.37     21.87    12.38     17.20    10.75
We observe that using NaQ leads to significant improvements on 8 of the 10 templates for at least 2 of the 3 methods. However, it has only a limited impact for 'Where is object X?' and 'In what location did I see X?' queries. These queries may require explicit spatial understanding to achieve better performance. Since all methods perform poorly on those queries and do not benefit from training on NaQ, this hints at the need to incorporate better spatial understanding into video models.

(3) Does NaQ help respond about long-tail objects? The NLQ dataset has a long tail of objects that are the subject of queries, due to the sparse nature of NLQ annotations (1 query per 1.4 minutes of video on average). However, since narrations are more densely annotated throughout the video (20+ narrations per minute), they contain rich information about objects that are rarely queried about. We therefore study whether pretraining NLQ localization models with narrations can help respond to queries about long-tail objects. We divide objects from the NLQ train annotations into 3 types (as shown in Fig. S1): 1. high-shot objects, which are queried more than 50 times (65 in total); 2. mid-shot objects, which are queried 10 to 50 times (147 in total); and 3. low-shot objects, which are queried between 2 and 10 times (967 in total). The results are in Tab. 4. Overall, we observe that NaQ improves performance by a large margin in most cases, and has the biggest gains on mid-shot and low-shot objects. This indicates that using narrations as queries helps mitigate some of the biases in the NLQ data, and improves responses to queries about less frequently occurring objects.

(4) Does NaQ facilitate zero-shot / few-shot NLQ? Considering that NaQ enables better performance on long-tail objects, we next study whether it can facilitate zero-shot or few-shot learning for NLQ, i.e., given our large-scale NaQ data and little to no NLQ task annotations, can we learn good NLQ models? We are the first to study this, to the best of our knowledge.
[Figure 6. Zero-shot and few-shot learning for NLQ. We train EgoVLP + NaQ using all of NaQ and k% of the NLQ train data (k on the X-axis); the panels plot recall@1 and recall@5 at IoU=0.3 and IoU=0.5. The dotted horizontal lines represent EgoVLP performance with 100% of the NLQ data and no NaQ augmentation.]
We train the EgoVLP + NaQ method with all of NaQ and k% of the NLQ train data, where k = {0, 10, 25, 35}. k = 0 represents the zero-shot case, and the rest represent few-shot learning. The results are in Fig. 6. The triangles represent EgoVLP + NaQ with k% NLQ data, and the horizontal line represents the EgoVLP baseline with no NaQ data. It is interesting to observe that even with no NLQ data, the model performs well using NaQ and matches the EgoVLP performance on the R@5 metrics. When we inject 10% of the NLQ dataset, we get comparable or better performance on 3 of 4 metrics. At 25% of NLQ data, it matches or outperforms EgoVLP on all metrics. Finally, at 35%, we comprehensively outperform EgoVLP. This study suggests that we can leverage large-scale free-form narration annotations using NaQ to compensate for the lack of NLQ annotations. While narrations are not free to obtain, they are easier to annotate than NLQ and can also be used for various purposes other than the NLQ task itself [12], meaning that many research directions are likely to continue investing in them.
|
| 1154 |
+
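A minimal sketch of how the k% few-shot training mixture from Fig. 6 could be assembled; the sample tuple format and helper name are assumptions, not the paper's code.

```python
import random

def build_fewshot_train_set(naq_samples, nlq_samples, k, seed=0):
    """Mix all NaQ pseudo-queries with k% of the NLQ train annotations.

    Hypothetical helper mirroring the Fig. 6 setup: both inputs are lists
    of (video_id, query_text, start, end) tuples; k = 0 gives the
    zero-shot setting, k = 100 the full NLQ + NaQ setting.
    """
    rng = random.Random(seed)
    n_keep = round(len(nlq_samples) * k / 100)
    kept_nlq = rng.sample(nlq_samples, n_keep)
    return naq_samples + kept_nlq
```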
5. Conclusions

In this work, we propose Narrations-as-Queries, a simple data augmentation technique that dramatically improves state-of-the-art results on the Natural Language Queries task in the Episodic Memory benchmark. Our key insight is to convert timestamped narrations in egocentric videos into natural language queries and use them as additional data for training NLQ localization models. To convert timestamped narrations into a form compatible with NLQ, we propose a temporal response jittering technique to convert a single timestamp into temporal windows. We perform experiments to demonstrate that our approach can be used as a simple plug-in to existing methods, massively improves multiple top methods for this task, and yields the very best performance to date on the Ego4D NLQ benchmark. We hope that our approach serves as a useful tool for future research on this problem. We will share code, data, and models upon publication.
References

[1] Yazan Abu Farha, Alexander Richard, and Juergen Gall. When will you do what? Anticipating temporal occurrences of activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5343–5352, 2018. 2
[2] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728–1738, 2021. 2
[3] Minjie Cai, Kris Kitani, and Yoichi Sato. Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes. arXiv preprint arXiv:1807.08254, 2018. 2
[4] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Jian Ma, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-KITCHENS-100. International Journal of Computer Vision (IJCV), 130:33–55, 2022. 2
[5] Dima Damen, Teesid Leelasawassuk, Osian Haines, Andrew Calway, and Walterio W. Mayol-Cuevas. You-do, I-learn: Discovering task relevant objects and their modes of interaction from multi-user egocentric video. In BMVC, volume 2, page 3, 2014. 2
[6] Ana Garcia Del Molino, Cheston Tan, Joo-Hwee Lim, and Ah-Hwee Tan. Summarization of egocentric videos: A comprehensive survey. IEEE Transactions on Human-Machine Systems, 47(1):65–76, 2016. 2
[7] Victor Escorcia, Mattia Soldan, Josef Sivic, Bernard Ghanem, and Bryan Russell. Temporal localization of moments in video collections with natural language. arXiv preprint arXiv:1907.12763, 2019. 2
[8] Alireza Fathi, Xiaofeng Ren, and James M. Rehg. Learning to recognize objects in egocentric activities. In CVPR 2011, pages 3281–3288, 2011. 2
[9] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6202–6211, 2019. 5
[10] Antonino Furnari and Giovanni Maria Farinella. Rolling-unrolling LSTMs for action anticipation from first-person video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(11):4021–4036, 2020. 2
[11] Rohit Girdhar and Kristen Grauman. Anticipative video transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13505–13515, 2021. 2
[12] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4D: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995–19012, 2022. 1, 2, 3, 5, 6, 7, 8
[13] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with temporal language. In EMNLP, 2018. 2
[14] Evangelos Kazakos, Arsha Nagrani, Andrew Zisserman, and Dima Damen. EPIC-Fusion: Audio-visual temporal binding for egocentric action recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5492–5501, 2019. 2
[15] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 706–715, 2017. 2
[16] Y. J. Lee and K. Grauman. Predicting important objects for egocentric video summarization. International Journal on Computer Vision, 2015. 2
[17] Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. HERO: Hierarchical encoder for video+language omni-representation pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2046–2065, 2020. 1, 2, 3
[18] Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Zhongcong Xu, Difei Gao, Rongcheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. arXiv preprint arXiv:2206.01670, 2022. 1, 2, 3, 4, 5, 6, 11
[19] Naiyuan Liu, Xiaohan Wang, Xiaobo Li, Yi Yang, and Yueting Zhuang. ReLER@ZJU-Alibaba submission to the Ego4D natural language queries challenge 2022. arXiv preprint arXiv:2207.00383, 2022. 1, 3, 5, 6, 7, 11
[20] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9879–9889, 2020. 1, 2, 3
[21] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2630–2640, 2019. 2
[22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021. 1
[23] Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. Movie description. International Journal of Computer Vision, 123(1):94–120, 2017. 1, 2
[24] Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603, 2016. 5
[25] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7464–7473, 2019. 1, 2, 3
[26] Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video question answering via gradually refined attention over appearance and motion. In Proceedings of the 25th ACM International Conference on Multimedia, pages 1645–1653, 2017. 1
[27] Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, and Luke Zettlemoyer. VLM: Task-agnostic video-language model pre-training for video understanding. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4227–4239, 2021. 1, 2
[28] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just ask: Learning to answer questions from millions of narrated videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1686–1697, 2021. 1, 2, 3, 5
[29] Hao Zhang, Aixin Sun, Wei Jing, and Joey Tianyi Zhou. Span-based localizing network for natural language video localization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6543–6554, 2020. 1, 3, 5, 6
[30] Songyang Zhang, Houwen Peng, Jianlong Fu, and Jiebo Luo. Learning 2D temporal adjacent networks for moment localization with natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12870–12877, 2020. 1, 3
[31] Sipeng Zheng, Qi Zhang, Bei Liu, Qin Jin, and Jianlong Fu. Exploring anchor-based detection for Ego4D natural language query. arXiv preprint arXiv:2208.05375, 2022. 1, 3
[32] Luowei Zhou, Yingbo Zhou, Jason J. Corso, Richard Socher, and Caiming Xiong. End-to-end dense video captioning with masked transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8739–8748, 2018. 2
[33] Yipin Zhou and Tamara L. Berg. Temporal perception and prediction in ego-centric video. In Proceedings of the IEEE International Conference on Computer Vision, pages 4498–4506, 2015. 2
[Figure S1: a log-log histogram ("Distribution over object frequencies") plotting the number of objects on the Y-axis against the number of queries per object on the X-axis, with the low-shot, mid-shot, and high-shot regions marked.]
Figure S1. Long-tail of objects in NLQ.

Supplementary Materials

We now provide additional information about our experimental settings, and qualitative and quantitative analyses to support our experiments in the main paper.
S1. Implementation details

We perform joint NaQ + NLQ training with large batch sizes and high learning rates for accelerated convergence. For the VSLNet and EgoVLP methods, we use a batch size of 2048 and an initial learning rate of 0.001 on 2 A40 GPUs with a memory size of 46GB per GPU. For ReLER∗, we use a batch size of 1536 and an initial learning rate of 0.001 on 8 A40 GPUs since it has larger memory and compute requirements. We train each method for up to 200 epochs on NaQ + NLQ training data, and then finetune it for up to 30 epochs on NLQ training data alone with a lower learning rate. We found finetuning to be unnecessary for VSLNet. For EgoVLP, we finetuned with the original hyperparameter settings from [18] and a learning rate of 0.00001. For ReLER∗, we finetuned with the original hyperparameter settings from [19] and a learning rate of 0.0001. We perform early stopping in each case using the performance on the NLQ validation split.

For temporal random jittering (TRJ), we performed a grid search with the expansion factor values S = {2.5, 5.0, 10.0, 20.0}. We found S = 2.5 to work best for EgoVLP and VSLNet, and S = 5.0 to work best for ReLER∗, based on their NLQ validation performance.
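As a rough illustration of what the expansion factor controls, here is a minimal sketch of turning a single narration timestamp into a jittered temporal window. The actual forms of Eq. (1) and Eq. (2) are defined in the main paper and are not visible here, so the seed width and the sampling scheme below are assumptions.

```python
import random

def trj_window(t, S, clip_len, seed_width=1.0, rng=random):
    """Temporal response jittering sketch (assumptions, not Eq. (1)/(2)).

    Expands a hypothetical seed window of `seed_width` seconds around
    the narration timestamp `t` by a factor drawn up to `S`, with a
    random shift so `t` need not sit at the window center.
    """
    width = seed_width * rng.uniform(1.0, S)   # jittered window length
    offset = rng.uniform(0.0, width)           # jittered position of t
    start = max(0.0, t - offset)
    end = min(clip_len, start + width)
    return start, end
```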
S2. Long-tail of objects in NLQ

Fig. S1 shows the long tail of objects queried about in NLQ, and the split of low-shot, mid-shot, and high-shot objects used in Sec. 4.3. Note that for a given point x on the X-axis, the Y-axis shows the number of objects that have x queries in the NLQ train dataset. For example, there are more than 1000 objects with only 1 training sample.
S3. Ablation study for Temporal Response Jittering

We study the impact of using temporal response jittering (TRJ) described in Eq. (2). In Tab. S1, we measure the performance of using NaQ with and without TRJ, where not using TRJ implies that the seed temporal window from Eq. (1) is used. Overall, we observe a consistent improvement of up to 0.83 in the R@1 metrics and 1.70 in the R@5 metrics. This indicates that TRJ is able to address the limitations of the seed temporal window.

Method          TRJ   R@1 (IoU=0.3)  R@5 (IoU=0.3)  R@1 (IoU=0.5)  R@5 (IoU=0.5)
VSLNet + NaQ    no     9.89          18.02           5.30          10.99
VSLNet + NaQ    yes   10.14          19.01           5.78          12.69
absolute gain         +0.25          +0.99          +0.48          +1.70
EgoVLP + NaQ    no    15.27          25.93           9.07          17.14
EgoVLP + NaQ    yes   15.90          26.38           9.46          17.80
absolute gain         +0.63          +0.45          +0.39          +0.66
ReLER∗ + NaQ    no    18.48          23.26          11.25          15.44
ReLER∗ + NaQ    yes   19.31          23.59          11.62          15.51
absolute gain         +0.83          +0.33          +0.37          +0.07

Table S1. Ablation study of temporal random jittering (TRJ).
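Since Tab. S1 and the other tables report R@k at fixed IoU thresholds, here is a minimal sketch of that metric for a single query; this follows the standard definition of temporal IoU and top-k recall rather than any code from the paper.

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two (start, end) windows in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_k(ranked_preds, gt, k=1, iou_thresh=0.3):
    """1 if any of the top-k predicted windows overlaps the ground-truth
    window above the threshold; averaging over all queries gives the
    R@k numbers reported at a given IoU."""
    return float(any(temporal_iou(p, gt) >= iou_thresh
                     for p in ranked_preds[:k]))
```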
S4. Few-shot analysis

We perform a more detailed analysis of the few-shot performance discussed in Sec. 4.3 and Fig. 6. Specifically, we analyze the zero-/few-shot performance across the various query templates in Tab. S2. When tested zero-shot, NaQ already competes with or outperforms the baseline on object/place templates such as 'where is X before/after Y?', 'where did I put X?', 'where is X?', 'in what location did I see X?', 'what X is Y?', and 'object state'.⁴ As we inject NLQ data into NaQ training, the performance improves quickly on the remaining templates, and outperforms the baseline on 8/10 templates.
S5. Qualitative examples

In supplementary.html shared here, we link to qualitative videos for the following:
• Comparing annotations for NLQ vs. Narrations
• NaQ benefits performance on most query templates
• NaQ benefits performance on queries about long-tail objects
• NaQ facilitates zero-shot NLQ

⁴ We provide video visualizations of the zero-shot performance on these 4 templates in supplementary.html.
Query templates (T1–T9 are object/place queries; T10 is a people query):
T1: Where is X before/after Y?   T2: Where did I put X?   T3: Where is X?
T4: What did I put in X?   T5: How many X's?   T6: In what location did I see X?
T7: What X did I Y?   T8: What X is Y?   T9: State?   T10: Who did I interact with during Y?

% NLQ  % NaQ    T1     T2    T3     T4     T5    T6     T7    T8     T9    T10
 100     0     5.26   3.22  3.62  10.37  14.39  2.23   9.27  3.52   8.59   7.61
   0   100     4.41   4.29  2.90   2.53   5.26  1.49   4.30  6.25   7.36   3.26
  10   100     8.15   5.72  2.66   5.07   5.96  1.12   3.64  5.86   6.13   4.35
  25   100    10.70   5.19  3.38   5.99   8.07  1.49   5.30  6.25   6.13   5.43
  35   100     9.51   5.55  3.86   7.83  14.04  4.09   7.62  7.81   7.98   5.43
 100   100    10.70   6.44  4.83  13.13  15.79  2.60  11.59  7.03  12.88  13.04

Table S2. Few-shot analysis. We split the few-shot results from Fig. 6 in the main paper across the various query templates. We report recall@1 at IoU=0.5. The first two columns show the percentage of the NLQ and NaQ data used for training. For example, the first row with 100% NLQ and 0% NaQ is the baseline, the second row with 0% NLQ and 100% NaQ is our zero-shot setting, and so on.
4dAyT4oBgHgl3EQf2Pkf/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
79AyT4oBgHgl3EQf2_lP/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:180e6e1c5849da979ebfd9f1451b1da61025678753d766f24754a77884e7b34f
size 7602221

7dE5T4oBgHgl3EQfQA7X/content/2301.05510v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3912c42c1546464025f9e52a4f34ccc92ebada493ef20d090ee81c507ce396c5
size 1357462

7dE5T4oBgHgl3EQfQA7X/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d556bb1223f86fc90e80387dd5ed3e8af8e326977ee89e58f492c6bfc3805287
size 4718637

8NE1T4oBgHgl3EQfngRF/content/2301.03309v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e42b33902e58c2754c19fa093cae57551ca56771df82485049701e07e5d1d020
size 16863862

8dE0T4oBgHgl3EQffgB5/content/2301.02405v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b46e64408cbe77bb14b8513372c6504ee6c6b06625e11cb33061bf7cae0e83b
size 11574642

8dE0T4oBgHgl3EQffgB5/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91bfa05ad0e2db5f36dedd8b9959e02cbe7b7eef6c3faabdc4df1ca1445b0ada
size 1900589

8dE0T4oBgHgl3EQffgB5/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:faf353f4a8e0a79c9699fa5d4fb2b47e8dd173a515cbdc4c83a82ea1deea1b83
size 78520

AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf ADDED
Binary file (75 kB). View file

AdAzT4oBgHgl3EQfTPx3/content/tmp_files/2301.01246v1.pdf.txt ADDED
@@ -0,0 +1,694 @@
arXiv:2301.01246v1 [cs.AI] 3 Jan 2023

Efficient method for handling diverse agents in QDec-POMDPs

Nitsan Soffair
Ben Gurion University
soffair@post.bgu.ac.il

Abstract

The SOTA algorithms for addressing QDec-POMDP issues, QDec-FP and QDec-FPS, are unable to effectively tackle problems that involve different types of sensing agents. We propose a new algorithm that addresses this issue by requiring agents to adopt the same plan if one agent is unable to take a sensing action but the other can. Our algorithm performs significantly better than both QDec-FP and QDec-FPS in these types of situations.
1 Introduction

Automated planning and scheduling [Wikipedia contributors, 2022a] is a field of artificial intelligence that deals with creating and implementing strategies or action sequences for intelligent agents, autonomous robots, and unmanned vehicles. It involves finding and optimizing solutions in complex multidimensional spaces and is closely related to decision theory. Planning can be done offline in known environments, but in unknown environments the strategy may need to be revised online, and models and policies may need to be adapted.
2 Background

2.1 MDP

An MDP [Wikipedia contributors, 2022b] is a 4-tuple (S, A, P, R) where S is the state space, A is the action space, P(s′ | s, a) is the probability that action a in state s will lead to the next state s′, and R is the immediate reward received after transitioning from one state to the next. A policy function π is a mapping from the state space to the action space.
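To make the tuple concrete, here is a minimal illustrative sketch of the 4-tuple and a tabular policy; the names and dictionary encodings are our own, not from the paper.

```python
from dataclasses import dataclass
from typing import Dict, Set, Tuple

State, Action = str, str

@dataclass
class MDP:
    states: Set[State]                            # S
    actions: Set[Action]                          # A
    P: Dict[Tuple[State, Action, State], float]   # P(s' | s, a)
    R: Dict[Tuple[State, State], float]           # reward for s -> s'

# A policy maps states to actions, e.g. a plain dictionary:
policy: Dict[State, Action] = {"s0": "push", "s1": "move"}
```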
2.2 POMDP

A POMDP [Wikipedia contributors, 2022c] is a 7-tuple (S, A, T, R, Ω, O, γ) where S is the set of states, A is the set of actions, T is a set of transition probabilities between states, R is the reward function, Ω is a set of observations, O is a set of observation probabilities, and γ ∈ [0, 1] is the discount factor. At each time period, the environment is in some state. The agent takes an action a, which causes the environment to transition to the next state s′ with probability T(s′ | s, a). At the same time, the agent receives an observation o which depends on the new state of the environment and on the just-taken action a, with probability O(o | s′, a). Finally, the agent receives a reward r equal to R(s′, a).
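Because the agent only sees observations, it maintains a belief (a distribution over states). Here is a minimal sketch of the standard Bayesian belief update, written directly from the T and O definitions above; it is illustrative and assumes complete probability tables.

```python
def update_belief(belief, a, o, states, T, O):
    """Bayes filter: b'(s') ∝ O(o | s', a) * Σ_s T(s' | s, a) * b(s).

    `belief` maps state -> probability; T is keyed (s, a, s') and
    O is keyed (o, s', a), matching the definitions in the text.
    """
    new_belief = {}
    for s2 in states:
        pred = sum(T[(s, a, s2)] * belief[s] for s in states)
        new_belief[s2] = O[(o, s2, a)] * pred
    z = sum(new_belief.values())   # normalizer, i.e. P(o | belief, a)
    return {s: p / z for s, p in new_belief.items()}
```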
2.3 Dec-POMDP

A Dec-POMDP [Wikipedia contributors, 2020] is a 7-tuple (S, {Ai}, T, R, {Ωi}, O, γ) where S is the set of states, Ai is the set of actions for agent i, {Ai} is the set of joint actions, T is a set of transition probabilities between states, Ωi is the set of observations for agent i, {Ωi} is the set of joint observations, O is a set of observation probabilities, and γ ∈ [0, 1] is the discount factor. At each time step, each agent takes an action a, the state updates based on the transition function T, each agent observes an observation based on the observation function O, and a reward is generated for the whole team based on the reward function R.
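A minimal sketch of one synchronous Dec-POMDP step as described: every agent acts, the state transitions, each agent receives its own observation, and a single team reward is produced. The sampling helpers are hypothetical stand-ins for T, O, and R.

```python
def dec_pomdp_step(state, joint_action, T_sample, O_sample, R):
    """One joint step: `joint_action` is a tuple with one action per
    agent; `T_sample`/`O_sample` draw from the transition and
    observation distributions, and R gives the shared team reward."""
    next_state = T_sample(state, joint_action)
    observations = [O_sample(i, next_state, joint_action)
                    for i in range(len(joint_action))]
    reward = R(state, joint_action, next_state)   # shared by the team
    return next_state, observations, reward
```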
2.4 QDec-POMDP

A QDec-POMDP [Brafman et al., 2013] is a model for representing the decision-making process of multiple agents in a dynamic environment. It consists of a set of agents, states, actions, observations, and a goal. The QDec-POMDP uses policy trees to represent the local plans of each agent, with each node labeled with an action and each branch labeled with an observation. To execute the plan, the agent performs the action at the root of the tree and then uses the subtree labeled with the observation it obtains to guide future action selection.
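A minimal sketch of the policy-tree representation just described: each node carries an action, and each observation labels the branch to the subtree guiding the next choice. The `env.step` interface is a hypothetical assumption.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PolicyTree:
    action: str                                    # action at this node
    children: Dict[str, "PolicyTree"] = field(default_factory=dict)

def execute(tree, env):
    """Run a local plan: act at the root, then follow the branch labeled
    with the obtained observation; stop at a leaf."""
    node = tree
    while node is not None:
        obs = env.step(node.action)   # hypothetical: returns an observation
        node = node.children.get(obs)
```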
2.5 SDR

The SDR [Brafman and Shani, 2012] planner is a method for planning under uncertainty in which a single state is chosen from the current belief state and used to create a deterministic classical problem. The resulting plan is then executed until a sensing action is performed, at which point the belief state is updated and the process is repeated. This version of SDR maintains and uses a complete, explicit description of the belief state, though a modified version of the algorithm uses sampling and lazy belief-state maintenance.
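A sketch of the SDR replan-execute loop as outlined above: sample one possible world from the belief, plan classically for it, execute until a sensing action fires, update the belief, and repeat. All callbacks are assumptions standing in for the planner's internals.

```python
def sdr_loop(belief, goal, sample, classical_plan, is_sensing,
             execute, update):
    """Replanning under partial observability, following the SDR outline.

    `sample`, `classical_plan`, `is_sensing`, `execute`, and `update`
    are hypothetical helpers, not the planner's actual API.
    """
    while not goal(belief):
        s = sample(belief)                  # pick one possible world
        plan = classical_plan(s, goal)      # deterministic plan for it
        for action in plan:
            obs = execute(action)
            if is_sensing(action):
                belief = update(belief, action, obs)
                break                        # replan from the new belief
    return belief
```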
2.6 CPOR

The CPOR [Maliah et al., 2014] algorithm repeatedly selects and executes sensing actions in order to gather information and achieve a goal. The planner uses a classical projection to plan for the preconditions of each observation action and then executes the action. The selection of the next sensing action is based on an estimation of the myopic value of information, i.e., the value that will be achieved from executing the action without considering future observations. This value is calculated using the number of disjunctive action landmarks that can be achieved following the sensing action.
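The selection rule above reduces to a one-step lookahead score, sketched below; the landmark counter is an assumed callback, not CPOR's actual implementation.

```python
def select_sensing_action(candidates, count_unlocked_landmarks):
    """Myopic value-of-information heuristic: pick the sensing action
    estimated to unlock the most disjunctive action landmarks,
    ignoring any observations that would come after it."""
    return max(candidates, key=count_unlocked_landmarks)
```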
2.7 Factored planning

The algorithm [Shekhar et al., 2021a] first creates a single-agent team problem by treating all actions and observations as if they are performed by a single combined agent. This results in a team solution tree, which is then projected to each individual agent. Each agent then tries to generate a local policy that includes the projected sub-tree as a solution. If all agents are able to solve their local problems, the actions are aligned and a solution is returned. If one of the agents cannot solve its problem, a new team solution is generated and the process is repeated. If no new team solution is possible, the process fails.
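A sketch of that team-then-local loop; all helpers are hypothetical stand-ins for the planner's components, and `next_team_solution` is assumed to produce a fresh team tree on each call (or None when none remain).

```python
def factored_plan(agents, next_team_solution, project, solve_local, align):
    """Team-then-local factored planning loop described above."""
    while True:
        team_tree = next_team_solution()     # single combined meta-agent
        if team_tree is None:
            return None                       # no new team solution: fail
        local_plans = []
        for agent in agents:
            subtree = project(team_tree, agent)
            plan = solve_local(agent, subtree)   # extend the projection
            if plan is None:
                break                         # backtrack to a new team plan
            local_plans.append(plan)
        else:
            return align(local_plans)         # all solved: align actions
```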
2.8 QDec-FP

QDec-FP [Shekhar et al., 2021b] is a three-stage process for solving multi-agent problems. In the first stage, a team solution is generated by treating all actions as if they were executed by a single meta-agent. In the second stage, the projection of the team solution is extended for each individual agent. Finally, in the third stage, the single-agent plan trees are aligned.
2.9 QDec-FPS

In QDec-FPS [Shekhar et al., 2021b], the SDR translation maintains two propositions for each proposition, representing that the agent knows it to be true or knows it to be false. It also transforms preconditions of actions into propositions that must be known to be true in all possible worlds. In addition, QDec-FPS allows agents to communicate by signaling to each other: an agent sets the value of a variable that can be sensed by other agents, allowing them to reason about the value of a proposition they cannot sense.
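A sketch of the knowledge compilation and signaling ideas above: each proposition p is doubled into "knows p" and "knows not-p" literals, and a signaling fluent lets one agent expose a value another agent cannot sense. The encoding below is illustrative, not the paper's exact translation.

```python
def compile_knowledge_literals(propositions):
    """Two knowledge literals per proposition, per the SDR translation."""
    return {p: {"knows_true": f"K_{p}", "knows_false": f"K_not_{p}"}
            for p in propositions}

def signal(shared_vars, sender, prop, value):
    """Signaling sketch: the sender sets a variable other agents can
    sense, letting them reason about a proposition they cannot observe."""
    shared_vars[(sender, prop)] = value
```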
3 Algorithm

The algorithm consists of two steps. In the first step, we prepare the environment by determining the sensory capabilities of each agent. In the second step, we use QDec-FP to create a team plan, ensuring that any actions that rely on observations that an agent cannot make are eliminated. The subsequent steps are identical to those in QDec-FP.
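A minimal sketch of the second step's filtering, under an assumed action schema: each candidate action records its executing agent and, optionally, the observation it relies on, and `capabilities[agent]` is a hypothetical set of propositions that agent can sense.

```python
def filter_team_actions(team_actions, capabilities):
    """Drop any candidate action whose required observation its
    executing agent cannot sense; the remaining actions are handed
    to QDec-FP for team planning."""
    kept = []
    for action in team_actions:
        needed = action.get("senses")    # observation the action relies on
        agent = action["agent"]
        if needed is None or needed in capabilities[agent]:
            kept.append(action)
    return kept
```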
4 Domains

4.1 Box-pushing

There is a grid with boxes that need to be moved to locations outside of the column they are currently in. One agent can push a light box, but two agents are required to push a heavy box. The agents can vary in their abilities and can be assigned to push different boxes.

4.2 Table-mover

The system includes several tables, connected rooms, and agents that can move between the rooms. The exact location of the tables is not known at the beginning, and the agents must move them to their designated locations. The agents can have different capabilities for sensing and manipulating objects. All actions involving the manipulation of tables, including moving, lifting, and dropping them, require the collaboration of at least two agents.

5 Results

The experiments were run on a computer with a 4-core processor running at 2.40GHz. The domain of an experiment could be either homogeneous or heterogeneous, denoted HM and HT, respectively. The variables measured in the experiments included the number of backtracks (#bts), the time needed for the planning process, the maximum tree width, and the maximum tree height. The results are an average of 10 experiments. The winner of QDec versus the variant for each criterion is noted in bold. If the solver was unable to solve the problem, this is indicated by an asterisk.

5.1 Box-pushing

Grid size 3 with 1 box is represented by B1(3).
QDec-FP
type  domain   #bts   time   width   height
HM    B1(3)    0      4.3    7.6     20.1
HM    B2(4)    0      7.8    15.6    18.6
HT    B4(3)    8.5    14.1   4       8.4
HT    B5(3)    11.1   40.8   7.4     12.9
HT    B6(3)    17.5   77.6   8       14.4
HT    B9(5)    36.5   7.7M   27      25
HT    B10(5)   *      *      *       *

QDec-FP variant
type  domain   #bts   time   width   height
HM    B1(3)    0      3.9    7.2     18.8
HM    B2(4)    0      8.1    15.6    20.4
HT    B4(3)    7.7    9.6    4       10.1
HT    B5(3)    4.8    14.4   6       11.8
HT    B6(3)    18.2   51.1   8       14.6
HT    B9(5)    30.5   1.5M   27.25   26.75
HT    B10(5)   19.5   689K   27      27.25
The variant has no additional costs when there are no backtracks. However, when backtracking is necessary, the variant allows for faster planning and produces a higher quality tree. This is because the variant focuses on the failing agent, speeds up the backtracking process, ensures that branching is equal among agents who cannot sense their surroundings, and enables the creation of valid team plans through the use of CPOR.
QDec-FPS
type  domain   #bts   time   width   height
HM    B1(3)    0      3.4    6.1     16.9
HM    B2(4)    0      7      9       17
HT    B4(3)    0      1.2    4       8.4
HT    B5(3)    1.7    10.6   7.2     13.3
HT    B6(3)    0      3.5    7.6     16
HT    B7(4)    *      *      *       *
HT    B8(4)    *      *      *       *
HT    B9(4)    *      *      *       *

QDec-FPS variant
type  domain   #bts   time   width   height
HM    B1(3)    0      4      5.7     18.2
HM    B2(4)    0      8.3    11.7    19.4
HT    B4(3)    0.8    2.2    4       8.8
HT    B5(3)    1.3    7      6.4     12.6
HT    B6(3)    0      4.2    8       16
HT    B7(4)    0      3.7    6       9.5
HT    B8(4)    0.2    5      5.6     9.5
HT    B9(4)    0      8.3    13      19
In the case of no backtracks, the variant has a slower running time and lower quality trees. In the case of 1+ backtracks, the variant has a faster running time and higher quality trees. This is because the variant has fewer agent constraints and larger SDR problems, which makes the backtrack mechanism faster and allows for better team plans.

5.2 Table-mover

T1(3) refers to a grid of size 3 containing only 1 table.
QDec-FP
type  domain   #bts   time    width   height
HM    T1(3)    0      9.6     8       19.8
HM    T3(4)    1.3    37.4    14.1    33.6
HT    T6(3)    8.7    7.3     2       8
HT    T9(3)    12.5   66.1    8       21
HT    T11(5)   39.5   879K    13      25

QDec-FP variant
type  domain   #bts   time    width   height
HM    T1(3)    0      4.4     8       20
HM    T3(4)    1.1    26.6K   14.2    33.7
HT    T6(3)    9      11.1K   2       7.6
HT    T9(3)    10     26K     8       18.67
HT    T11(5)   15     176K    11.25   24.25
The QDec-FP variant is efficient in simple problems with no added overhead. It also performs faster and more efficiently in complex problems, using fewer backtracks and producing smaller plan trees.
QDec-FPS
type  domain   #bts   time    width   height
HM    T1(3)    0      4.5     7.8     18
HM    T3(4)    0      13.6    13.8    27.5
HT    T6(3)    0      0.7     2       6.4
HT    T9(3)    0      3.7     8       16
HT    T11(5)   0      15.5K   12      33
HT    T12(5)   *      *       *       *
HT    T13(5)   *      *       *       *
HT    T14(5)   *      *       *       *

QDec-FPS variant
type  domain   #bts   time    width   height
HM    T1(3)    0      5.5     7.9     18
HM    T3(4)    0      13.7    13.4    17.4
HT    T6(3)    0      0.7     2       7.2
HT    T9(3)    1      7.6     8       25
HT    T11(5)   3      42K     14      24.5
HT    T12(5)   0      32K     8       20
HT    T13(5)   5      233K    16      27
HT    T14(5)   10     140K    16      39
The QDec-FPS variant is able to handle more complex problems, solves them faster, and has only a small overhead when dealing with simple problems.

6 Conclusion

The QDec-FP variant is a planning algorithm that is efficient in both simple and complex problems, producing high quality tree plans. In cases of backtracking, it speeds up the process and creates better team plans. The QDec-FPS variant is also able to handle complex problems efficiently, with a small overhead in simple problems.

7 Further work

The variant is not capable of addressing the need for complex communication between agents in certain domains.
References

[Brafman and Shani, 2012] R. I. Brafman and G. Shani. Replanning in domains with partial information and sensing actions. Journal of Artificial Intelligence Research, 45:565–600, Dec 2012.
[Brafman et al., 2013] Ronen I. Brafman, Guy Shani, and Shlomo Zilberstein. Qualitative planning under partial observability in multi-agent domains. Proceedings of the AAAI Conference on Artificial Intelligence, 2013.
[Maliah et al., 2014] Shlomi Maliah, Ronen Brafman, Erez Karpas, and Guy Shani. Partially observable online contingent planning using landmark heuristics. In Twenty-Fourth International Conference on Automated Planning and Scheduling, 2014.
[Shekhar et al., 2021a] Shashank Shekhar, Ronen I. Brafman, and Guy Shani. A factored approach to deterministic contingent multi-agent planning. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1):419–427, May 2021.
[Shekhar et al., 2021b] Shashank Shekhar, Ronen I. Brafman, and Guy Shani. Improved knowledge modeling and its use for signaling in multi-agent planning with partial observability. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13):11954–11961, May 2021.
[Wikipedia contributors, 2020] Wikipedia contributors. Decentralized partially observable Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Decentralized partially observable Markov decision process&oldid=992800884, 2020. [Online; accessed 1-December-2022].
[Wikipedia contributors, 2022a] Wikipedia contributors. Automated planning and scheduling — Wikipedia, the free encyclopedia, 2022. [Online; accessed 2-January-2023].
[Wikipedia contributors, 2022b] Wikipedia contributors. Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Markov decision process&oldid=1124829194, 2022. [Online; accessed 1-December-2022].
[Wikipedia contributors, 2022c] Wikipedia contributors. Partially observable Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Partially observable Markov decision process&oldid=1104376990, 2022. [Online; accessed 1-December-2022].
AdAzT4oBgHgl3EQfTPx3/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,259 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf,len=258
|
| 2 |
+
page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 3 |
+
page_content='01246v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 4 |
+
page_content='AI] 3 Jan 2023 Efficient method for handling diverse agents in QDec-POMDPs Nitsan Soffair Ben Gurion University soffair@post.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 5 |
+
page_content='bgu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 6 |
+
page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 7 |
+
page_content='il Abstract The SOTA algorithms for addressing QDec- POMDP issues, QDec-FP and QDec-FPS, are un- able to effectively tackle problems that involve dif- ferent types of sensing agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 8 |
+
page_content=' We propose a new algorithm that addresses this issue by requiring agents to adopt the same plan if one agent is unable to take a sensing action but the other can.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 9 |
+
page_content=' Our algo- rithm performs significantly better than both QDec- FP and QDec-FPS in these types of situations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 10 |
+
page_content=' 1 Introduction Automated planning and scheduling [Wikipedia contributors, 2022a] is a field of artificial intelligence that deals with creating and implementing strate- gies or action sequences for intelligent agents, autonomous robots, and unmanned vehicles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 11 |
+
page_content=' It involves finding and optimizing solutions in complex multidimensional spaces and is closely related to decision theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 12 |
+
page_content=' Planning can be done offline in known environments, but in unknown environments, the strategy may need to be revised online and models and policies may need to be adapted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 13 |
+
page_content=' 2 Background 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 14 |
+
page_content='1 MDP An MDP [Wikipedia contributors, 2022b] is a 4-tuple (S, A, P, R) where S is the state space, A is the action space, P is the probability that action a in state s will lead to the next state, R is the immediate reward received after transforming from a state to the next state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 15 |
+
page_content=' A policy function π is a map- ping from state space to action space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 16 |
+
page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 17 |
+
page_content='2 POMDP A POMDP [Wikipedia contributors, 2022c] is a 7-tuple (S, A, T, R, Ω, O, γ) where S is the set of states, A is the set of actions, T is a set of transition probabilities between states, R is the reward function, Ω is a set of observations, O is a set of observation probabilities, γ ∈ [0, 1] is the discount factor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 18 |
+
page_content=' At each time period, the environment is in some state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 19 |
+
page_content=' The agent takes an action a, which causes the environment to transition to the next state with probability T (s|s′, a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 20 |
+
page_content=' At the same time, the agent receives an observation o which depends on the new state of the environment, and on the just taken ac- tion a, with probability O(o|s′, a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 21 |
+
page_content=' Finally, the agent receives a reward r equal to R(s′, a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AdAzT4oBgHgl3EQfTPx3/content/2301.01246v1.pdf'}
|
| 22 |
+
2.3 Dec-POMDP
A Dec-POMDP [Wikipedia contributors, 2020] is a 7-tuple (S, {Ai}, T, R, {Ωi}, O, γ), where S is the set of states, Ai is the set of actions for agent i (so the joint actions are tuples drawn from the Ai), T is the set of transition probabilities between states, Ωi is the set of observations for agent i (the joint observations being tuples drawn from the Ωi), O is the set of observation probabilities, and γ ∈ [0, 1] is the discount factor. At each time step, each agent takes an action; the state updates based on the transition function T, each agent receives an observation based on the observation function O, and a single reward is generated for the whole team based on the reward function R.
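One synchronous step of this model can be sketched as follows; the sample helper (drawing from a distribution) and the callable forms of T, O, and R are assumptions made for illustration.

def dec_pomdp_step(state, joint_action, T, O, R, sample):
    # All agents act simultaneously; the team shares a single reward.
    next_state = sample(T(state, joint_action))
    observations = [sample(O(i, next_state, joint_action))
                    for i in range(len(joint_action))]
    reward = R(state, joint_action)  # one reward for the whole team
    return next_state, observations, reward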
2.4 QDec-POMDP
A QDec-POMDP [Brafman et al., 2013] is a model for representing the decision-making process of multiple agents in a dynamic environment. It consists of a set of agents, states, actions, observations, and a goal. The QDec-POMDP uses policy trees to represent the local plan of each agent, with each node labeled with an action and each branch labeled with an observation. To execute the plan, the agent performs the action at the root of the tree and then uses the subtree labeled with the observation it obtains to guide future action selection.
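A policy tree of this kind maps directly onto a recursive data structure: each node carries an action, and each outgoing branch is labeled by an observation. A minimal execution sketch (the env.step interface is a hypothetical stand-in for the environment):

from dataclasses import dataclass, field

@dataclass
class PolicyNode:
    action: str
    children: dict = field(default_factory=dict)  # observation -> PolicyNode

def execute(node, env):
    # Perform the action at the root, then follow the subtree labeled
    # with the obtained observation to guide future action selection.
    while node is not None:
        obs = env.step(node.action)
        node = node.children.get(obs)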
2.5 SDR
The SDR [Brafman and Shani, 2012] planner is a method for planning under uncertainty in which a single state is chosen from the current belief state and used to create a deterministic classical problem. The resulting plan is then executed until a sensing action is performed, at which point the belief state is updated and the process is repeated. This version of SDR maintains and uses a complete, explicit description of the belief state, though a modified version of the algorithm uses sampling and lazy belief-state maintenance.
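The resulting plan-execute-replan loop can be sketched as below; every helper name (sample_state, determinize, classical_plan, and so on) is a placeholder for the corresponding component, not the authors' API, and belief maintenance for non-sensing actions is omitted from the sketch.

def sdr_loop(belief, goal):
    while not goal_satisfied(belief, goal):
        s = sample_state(belief)  # choose a single state from the belief
        plan = classical_plan(determinize(s), goal)  # deterministic problem
        for action in plan:
            outcome = execute(action)
            if is_sensing(action):
                # A sensing action was performed: update the belief state
                # with the observation and repeat the process.
                belief = update_belief(belief, action, outcome)
                break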
2.6 CPOR
The CPOR [Maliah et al., 2014] algorithm repeatedly selects and executes sensing actions in order to gather information and achieve a goal. The planner uses a classical projection to plan for the preconditions of each observation action and then executes the action. The selection of the next sensing action is based on an estimate of the myopic value of information, i.e., the value that will be achieved by executing the action without considering future observations. This value is calculated using the number of disjunctive action landmarks that can be achieved following the sensing action.
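The selection step therefore amounts to scoring each applicable sensing action by the number of disjunctive action landmarks it unlocks and taking the maximizer; a sketch with hypothetical helpers:

def choose_sensing_action(state, sensing_actions):
    def myopic_value(a):
        # Value achieved right after executing a, ignoring future
        # observations: count the achievable disjunctive action landmarks.
        return len(achievable_landmarks(apply_action(state, a)))
    return max(sensing_actions, key=myopic_value)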
2.7 Factored planning
The algorithm of [Shekhar et al., 2021a] first creates a single-agent team problem by treating all actions and observations as if they were performed by a single combined agent. This results in a team solution tree, which is then projected onto each individual agent. Each agent then tries to generate a local policy that includes the projected sub-tree as a solution. If all agents are able to solve their local problems, the actions are aligned and a solution is returned. If one of the agents cannot solve its problem, a new team solution is generated and the process is repeated. If no new team solution is possible, the process fails.
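The overall loop can be sketched as follows; the helpers are placeholders for the paper's components, and backtracking is represented here by simply forbidding the failed team solution.

def factored_plan(problem):
    while True:
        team_tree = solve_team_problem(problem)  # single combined agent
        if team_tree is None:
            return None  # no new team solution is possible: fail
        local = {i: extend_to_local_policy(project(team_tree, i), i)
                 for i in problem.agents}
        if all(local.values()):
            return align_actions(local)  # all local problems solved
        forbid(problem, team_tree)  # backtrack; generate a new team solution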
2.8 QDec-FP
QDec-FP [Shekhar et al., 2021b] is a three-stage process for solving multi-agent problems. In the first stage, a team solution is generated by treating all actions as if they were executed by a single meta-agent. In the second stage, the projection of the team solution is extended for each individual agent. Finally, in the third stage, the single-agent plan trees are aligned.
2.9 QDec-FPS
In QDec-FPS [Shekhar et al., 2021b], the SDR translation maintains two knowledge propositions for each proposition, representing that the agent knows it to be true or knows it to be false. It also transforms the preconditions of actions into propositions that must be known to be true in all possible worlds. In addition, QDec-FPS allows agents to communicate by signaling each other: an agent sets the value of a variable that can be sensed by other agents, allowing them to reason about the value of a proposition they cannot sense directly.
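The knowledge compilation doubles the proposition set; a small illustration of the translation (the naming scheme is invented for the example):

def knowledge_literals(p):
    # Each proposition p yields "knows p is true" and "knows p is false".
    return f"K_{p}", f"K_not_{p}"

def compile_precondition(pre):
    # Preconditions must be known to be true in all possible worlds.
    return [knowledge_literals(p)[0] for p in pre]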
3 Algorithm
The algorithm consists of two steps. In the first step, we prepare the environment by determining the sensory capabilities of each agent. In the second step, we use QDec-FP to create a team plan, ensuring that any actions that rely on observations an agent cannot make are eliminated. The subsequent steps are identical to those in QDec-FP.
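A sketch of these two steps, assuming a can_sense capability predicate and a required_obs helper that lists the observations an action relies on (both are illustrative names, not the paper's code):

def plan_with_sensing_limits(problem, can_sense, required_obs):
    # Step 1: determine the sensory capabilities of each agent.
    for agent in problem.agents:
        agent.observations = [o for o in problem.observations
                              if can_sense(agent, o)]
    # Step 2: eliminate actions that rely on observations the acting
    # agent cannot make, then hand the problem to QDec-FP.
    for agent in problem.agents:
        agent.actions = [a for a in agent.actions
                         if all(o in agent.observations
                                for o in required_obs(a))]
    return qdec_fp(problem)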
4 Domains

4.1 Box-pushing
There is a grid with boxes that need to be moved to locations outside of the column they currently occupy. One agent can push a light box, but two agents are required to push a heavy box. The agents can vary in their abilities and can be assigned to push different boxes.
4.2 Table-mover
The system includes several connected rooms, tables, and agents that can move between the rooms. The exact locations of the tables are not known at the beginning, and the agents must move them to their designated locations. The agents can have different capabilities for sensing and manipulating objects. All actions involving the manipulation of tables, including moving, lifting, and dropping them, require the collaboration of at least two agents.
5 Results
The experiments were run on a computer with a 4-core processor running at 2.40 GHz. The domain of each experiment could be either homogeneous or heterogeneous, denoted HM and HT, respectively. The variables measured in the experiments were the number of backtracks (#bts), the time needed for the planning process, the maximum tree width, and the maximum tree height. The results are averages over 10 experiments. For each criterion, the winner between QDec and the variant is noted in bold. If the solver was unable to solve a problem, this is indicated by an asterisk.
5.1 Box-pushing
A grid of size 3 with 1 box is represented by B1(3).
QDec-FP

type  domain   #bts   time   width  height
HM    B1(3)    0      4.3    7.6    20.1
HM    B2(4)    0      7.8    15.6   18.6
HT    B4(3)    8.5    14.1   4      8.4
HT    B5(3)    11.1   40.8   7.4    12.9
HT    B6(3)    17.5   77.6   8      14.4
HT    B9(5)    36.5   7.7M   27     25
HT    B10(5)   *      *      *      *

QDec-FP variant

type  domain   #bts   time   width  height
HM    B1(3)    0      3.9    7.2    18.8
HM    B2(4)    0      8.1    15.6   20.4
HT    B4(3)    7.7    9.6    4      10.1
HT    B5(3)    4.8    14.4   6      11.8
HT    B6(3)    18.2   51.1   8      14.6
HT    B9(5)    30.5   1.5M   27.25  26.75
HT    B10(5)   19.5   689K   27     27.25
The variant incurs no additional cost when there are no backtracks. However, when backtracking is necessary, the variant allows for faster planning and produces a higher-quality tree. This is because the variant focuses on the failing agent, speeds up the backtracking process, ensures that branching is equal among agents who cannot sense their surroundings, and enables the creation of valid team plans through the use of CPOR.
QDec-FPS

type  domain   #bts   time   width  height
HM    B1(3)    0      3.4    6.1    16.9
HM    B2(4)    0      7      9      17
HT    B4(3)    0      1.2    4      8.4
HT    B5(3)    1.7    10.6   7.2    13.3
HT    B6(3)    0      3.5    7.6    16
HT    B7(4)    *      *      *      *
HT    B8(4)    *      *      *      *
HT    B9(4)    *      *      *      *

QDec-FPS variant

type  domain   #bts   time   width  height
HM    B1(3)    0      4      5.7    18.2
HM    B2(4)    0      8.3    11.7   19.4
HT    B4(3)    0.8    2.2    4      8.8
HT    B5(3)    1.3    7      6.4    12.6
HT    B6(3)    0      4.2    8      16
HT    B7(4)    0      3.7    6      9.5
HT    B8(4)    0.2    5      5.6    9.5
HT    B9(4)    0      8.3    13     19
In the case of no backtracks, the variant has a slower running time and lower-quality trees. In the case of one or more backtracks, the variant has a faster running time and higher-quality trees. This is because the variant has fewer agent constraints and larger SDR problems, which makes the backtrack mechanism faster and allows for better team plans.
5.2 Table-mover
T1(3) refers to a grid of size 3 containing only 1 table.
QDec-FP

type  domain   #bts   time    width  height
HM    T1(3)    0      9.6     8      19.8
HM    T3(4)    1.3    37.4    14.1   33.6
HT    T6(3)    8.7    7.3     2      8
HT    T9(3)    12.5   66.1    8      21
HT    T11(5)   39.5   879K    13     25

QDec-FP variant

type  domain   #bts   time    width  height
HM    T1(3)    0      4.4     8      20
HM    T3(4)    1.1    26.6K   14.2   33.7
HT    T6(3)    9      11.1K   2      7.6
HT    T9(3)    10     26K     8      18.67
HT    T11(5)   15     176K    11.25  24.25
The QDec-FP variant is efficient on simple problems, adding no overhead. It also performs faster and more efficiently on complex problems, using fewer backtracks and producing smaller plan trees.
QDec-FPS

type  domain   #bts   time    width  height
HM    T1(3)    0      4.5     7.8    18
HM    T3(4)    0      13.6    13.8   27.5
HT    T6(3)    0      0.7     2      6.4
HT    T9(3)    0      3.7     8      16
HT    T11(5)   0      15.5K   12     33
HT    T12(5)   *      *       *      *
HT    T13(5)   *      *       *      *
HT    T14(5)   *      *       *      *

QDec-FPS variant

type  domain   #bts   time    width  height
HM    T1(3)    0      5.5     7.9    18
HM    T3(4)    0      13.7    13.4   17.4
HT    T6(3)    0      0.7     2      7.2
HT    T9(3)    1      7.6     8      25
HT    T11(5)   3      42K     14     24.5
HT    T12(5)   0      32K     8      20
HT    T13(5)   5      233K    16     27
HT    T14(5)   10     140K    16     39

The QDec-FPS variant is able to handle more complex problems and solve them faster, with only a small overhead on simple problems.
6 Conclusion
The QDec-FP variant is a planning algorithm that is efficient on both simple and complex problems, producing high-quality tree plans. In cases requiring backtracking, it speeds up the process and creates better team plans. The QDec-FPS variant is likewise able to handle complex problems efficiently, with only a small overhead on simple problems.
7 Further work
The variant is not capable of addressing the need for complex communication between agents in certain domains.
References

[Brafman and Shani, 2012] R. I. Brafman and G. Shani. Replanning in domains with partial information and sensing actions. Journal of Artificial Intelligence Research, 45:565–600, December 2012.

[Brafman et al., 2013] Ronen I. Brafman, Guy Shani, and Shlomo Zilberstein. Qualitative planning under partial observability in multi-agent domains. In Proceedings of the AAAI Conference on Artificial Intelligence, 2013.

[Maliah et al., 2014] Shlomi Maliah, Ronen Brafman, Erez Karpas, and Guy Shani. Partially observable online contingent planning using landmark heuristics. In Twenty-Fourth International Conference on Automated Planning and Scheduling, 2014.

[Shekhar et al., 2021a] Shashank Shekhar, Ronen I. Brafman, and Guy Shani. A factored approach to deterministic contingent multi-agent planning. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1):419–427, May 2021.

[Shekhar et al., 2021b] Shashank Shekhar, Ronen I. Brafman, and Guy Shani. Improved knowledge modeling and its use for signaling in multi-agent planning with partial observability. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13):11954–11961, May 2021.

[Wikipedia contributors, 2020] Wikipedia contributors. Decentralized partially observable Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Decentralized_partially_observable_Markov_decision_process&oldid=992800884, 2020. [Online; accessed 1-December-2022].

[Wikipedia contributors, 2022a] Wikipedia contributors. Automated planning and scheduling — Wikipedia, the free encyclopedia, 2022. [Online; accessed 2-January-2023].

[Wikipedia contributors, 2022b] Wikipedia contributors. Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Markov_decision_process&oldid=1124829194, 2022. [Online; accessed 1-December-2022].

[Wikipedia contributors, 2022c] Wikipedia contributors. Partially observable Markov decision process — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Partially_observable_Markov_decision_process&oldid=1104376990, 2022. [Online; accessed 1-December-2022].
C9AyT4oBgHgl3EQf4fqw/content/tmp_files/2301.00788v1.pdf.txt
ADDED
@@ -0,0 +1,677 @@
Graphical Abstract

Electrochemical Polishing of Chemical Vapor Deposited Niobium Thin Films

Zeming Sun, Mingqi Ge, James T. Maniscalco, Victor Arrieta, Shawn R. McNeal, Matthias U. Liepe

arXiv:2301.00788v1 [cond-mat.mtrl-sci] 2 Jan 2023

[Graphical abstract image: chemical vapor deposition followed by electrochemical polishing (scale bar 10 um) yields a functional Nb surface for superconducting RF.]

Highlights
|
| 13 |
+
Thin Films
|
| 14 |
+
Zeming Sun, Mingqi Ge, James T. Maniscalco, Victor Arrieta, Shawn R.
|
| 15 |
+
McNeal, Matthias U. Liepe
|
| 16 |
+
• Electrochemical polishing (EP) is demonstrated to effectively minimize
|
| 17 |
+
the surface roughness for chemical vapor deposited (CVD) niobium thin
|
| 18 |
+
films.
|
| 19 |
+
• CVD niobium films contain steps, kinks, and pyramidal features, re-
|
| 20 |
+
sulting in large surface roughness. EP polishing of these films involves
|
| 21 |
+
both macroscale and microscale smoothing.
|
| 22 |
+
• A probable dependence on crystal orientation during EP is observed,
|
| 23 |
+
indicating strong influences from locally enhanced current density and
|
| 24 |
+
thickness variations of oxide dielectrics.
|
| 25 |
+
• Obtaining the required surface conditions by a combined EP-CVD tech-
|
| 26 |
+
nology marks a feasible application of niobium thin films in supercon-
|
| 27 |
+
ducting RF.
|
| 28 |
+
|
| 29 |
+
Electrochemical Polishing of Chemical Vapor Deposited
|
| 30 |
+
Niobium Thin Films
|
| 31 |
+
Zeming Suna,∗, Mingqi Gea,1, James T. Maniscalcoa,2, Victor Arrietab,
|
| 32 |
+
Shawn R. McNealb, Matthias U. Liepea,∗∗
|
| 33 |
+
aCornell Laboratory for Accelerator-Based Sciences and
|
| 34 |
+
Education, Ithaca, 14853, NY, USA
|
| 35 |
+
bUltramet, Pacoima, 12173, CA, USA
|
| 36 |
+
Abstract
|
| 37 |
+
Combining chemical vapor deposition (CVD) with electrochemical polish
|
| 38 |
+
(EP) operations is a promising route to producing performance-capable su-
|
| 39 |
+
perconducting films for use in the fabrication of cost-effective components
|
| 40 |
+
for superconducting radiofrequency (SRF) particle accelerators and super-
|
| 41 |
+
conducting quantum computers. The post-deposition EP process enables a
|
| 42 |
+
critically necessary reduction in surface roughness of niobium thin films to
|
| 43 |
+
promote optimal superconducting surface conditions. In this work, surface
|
| 44 |
+
morphology, roughness, and crystal orientation of the CVD-grown and EP-
|
| 45 |
+
polished niobium films were investigated. The grain growth and polishing
|
| 46 |
+
mechanisms were analyzed. The CVD films were found to comprise steps,
|
| 47 |
+
kinks, and pyramidal features, resulting in undesirable large peak-to-valley
|
| 48 |
+
distances. The electrochemical polish was demonstrated to significantly di-
|
| 49 |
+
minish the height of pyramids and effectively minimize the overall surface
|
| 50 |
+
roughness.
|
| 51 |
+
In contrast to buffered chemical polishing (BCP), EP results
|
| 52 |
+
showed a probable dependence on crystal orientation, suggesting this process
|
| 53 |
+
was influenced by locally enhanced current density and thickness variations
|
| 54 |
+
of oxide dielectrics. These understandings identify the EP principles tied
|
| 55 |
+
to CVD-grown Nb films that allow further refinement of surface profiles for
|
| 56 |
+
film-based SRF applications.
|
| 57 |
+
∗zs253@cornell.edu
|
| 58 |
+
∗∗mul2@cornell.edu
|
| 59 |
+
1Now at Jefferson Lab
|
| 60 |
+
2Now at SLAC
|
| 61 |
+
Preprint submitted to Applied Surface Science
|
| 62 |
+
January 3, 2023
|
| 63 |
+
|
| 64 |
+
Keywords:
|
| 65 |
+
Electrochemical polishing, chemical vapor deposition, niobium,
|
| 66 |
+
thin film, surface roughness, crystal orientation
|
| 67 |
+
1. Introduction
|
| 68 |
+
Niobium (Nb) is an important superconducting material that finds use in
|
| 69 |
+
superconducting radio-frequency (SRF) cavities, the chamber containing the
|
| 70 |
+
electromagnetic field in modern particle accelerators [1], and in components
|
| 71 |
+
needed in the emerging technological field of quantum computers [2]. SRF
|
| 72 |
+
cavities are critical components in a wide range of applications, including
|
| 73 |
+
synchrotron and free-electron-laser light sources (e.g., Linac Coherent Light
|
| 74 |
+
Source (LCLS)) [3, 4], high energy physics such as in the search for dark mat-
|
| 75 |
+
ter [5], high-precision (< 5 nm) photolithography for semiconductor device
|
| 76 |
+
fabrication [6], and in biopharmaceutical and medical applications [7].
|
| 77 |
+
Since the transition of accelerators from low-gradient normal-conducting
|
| 78 |
+
RF to high-gradient superconducting RF, bulk Nb remains as the dominant
|
| 79 |
+
cavity technology used to obtain high accelerating gradients. Bulk Nb cavi-
|
| 80 |
+
ties are comprised of high-purity Nb with a residual resistivity ratio (RRR)
|
| 81 |
+
exceeding 300 and require high-cost triple arc-melted RRR-500+ start ma-
|
| 82 |
+
terials for fabrication. One promising direction for realizing cost-effective
|
| 83 |
+
cavities for SRF applications is the use of thin-film Nb coatings applied to
|
| 84 |
+
low-cost, high-thermal-conducting copper (Cu) cavity substrates. The thin-
|
| 85 |
+
film technology is viable since the active region for an SRF cavity is dictated
|
| 86 |
+
by the field penetration depth, typically, tens to hundreds of nanometers at
|
| 87 |
+
the inner surface, e.g., ∼ 40 nm for Nb. Additionally, due to the improved
|
| 88 |
+
thermal conductance, the Nb-coated Cu cavity promises enhanced thermal
|
| 89 |
+
stability during operation. The structural Cu cavity wall enables the out-
|
| 90 |
+
ward diffusion and removal of waste heat, while the Nb film functions as the
|
| 91 |
+
critical component interacting with the RF field. Controlling cavity surface
|
| 92 |
+
roughness and mitigating surface defects are important for achieving high-
|
| 93 |
+
quality factors as localized heat generated by these features can result in the
|
| 94 |
+
cascading loss of the superconducting state on the cavity surface, an effect
|
| 95 |
+
known as “quench” [8].
|
| 96 |
+
Chemical vapor deposition (CVD) of Nb films, in addition to sputter-
|
| 97 |
+
ing [9, 10, 11] and epitaxy [12], were studied on silicon-carbide and graphite
|
| 98 |
+
substrates using NbCl5 and NbBr5 precursors [13, 14, 15]. This vapor-based
|
| 99 |
+
technique is suitable for coating the inner surface of cavities with intricate
|
| 100 |
+
2
|
| 101 |
+
|
| 102 |
+
Figure 1: (a) Picture of a Cu SRF cavity coated with CVD Nb thin films at the inner
|
| 103 |
+
surface. (b) Cross-sectional EDS mapping of CVD Nb films on Cu. Samples were cut
|
| 104 |
+
from the cavity. Inserts show locations of Cu substrate and Nb films.
|
| 105 |
+
shapes. Ultramet developed advanced CVD processing to deposit high-RRR
|
| 106 |
+
(> 280) and used rapid CVD process capabilities to produce freestanding
|
| 107 |
+
testable bulk Nb 3.9 GHz cavities [17]. Ultramet, working with Cornell’s
|
| 108 |
+
SRF Group, adapted the advanced CVD process technology to vapor de-
|
| 109 |
+
posit thick-, and thin-film Nb on 5-inch diameter plates and then scaled the
|
| 110 |
+
process to form Nb films on the interior surface of 1.3 GHz elliptical Cu cav-
|
| 111 |
+
ities of the full-scale single-cell ILC design (Fig. 1a) [17, 16]. Thin-film CVD
|
| 112 |
+
Nb coatings produced by Ultramet in this work demonstrated a high-quality
|
| 113 |
+
factor above 1010 at 2 K and a low residual resistance of ∼ 5 nΩ [16]. Fig. 1b
|
| 114 |
+
shows the results of the elemental mapping via an energy-dispersive X-ray
|
| 115 |
+
spectroscope (EDS), over the cross-section of a sample cut from the Nb/Cu
|
| 116 |
+
cavity that had been electrochemically polished. The excellent Nb-Cu inter-
|
| 117 |
+
face in the image confirms the ∼ 400 µm Nb film is strongly bonded to the
|
| 118 |
+
Cu substrate, and no Cu inclusions are observed in the film. However, a large
|
| 119 |
+
thickness variation of ∼ 150 µm remains even after the electrochemical pol-
|
| 120 |
+
ishing operation. The surface roughness can locally enhance the magnetic
|
| 121 |
+
field and negatively impact the RF performance, due for example, to the
|
| 122 |
+
degradation of quality factors (Q0) at high accelerating gradients [18]. Also,
|
| 123 |
+
this type of field enhancement can cause a quench and limit the maximum
|
| 124 |
+
field capability due to the permanent loss of superconductivity.
|
| 125 |
+
As such, engineering a smooth RF surface is required. Previous investigations on bulk Nb involved mechanical polish [19], the use of chemicals such as buffered chemical polish (BCP) [20], and electrochemical polish (EP) [21]. Among these methods, the EP process, which employs 9 parts concentrated H2SO4 to 1 part 48% HF under a DC current, is typically performed as a critical surface finish, yielding an encouraging result of 300 nm roughness on bulk Nb [22]. A review of the literature suggests that an investigation into EP processing to condition Nb thin-film surfaces for SRF applications has not yet been done.

Figure 2: (a,b) Mechanisms of electrochemical polishing on a niobium surface using H2SO4/HF electrolytes: (a) macropolishing and (b) micropolishing. (c) Schematic of the electrochemical polishing system and (d) polishing current oscillation.
Electrochemical polishing includes two categories regarding surface feature size: macropolishing and micropolishing. Landolt et al. [23, 24] and Hryniewicz et al. [25] have reviewed the fundamental aspects of each. As shown in Fig. 2a, the local current density is significantly enhanced at positions with a smaller radius of curvature, as described via [26]

    σ = 2ε∆V / { R [ exp(−2∆n/R) − 1 ] },   ∆n → 0,    (1)

where σ is the surface charge density, R is the radius of curvature, ∆n is a limited distance normal to the surface, ∆V is the potential difference between the two endpoints of the distance ∆n, and ε is the electric permittivity. Thus, for a surface with high roughness, the leveling of the peak and recessed regions via macropolishing is primarily determined by the difference in their current densities. In contrast, a submicrometer-roughness surface has large radius-of-curvature features (closer to R1 in Fig. 2a), leading to a more uniform electrical field between the peak and recessed regions and making micropolishing dominant by way of controlling the mass transport of species such as reactants (water, F−, SO4^2−) and products (HNbF6 and other complexes).
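To make the curvature dependence in Eq. (1) concrete, the short sketch below evaluates |σ| at a fixed normal distance ∆n for features of decreasing radius. This is a minimal illustration only, not from the paper; the values of ∆V, ∆n, and R are arbitrary assumptions. It reproduces the trend described above: the magnitude of the surface charge density, and hence the local current density, grows as the radius of curvature shrinks.

```python
import math

def sigma(R, dV=1.0, dn=50e-9, eps=8.85e-12):
    """Surface charge density from Eq. (1); R and dn in meters, dV in volts."""
    return 2.0 * eps * dV / (R * (math.exp(-2.0 * dn / R) - 1.0))

# A sharp pyramid tip (R2-like) versus progressively flatter regions (R1-like).
for R in (100e-9, 1e-6, 100e-6):
    print(f"R = {R:9.1e} m  ->  |sigma| = {abs(sigma(R)):.3e} C/m^2")
```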
Numerous studies have been carried out to investigate the transport mechanism in play during polishing operations performed on bulk Nb surfaces [21, 27, 28]. Tian et al. [21, 27] identified the limited transport of F− ions as one mechanism and validated the theoretical interface model illustrated in Fig. 2b, showing a compact Nb2O5 film and an HNbF6 (and other complexes) diffusion layer. A viscous layer and/or dielectric film is formed between the bulk solid and liquid regions, so that the reaction is facilitated at the peak region, where random diffusion of species (F−) is feasible, as compared to the recessed region.
Limitations in applying EP to thin Nb films arise due to the distinctive surface profile and structural properties induced by CVD, which are detailed in this work. For example, a variety of feature sizes appear on the film surface, ranging from large, ∼ 100 µm pyramidal features to several nm-sized kinks and steps, presenting the challenge of smoothing the surface within the limit of allowed polish thickness. Moreover, crystal defects such as dislocations, impurities, and vacancies, together with intrinsic stress in the film, are more common than in bulk Nb. Owing to the defective sites, there is concern over the formation of compact dielectric films as well as a desirable distribution of electric fields. Cu EP studies have reported failure of dielectric formation on a film sample and, hence, a negative polish result as compared to a bulk sample [29]. These challenges motivate us to investigate EP on Nb thin films.

Here we analyze new phenomena tied to the EP treatment of CVD-grown Nb films to further advance the combined EP-CVD technology, paving the way for film-based Nb RF cavities and other superconducting applications. We focus on comparing the characteristics of as-deposited and electrochemically polished films. Specifically, we investigate surface morphology, roughness, and grain orientation. Also, we discuss the CVD growth mode, since the unique surface features observed are critical for determining the mechanism of a subsequent EP process. Moreover, the EP results to date indicate a probable dependence on crystal orientation, and analysis is provided in comparison with the chemically-controlled BCP treatment.

Figure 3: Comparison of surface SEM images for CVD Nb films on the Mo substrate (a,c) before and (b,d) after EP under different fields of width: (a,b) 100 µm, (c,d) 500 µm.
2. Experimental section

Thin films (> 100 µm) of Nb on molybdenum (Mo) substrates were prepared by a low-temperature CVD process. The CVD Nb thin films were provided by Ultramet, and the recipes are not disclosed. The as-deposited films were electrochemically polished by nominally 10 µm in thickness using a 2-electrode system (Fig. 2c) consisting of the CVD Nb/Mo as an anode, Al as a cathode, and an electrolyte of 98% H2SO4 and 48% HF at a 9:1 volume ratio. The 2-electrode system is commonly used in cavity polishing at Cornell, FNAL, KEK, and other accelerator laboratories [16, 22, 30]. The current oscillation regime (Fig. 2d) was monitored to facilitate the generation and subsequent removal of compact Nb2O5 dielectrics. For reference to EP, samples were polished in a standard BCP (buffered chemical polishing) solution with 48% hydrofluoric, 70% nitric, and 85% phosphoric acids at a volume ratio of 1:1:1.
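As a side note on the monitored current oscillation (Fig. 2d), one simple way to track the oscillation regime during polishing is to estimate its dominant period from the sampled current trace. The snippet below is a hypothetical post-processing sketch: the trace is synthetic stand-in data (the DC level, oscillation frequency, and noise amplitude are made-up assumptions, not measurements), and the FFT-based period estimate is our illustration, not a procedure described by the authors.

```python
import numpy as np

fs = 10.0                          # sampling rate [Hz], assumed
t = np.arange(0.0, 20.0, 1 / fs)   # a 20 s window, as plotted in Fig. 2d
rng = np.random.default_rng(0)
# Synthetic EP current trace: slow oxide growth/removal cycles plus noise.
current = (0.05 + 0.01 * np.sin(2 * np.pi * 0.3 * t)
           + 0.002 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(current - current.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
f_peak = freqs[np.argmax(spectrum)]
print(f"dominant oscillation: {f_peak:.2f} Hz (period {1 / f_peak:.1f} s)")
```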
To evaluate the surface morphology change, surface and cross-sectional imaging were performed using a Zeiss Gemini scanning electron microscope (SEM) equipped with an in-lens detector under low-voltage regimes (1 – 5 kV). Energy-dispersive X-ray spectroscopy (EDS) was used to determine the chemical information. The surface roughness of the films was measured via an atomic force microscope (AFM, Asylum MFP-3D), but the high (> 100 µm) pyramids affected the measurement, so the AFM results only compare the relatively smooth regions. To obtain an effective comparison, films were placed vertically under the SEM, and the cross-sections of the highest pyramids were imaged and compared. Moreover, high-resolution X-ray diffraction (XRD, Rigaku SmartLab) patterns were collected for analyzing grain orientations. Cu Kα radiation with a wavelength of 0.154 nm was used.
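Given the Cu Kα wavelength quoted above, the Bragg angles of the bcc Nb reflections expected in a θ-2θ scan can be pre-computed. The sketch below is illustrative only; the lattice constant a = 330 pm is taken from Table 1 further below, and Bragg's law λ = 2d sin θ with d = a/√(h² + k² + l²) is used.

```python
import math

a = 0.330    # Nb lattice constant [nm] (330 pm, see Table 1)
lam = 0.154  # Cu K-alpha wavelength [nm]

# bcc-allowed reflections expected for Nb.
for hkl in [(1, 1, 0), (2, 0, 0), (2, 1, 1)]:
    d = a / math.sqrt(sum(i * i for i in hkl))               # interplanar spacing
    two_theta = 2 * math.degrees(math.asin(lam / (2 * d)))   # Bragg's law
    print(f"{hkl}: d = {d:.4f} nm -> 2theta = {two_theta:.1f} deg")
```

The resulting angles (roughly 38.5, 55.6, and 69.7 degrees for (110), (200), and (211)) all fall inside the 35 – 75 degree window of the patterns discussed in Section 3.3.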
3. Results and discussion

3.1. Surface morphology

Fig. 3 shows the surface morphology of as-deposited and EP'ed films. As-deposited films (Fig. 3a), although uniformly covering the substrate surface, exhibit features of facets and steps. Also notably, pyramid-like structures are widely observed on the surface when inspected under large fields of width (Fig. 3c). The cross-section of the largest pyramid observed is presented in Fig. 4a. To summarize, there are two sources of surface roughness: (1) pyramids as high as 100 µm; (2) step-kink structures appearing both in the relatively flat regions and on the pyramids. Note that small but sharp features, e.g., steps, would negatively affect the RF performance due to strong local field enhancement. Hence, polishing the film surface is necessary to improve the surface condition.

Figure 4: Comparison of cross-sectional SEM images for the largest pyramidal features observed (a) before and (b) after EP. Inserts show closer inspections of (a) the CVD pyramid and (b) the relatively smooth regions after EP.
Figure 5: Atomic models showing the terrace-step-kink formation on the Nb (110) plane. Blue, red, and green atoms indicate the 1st, 2nd, and 3rd atomic layers, respectively.

Regarding the step-kink and pyramid formation, we analyze the film growth mechanism. Based on a typical terrace-step-kink model [31], nucleation events occur on multiple sites, and a subsequent island growth mode forms the pyramid structure. As shown in Fig. 5, the Nb atoms, as a result of the chemical reactions of precursors, are adsorbed on a terrace (the flat surface) and then diffuse to a kink site (a site at the terrace edge) where the surface energy is typically low. If the lateral diffusion of adatoms (adsorbed atoms) on the terrace is not sufficient, these adatoms build up into pyramid islands together with the appearance of steps. Such effects are further enhanced once large islands have formed, since adatoms cannot diffuse to and join existing islands. Consequently, the terrace-step-kink and pyramid structures predominate on the CVD Nb surface.
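To illustrate how limited adatom diffusion produces mounds and steps, the following toy simulation is a purely illustrative sketch, not the authors' model: it implements a 1D solid-on-solid deposition in which each arriving atom may hop a few times toward a lower neighboring column, mimicking incorporation at kink sites. The lattice size, deposition count, and hop budget are arbitrary assumptions. With few hops allowed, the surface roughens into mounds; with a larger hop budget, it stays much smoother.

```python
import random

def grow(n_sites=200, n_atoms=20000, hops=1, seed=1):
    """1D solid-on-solid growth: deposit an atom, then let it relax toward
    lower neighboring columns up to `hops` times (kink-site incorporation)."""
    random.seed(seed)
    h = [0] * n_sites
    for _ in range(n_atoms):
        i = random.randrange(n_sites)
        for _ in range(hops):
            lower = [j for j in ((i - 1) % n_sites, (i + 1) % n_sites)
                     if h[j] < h[i]]
            if not lower:
                break          # already at a local minimum (kink-like site)
            i = random.choice(lower)
        h[i] += 1
    mean = sum(h) / n_sites
    return (sum((x - mean) ** 2 for x in h) / n_sites) ** 0.5

for hops in (0, 1, 10):
    print(f"hops = {hops:2d}  ->  rms roughness = {grow(hops=hops):.2f} layers")
```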
After CVD, EP polishing was conducted to alter the surface morphology in two respects, i.e., removing or smoothing large pyramid structures, and eliminating surface steps and kinks. As demonstrated in Figs. 3b and 3d, the edges and sharp features are greatly rounded after EP. Closer inspection of the cross-sections (Fig. 4b) shows that the regions that were relatively flat upon deposition are further smoothed; small islands are completely dissolved, while some large islands as high as 50 µm remain, but their surfaces are also smoothed. This implies that kink and step sites, regardless of their locations, favor the onset of polishing, leading to a smooth and less-edged surface.

Due to the ex situ challenge, we compare the heights of the highest pyramids observed before and after EP. For example, the pyramid height prior to polishing is as high as ∼ 100 µm, whereas the highest observed after polishing is ∼ 50 µm. This empirical comparison suggests the pyramids are polished by more than half in height, owing to intense macropolishing at these pyramids with a small radius of curvature (closer to R2 in Fig. 2a).

Figure 6: Representative AFM images taken on the relatively flat regions (a) before and (b) after EP.
High-magnification images taken on the CVD pyramid (insert of Fig. 4a) show that the pyramid consists of small nuclei (5 – 10 µm) and exhibits a similar morphology of steps and kinks as the other, relatively flat regions. After EP (Fig. 4b), these features disappear, resulting in a smooth pyramid surface. This observation indicates micropolishing is also involved, through leveling the height difference at steps and kinks and dissolving the small nuclei. Note that our primary motivation is to diminish the sharp features; while the existence of tall pyramids is not ideal, the smoothed pyramids would less severely impact the field enhancement.
3.2. Surface roughness

The quantification of surface roughness using AFM on a > 10 µm uneven surface is challenging owing to the instrumental capability of the depth of field. The cross-sectional SEM images in Fig. 4 provide an empirical comparison of the height change of pyramid structures before and after EP. Here, the AFM images were taken, as indications of the roughness change, on the relatively flat regions.

As shown in Fig. 6, the smooth areas (denoted in red) are prominently enlarged after EP in the representative 20 × 20 µm² areas. Taking account of some inescapable small islands, the as-deposited samples have a large peak-to-valley distance of 4.2 µm. In contrast, the EP'ed samples exhibit a reduced value of 2.6 µm. Other surface parameters again indicate a ∼ 50% reduction of surface roughness, e.g., mean deviation (Ra) from 590 nm to 270 nm, and root mean square (Rq) from 740 nm to 390 nm. Ra values from EP-smoothed regions on the film are close to the typical value (∼ 300 nm) from an EP'ed bulk surface, which indicates the effectiveness of EP polishing when applied to thin films. Future work should focus on the removal of the remaining pyramid features.
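For reference, the Ra and Rq values quoted above follow the standard definitions: Ra is the mean absolute deviation of the surface heights from the mean plane, and Rq is their root-mean-square deviation. The minimal sketch below shows how such values are computed from an AFM height map; the map here is synthetic random data standing in for the measured scans, so the printed numbers are not the paper's results.

```python
import numpy as np

def roughness(z):
    """Return (Ra, Rq) of a 2D height map z in nm."""
    dz = z - z.mean()
    return np.abs(dz).mean(), np.sqrt((dz ** 2).mean())

# Synthetic stand-in for a 20 x 20 um AFM scan on a 256 x 256 grid.
rng = np.random.default_rng(0)
z = rng.normal(scale=600.0, size=(256, 256))   # heights in nm, assumed
ra, rq = roughness(z)
print(f"Ra = {ra:.0f} nm, Rq = {rq:.0f} nm, "
      f"peak-to-valley = {z.max() - z.min():.0f} nm")
```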
3.3. Crystal orientation

The X-ray diffraction characteristics of electrochemically (EP) and chemically (BCP) polished CVD Nb films were compared (Fig. 7). The as-deposited films exhibit a predominant (110) peak, from epitaxy on the cubic Mo substrate, along with (100) and (211) diffractions. Fig. 8 illustrates the formation mechanisms of the (100) and (211) planes in addition to the (110) epitaxy. In a body-centered cubic (bcc) structure, the [111] direction is the closest packed, and (110) planes can easily slip along this direction, yielding (100) planes (Fig. 8a). The Burgers vector of the dislocations between the (100) and (110) planes is a/2 [111]. Additionally, by rotating around the [111] axis by 70.5 degrees, the (211) and (110) planes can form a twin structure (Fig. 8b). These twin structures are extensively observed under the SEM and are marked by dashed lines in Fig. 3a.

Figure 7: XRD patterns of (a) as-deposited, (b) EP'ed, and (c) BCP'ed CVD Nb films. Intensities are normalized to the highest diffraction peak, as referenced to the as-deposited films.

Moreover, we observed an orientation dependence during EP. For example, as shown in Fig. 7, the highest diffraction peak changed to the (100) planes from the initially highest (110) planes. Intensities were then normalized to that of the (100) planes. Indeed, the (110) intensity was reduced by half, and the (211) intensity likewise dropped by more than half. (The shift to smaller diffraction angles after EP indicates that the compressive stress in the film is relieved.)

The orientation-dependence behaviors, however, do pose some subtle questions for the conventional interpretation: the suppression of influences from crystal orientation is expected in micropolishing. In general, electropolishing is controlled by electrical, reaction, and diffusion processes. In micropolishing, the limiting factor is nevertheless the mass transport rather than charge transfer [23]. The diffusion of species is a random motion and hence is believed to be orientation-independent, whereas reaction-controlled polishing is typically orientation-dependent, since the planar density, which characterizes the average number of atoms in a given plane, differs as summarized in Table 1.

Figure 8: Atomic models showing the formation mechanisms of (a) (100) and (b) (211) planes in addition to (110) planes. The lattice constant is denoted as "a", and the Burgers vector is denoted as "b".
Table 1: Planar density and plane spacing of the (110), (100), and (211) planes in Nb. The lattice constant (a) is 330 pm.

Plane orientation   (110)     (100)    (211)
Planar density      √2/a²     1/a²     √6/(3a²)
Plane spacing       √2a/2     a/2      √6a/6
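A quick numerical reading of Table 1 (a minimal sketch; only the lattice constant a = 330 pm comes from the caption) shows how different the areal atomic densities of the three planes are, which is what makes a reaction-controlled process orientation-sensitive.

```python
import math

a = 0.330  # lattice constant [nm], from the Table 1 caption

# (planar density, plane spacing) from the formulas in Table 1.
planes = {
    "(110)": (math.sqrt(2) / a**2,       math.sqrt(2) * a / 2),
    "(100)": (1.0 / a**2,                a / 2),
    "(211)": (math.sqrt(6) / (3 * a**2), math.sqrt(6) * a / 6),
}
rho_110 = planes["(110)"][0]
for name, (rho, d) in planes.items():
    print(f"{name}: {rho:5.2f} atoms/nm^2 "
          f"(x{rho / rho_110:.2f} of (110)), spacing {d:.4f} nm")
```

The (100) and (211) planes hold only about 71% and 58% of the (110) areal density, respectively, consistent with their more pronounced intensity reduction under the reaction-controlled BCP discussed next.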
To test whether the orientation dependence during EP arises from a reaction-controlled process, we carried out BCP polishing, which undergoes similar chemical reactions as EP [32]. From the XRD (Fig. 7), the (100) and (211) planes, which have small planar densities, show a pronounced reduction in intensity after BCP as compared to the (110) planes. This BCP behavior significantly differs from the EP results; it supports the theory that EP is less reaction-controlled.

We further analyze the possible mechanisms that induce an orientation dependence. Our results have suggested that both macropolishing and micropolishing are involved in the EP process. Local electrical fields, depending on geometry factors, play a major role at the pyramids, where local polishing-current densities are intensified, resulting in large polishing rates. Upon assuming that the statistical distribution of pyramids is uniform, a dominant population of (110)-structured pyramids is indicated by their highest intensity in the as-deposited films (Fig. 7a), and thus the global reduction of pyramids would exhibit a preference for the (110) plane. For example, comparing the pyramid cross-sections in Fig. 4, the FWHM (full width at half maximum) remains the same value of 80 µm after EP, while the height reduces from 100 µm to 50 µm, suggesting the polishing substantially occurs in the perpendicular direction, i.e., the [110] orientation.

Another possible mechanism is based on the conventional theory (i.e., mass transport controls EP); although the diffusion of species is orientation-independent, the oxide growth during EP (Fig. 2b) varies with orientation. A large local polishing current produces thicker oxide layers and hence larger polishing rates – this scenario would produce an outcome similar to that discussed above. Regardless of influences from the local polishing current, the oxide growth rate on the (110) plane is found to be higher than on other planes [33, 34]. A thicker oxide layer on the (110) plane would induce a larger amount of removal from this plane during EP. Overall, preferential polishing is critical since it might provide selective polishing capabilities, and further investigations are necessary to confirm the mechanisms indicated by this work.
4. Conclusions

In summary, electrochemical polishing (EP) was successfully performed on chemical-vapor-deposited (CVD) Nb films to reduce the surface roughness, and compared with buffered chemical polishing (BCP). The characteristics of surface morphology, roughness, and crystal orientation have been analyzed to reveal the CVD growth and EP polishing mechanisms.

As-deposited films consist of relatively flat and pyramid-structured regions, which cause a large peak-to-valley distance of > 100 µm. The observation of steps and kinks suggests that a terrace-step-kink model is responsible for the generation of pyramids. Also, the CVD crystals exhibit a large amount of (110) planes and some slip-induced (100) planes, as well as (211) twinning planes.

EP is demonstrated to effectively minimize the mean surface roughness in the relatively flat regions and to significantly reduce the height of pyramids, i.e., by more than half. These smoothening behaviors are critical to enhancing the RF performance of CVD Nb-based cavities. Besides the reduction of pyramid height, the steps and kinks are found to disappear on the pyramids, indicating the involvement of both macroscale and microscale smoothing during the EP polish. The reaction-controlled mechanism is negligible in EP, as suggested by a comparison with chemical polishing (BCP). The locally enhanced current density and the thickness variation of the oxide dielectrics might be the controlling factors in the CVD-film polishing, leading to the crystal-orientation dependence observed in this work. Overall, EP proceeds via more complex scenarios for CVD Nb films, which involve the removal of sharp features both above and below the micrometer scale.

Our demonstration of the EP-CVD technology represents a viable application of Nb thin films for emerging superconducting applications.
Data availability statement

The data that support the findings of this study are available upon reasonable request from the authors.

Conflicts of interest

V.A. and S.R.M. work at Ultramet. Z.S., M.G., J.T.M., and M.U.L. declare no competing financial interests.

Acknowledgments

This work is funded by the U.S. Department of Energy SBIR phase-II award DE-SC0015727 and also supported by the National Science Foundation under Grant No. PHY-1549132, the Center for Bright Beams.
References

[1] H. Padamsee, 50 years of success for SRF accelerators – a review, Supercond. Sci. Technol. 30 (2017) 053003. doi:10.1088/1361-6668/aa6376.
[2] A. Blais, A. L. Grimsmo, S. M. Girvin, A. Wallraff, Circuit quantum electrodynamics, Rev. Mod. Phys. 93 (2021) 025005. doi:10.1103/RevModPhys.93.025005.
[3] W. Decking, et al., A MHz-repetition-rate hard X-ray free-electron laser driven by a superconducting linear accelerator, Nat. Photon. 14 (2020) 391–397. doi:10.1038/s41566-020-00712-8.
[4] E. Prat, et al., A compact and cost-effective hard X-ray free-electron laser driven by a high-brightness and low-energy electron beam, Nat. Photon. 14 (2020) 748–754. doi:10.1038/s41566-020-0607-z.
[5] E. Sicking, R. Strom, From precision physics to the energy frontier with the compact linear collider, Nat. Phys. 16 (2020) 386–392. doi:10.1038/s41567-020-0834-8.
[6] W. Ehrfeld, A. Schmidt, Recent developments in deep X-ray lithography, J. Vac. Sci. Technol. B Microelectron. Nanometer Struct. Process. Meas. Phenom. 16 (1998) 3526. doi:10.1116/1.590490.
[7] E. J. Jaeschke, et al., Synchrotron light sources and free-electron lasers: accelerator physics, instrumentation and science applications, Springer, Cham, 2020.
[8] C. Benvenuti, N. Circelli, M. Hauer, Niobium films for superconducting accelerating cavities, Appl. Phys. Lett. 45 (1984) 584. doi:10.1063/1.95289.
[9] C. T. Wu, Intrinsic stress of magnetron-sputtered niobium films, Thin Solid Films 64 (1979) 103–110. doi:10.1016/0040-6090(79)90549-2.
[10] V. Palmieri, R. Vaglio, Thermal contact resistance at the Nb/Cu interface as a limiting factor for sputtered thin film rf superconducting cavities, Supercond. Sci. Technol. 29 (2015) 1. doi:10.1088/0953-2048/29/1/015004.
[11] W. M. Roach, D. B. Beringer, J. R. Skuza, W. A. Oliver, C. Clavero, C. E. Reece, R. A. Lukaszew, Niobium thin film deposition studies on copper surfaces for superconducting radio frequency cavity applications, Phys. Rev. ST Accel. Beams 15 (2012) 062002. doi:10.1103/PhysRevSTAB.15.062002.
[12] A. R. Wildes, J. Mayer, K. Theis-Brohl, The growth and structure of epitaxial niobium on sapphire, Thin Solid Films 401 (2001) 7–34. doi:10.1016/S0040-6090(01)01631-5.
[13] Q. Liu, L. Zhang, L. Cheng, J. Liu, Y. Wang, Low pressure chemical vapor deposition of niobium coating on silicon carbide, Appl. Surf. Sci. 255 (2009) 8611–8615. doi:10.1016/j.apsusc.2009.06.037.
[14] Q. Liu, L. Zhang, L. Cheng, Low pressure chemical vapor deposition of niobium coatings on graphite, Vacuum 85 (2010) 332–337. doi:10.1016/j.vacuum.2010.07.006.
[15] M. Miyake, Y. Hirooka, R. Imoto, T. Sano, Chemical vapor deposition of niobium on graphite, Thin Solid Films 63 (1979) 303–308. doi:10.1016/0040-6090(79)90033-6.
[16] M. Ge, V. Arrieta, T. Gruber, J. J. Kaufman, M. Liepe, J. T. Maniscalco, S. McNeal, T. Oseroff, R. D. Porter, Z. Sun, CVD coated copper substrate SRF cavity research at Cornell University, in: Proc. 19th Int. Conf. on RF Superconductivity (SRF2019), Dresden, Germany, 2019, p. 381. doi:10.18429/JACoW-SRF2019-TUFUB8.
[17] R. Porter, D. L. Hall, M. Liepe, J. T. Maniscalco, V. Arrieta, S. McNeal, B. Williams, High-performance thin-film niobium produced via chemical vapor deposition (CVD), in: Proc. 18th Int. Conf. on RF Superconductivity (SRF2017), Lanzhou, China, 2017, p. 674. doi:10.18429/JACoW-SRF2017-WEXA03.
[18] J. Knobloch, R. L. Geng, M. Liepe, H. Padamsee, High-field Q slope in superconducting cavities due to magnetic field enhancement at grain boundaries, in: Proc. 9th Int. Conf. on RF Superconductivity (SRF1999), Santa Fe, NM, USA, 1999, p. 77.
[19] C. A. Cooper, L. D. Cooley, Mirror-smooth surfaces and repair of defects in superconducting RF cavities by mechanical polishing, Supercond. Sci. Technol. 26 (2013) 015011. doi:10.1088/0953-2048/26/1/015011.
[20] H. Tian, C. E. Reece, M. J. Kelley, S. Wang, L. Plucinski, K. E. Smith, M. M. Nowell, Surface studies of niobium chemically polished under conditions for superconducting radio frequency (SRF) cavity production, Appl. Surf. Sci. 253 (2006) 1236–1242. doi:10.1016/j.apsusc.2006.01.083.
[21] H. Tian, S. G. Corcoran, C. E. Reece, M. J. Kelley, The mechanism of electropolishing of niobium in hydrofluoric-sulfuric acid electrolyte, J. Electrochem. Soc. 155 (2008) D563. doi:10.1149/1.2945913.
[22] K. Saito, Development of electropolishing technology for superconducting cavities, in: Proceedings of the 2003 Particle Accelerator Conference, 2003, p. 462. doi:10.1109/PAC.2003.1288950.
[23] D. Landolt, P. F. Chauvy, O. Zinger, Electrochemical micromachining, polishing and surface structuring of metals: fundamental aspects and new developments, Electrochim. Acta 48 (2003) 3185–3201. doi:10.1016/S0013-4686(03)00368-2.
[24] D. Landolt, Fundamental aspects of electropolishing, Electrochim. Acta 32 (1987) 1–11. doi:10.1016/0013-4686(87)87001-9.
[25] T. Hryniewicz, Concept of microsmoothing in the electropolishing process, Surf. Coat. Technol. 64 (1994) 75–80. doi:10.1016/S0257-8972(09)90006-8.
[26] E. Luo, The distribution function of surface charge density with respect to surface curvature, J. Phys. D: Appl. Phys. 19 (1986) 1. doi:10.1088/0022-3727/19/1/005.
[27] H. Tian, C. E. Reece, Evaluation of the diffusion coefficient of fluorine during the electropolishing of niobium, Phys. Rev. ST Accel. Beams 13 (2010) 083502. doi:10.1103/PhysRevSTAB.13.083502.
[28] A. Chandra, M. Sumption, G. S. Frankel, On the mechanism of niobium electropolishing, J. Electrochem. Soc. 159 (2012) C485. doi:10.1149/2.054211jes.
[29] J. Huo, R. Solanki, J. McAndrew, Study of anodic layers and their effects on electropolishing of bulk and electroplated films of copper, J. Appl. Electrochem. 34 (2004) 305. doi:10.1023/B:JACH.0000015621.31360.14.
[30] A. C. Crawford, Extreme diffusion limited electropolishing of niobium radiofrequency cavities, Nucl. Instrum. Methods Phys. Res. A: Accel. Spectrom. Detect. Assoc. Equip. 849 (2017) 5. doi:10.1016/j.nima.2017.01.006.
[31] Z. Zhang, M. G. Lagally, Atomistic processes in the early stages of thin-film growth, Science 276 (1997) 377. doi:10.1126/science.276.5311.377.
[32] G. Ciovati, H. Tian, S. G. Corcoran, Buffered electrochemical polishing of niobium, J. Appl. Electrochem. 41 (2011) 721. doi:10.1007/s10800-011-0286-z.
[33] A. D. Batchelor, D. N. Leonard, P. E. Russell, F. A. Stevie, D. P. Griffis, G. R. Myneni, TEM and SIMS analysis of (100), (110), and (111) single crystal niobium, AIP Conf. Proc. 927 (2007) 72. doi:10.1063/1.2770680.
[34] C. Nico, T. Monteiro, M. P. F. Graca, Niobium oxides and niobates physical properties: review and prospects, Prog. Mater. Sci. 80 (2016) 1. doi:10.1016/j.pmatsci.2016.02.001.
C9AyT4oBgHgl3EQf4fqw/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

CdE0T4oBgHgl3EQfgQEc/content/2301.02414v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e233208a2c3c739734cd86b854aa0c1f77f35f47d2b26d92555d17385386f0ea
+size 890683

CdE0T4oBgHgl3EQfgQEc/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d216829b975976e742b45aa27402bc5dac534d26086e4253193a62a3fdc45c3
+size 4390957

CdE0T4oBgHgl3EQfgQEc/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ede8dcd475822bb2e76b0e238fb83314ab6a5bcbe322e763d50603caf48cebef
+size 144025

D9E4T4oBgHgl3EQfGQye/content/tmp_files/2301.04893v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff

D9E4T4oBgHgl3EQfGQye/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

D9E5T4oBgHgl3EQfUg90/content/2301.05544v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1abfe4eadd6a3b50a7a2483a2db10074d9b7e53ecbcb9c3670d2a14ce94fe285
+size 752755

D9E5T4oBgHgl3EQfUg90/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c361ca94bdf3d06467a2dd84fb51194e92ea17328ee3cd1c9538ecface0ed189
+size 1703981

D9E5T4oBgHgl3EQfUg90/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dacde8ea6e4258ff95562e69f9b9e44c00f93a9dc49a6edd0f04fc9d669c5bb5
+size 63456

F9AzT4oBgHgl3EQfHPuf/content/tmp_files/2301.01042v1.pdf.txt ADDED
@@ -0,0 +1,2175 @@
| 1 |
+
1
|
| 2 |
+
Modeling and Analysis of 6G Joint Localization
|
| 3 |
+
and Communication under Hardware Impairments
|
| 4 |
+
Hui Chen, Member, IEEE, Musa Furkan Keskin, Member, IEEE, Sina Rezaei Aghdam, Member, IEEE,
|
| 5 |
+
Hyowon Kim, Member, IEEE, Simon Lindberg, Member, IEEE, Andreas Wolfgang, Member, IEEE,
|
| 6 |
+
Traian E. Abrudan, Member, IEEE, Thomas Eriksson, Senior Member, IEEE,
|
| 7 |
+
and Henk Wymeersch, Senior Member, IEEE
|
| 8 |
+
Abstract—Localization (position and orientation estimation)
|
| 9 |
+
is envisioned as a key enabler to satisfy the requirements of
|
| 10 |
+
communication and context-aware services in the sixth generation
|
| 11 |
+
(6G) communication systems. User localization can be achieved
|
| 12 |
+
based on delay and angle estimation using uplink or downlink
|
| 13 |
+
pilot signals. However, hardware impairments (HWIs) distort
|
| 14 |
+
the signals at both the transmitter and receiver sides and thus
|
| 15 |
+
affect the localization performance. While this impact can be
|
| 16 |
+
ignored at lower frequencies where HWIs are less severe, and the
|
| 17 |
+
localization requirements are not stringent, modeling and analysis
|
| 18 |
+
efforts are needed for high-frequency 6G bands (e.g., sub-THz)
|
| 19 |
+
to assess degradation in localization accuracy due to HWIs. In
|
| 20 |
+
this work, we model various types of impairments for a sub-
|
| 21 |
+
THz multiple-input-multiple-output communication system and
|
| 22 |
+
conduct a misspecified Cram´er-Rao bound analysis to evaluate
|
| 23 |
+
HWI-induced performance losses in terms of angle/delay estima-
|
| 24 |
+
tion and the resulting 3D position/orientation estimation error.
|
| 25 |
+
Complementary to the localization analysis, we also investigate
|
| 26 |
+
the effect of individual and overall HWIs on communication
|
| 27 |
+
in terms of symbol error rate (SER). Our extensive simulation
|
| 28 |
+
results demonstrate that each type of HWI leads to a different
|
| 29 |
+
level of degradation in angle and delay estimation performance.
|
| 30 |
+
The prominent factors on delay estimation (e.g., phase noise and
|
| 31 |
+
carrier frequency offset) will have a dominant negative effect on
|
| 32 |
+
SER, while the impairments affecting only the angle estimation
|
| 33 |
+
(e.g., mutual coupling and antenna displacement) induce slight
|
| 34 |
+
degradation in SER performance.
|
| 35 |
+
Index Terms—Localization, 6G, hardware impairment, THz
|
| 36 |
+
communications, CRB, MCRB, MIMO.
|
| 37 |
+
I. INTRODUCTION
|
| 38 |
+
Localization refers to the process of estimating the position
|
| 39 |
+
and orientation of a connected device or user equipment
|
| 40 |
+
(UE), which is expected to have a tight interaction with
|
| 41 |
+
communication in future wireless systems [1]. Localization
|
| 42 |
+
can benefit from a large array dimension and wide bandwidth
|
| 43 |
+
of high-frequency signals (e.g., mmWave and sub-THz) [2].
|
| 44 |
+
In return, the position and orientation information can im-
|
| 45 |
+
prove spatial efficiency and optimize resource allocation for
|
| 46 |
+
H. Chen, M. F. Keskin, S. R. Aghdam, H. Kim, T. Eriksson and H. Wymeer-
|
| 47 |
+
sch are with the Department of Electrical Engineering, Chalmers University
|
| 48 |
+
of Technology, 412 58 Gothenburg, Sweden (email: hui.chen; furkan; sinar;
|
| 49 |
+
hyowon; thomase; henkw@chalmers.se).
|
| 50 |
+
S. Lindberg and A. Wolfgang are with Qamcom Research & Technology,
|
| 51 |
+
Gothenburg, Sweden (email: simon.lindberg; andreas.wolfgang@qamcom.se).
|
| 52 |
+
T.
|
| 53 |
+
E.
|
| 54 |
+
Abrudan
|
| 55 |
+
is
|
| 56 |
+
with
|
| 57 |
+
Nokia
|
| 58 |
+
Bell
|
| 59 |
+
Labs,
|
| 60 |
+
Finland
|
| 61 |
+
(email:
|
| 62 |
+
traian.abrudan@nokia-bell-labs.com).
|
| 63 |
+
This work was supported, in part, by the European Commission through
|
| 64 |
+
the H2020 project Hexa-X (Grant Agreement no. 101015956) and by the
|
| 65 |
+
MSCA-IF grant 888913 (OTFS-RADCOM).
|
| 66 |
+
communication [3]. As a result, high-accuracy context-aware
|
| 67 |
+
applications such as the tactile Internet, augmented reality,
|
| 68 |
+
and smart cities will be supported in next-generation wireless
|
| 69 |
+
networks [4]–[6].
|
| 70 |
+
In global navigation satellite systems (GNSSs) and tra-
|
| 71 |
+
ditional cellular networks, range-based algorithms, such as
|
| 72 |
+
trilateration, are usually applied for estimating position. When
|
| 73 |
+
moving to higher carrier frequencies, more antennas can be
|
| 74 |
+
packed in a single array due to shorter wavelengths. As a
|
| 75 |
+
consequence, in addition to delay estimation, angle-of-arrival
|
| 76 |
+
(AOA) and angle-of-departure (AOD) information can be
|
| 77 |
+
exploited for localization, and a variety of new localization
|
| 78 |
+
techniques have recently emerged in the fifth/sixth generation
|
| 79 |
+
(5G/6G) systems, e.g., [7]–[10], considering localization with
|
| 80 |
+
minimal infrastructure requirements. Multipath components
|
| 81 |
+
(MPCs), which are usually considered as destructive signals,
|
| 82 |
+
can be resolved in the emerging wireless systems, thereby
|
| 83 |
+
enabling single-base station (BS) positioning and mapping [7]
|
| 84 |
+
as well as simultaneous localization and mapping (SLAM) [8].
|
| 85 |
+
When the UE is equipped with an antenna array, orientation
|
| 86 |
+
estimation is also possible [9]. In Doppler-assisted localiza-
|
| 87 |
+
tion, although new unknowns (e.g., velocity) are introduced,
|
| 88 |
+
localization performance can be improved because mobility
|
| 89 |
+
forms a virtual array with a large aperture compared to the
|
| 90 |
+
stationary scenarios [10]. Most localization works rely on
|
| 91 |
+
idealized models of the received signals as a function of the
|
| 92 |
+
channel parameters (angles, delays, Dopplers) induced by the
|
| 93 |
+
propagation environment, based on the assumption of deter-
|
| 94 |
+
ministic and sparse channels in high-frequency systems [1],
|
| 95 |
+
[11]–[15]. However, in sub-THz bands for 6G communica-
|
| 96 |
+
tions, pilot signals can be distorted due to the presence of
|
| 97 |
+
hardware impairments (HWIs) such as phase noise (PN),
|
| 98 |
+
carrier frequency offset (CFO), mutual coupling (MC), power
|
| 99 |
+
amplifier nonlinearity (PAN), array gain error (AGE), antenna
|
| 100 |
+
displacement error (ADE), in-phase and quadrature imbalance
|
| 101 |
+
(IQI), etc [16]. Consequently, when algorithm derivation is
|
| 102 |
+
based on a mismatched model (i.e., without considering the
|
| 103 |
+
HWIs in the channel model), the localization performance is
|
| 104 |
+
unavoidably affected.
|
| 105 |
+
The effect of HWIs on communication have been stud-
|
| 106 |
+
ied extensively in the literature [16]–[20]. In [16], differ-
|
| 107 |
+
ent types of impairments have been accurately modeled
|
| 108 |
+
and the effects on a multiple-input-multiple-output (MIMO)-
|
| 109 |
+
orthogonal frequency-division multiplexing (OFDM) system
|
| 110 |
+
are discussed. In [17], an aggregate statistical HWI model con-
|
| 111 |
+
arXiv:2301.01042v1 [eess.SP] 3 Jan 2023
|
| 112 |
+
|
| 113 |
+
2
|
| 114 |
+
sidering PAN, local oscillators with PN, and finite-resolution
|
| 115 |
+
analog to digital converters (ADCs) is derived and validated
|
| 116 |
+
with numerical simulations. The residual additive transceiver
|
| 117 |
+
hardware impairments, caused by direct current offset, MC,
|
| 118 |
+
IQI and quantization noise, are discussed in [18], with the de-
|
| 119 |
+
rived spectral efficiency to quantify the degradation caused by
|
| 120 |
+
the HWIs. In addition to modeling and analysis of the HWIs,
|
| 121 |
+
research has also been conducted on impairment mitigation
|
| 122 |
+
algorithms. By incorporating signal distortions caused by hard-
|
| 123 |
+
ware impairments, beamforming optimization is performed to
|
| 124 |
+
maximize the received SNR at the destination [19]. A channel
|
| 125 |
+
estimation algorithm is designed by taking into account the
|
| 126 |
+
transceiver impairments in [21], showing a better bit error
|
| 127 |
+
rate and normalized mean-squared-error performance than the
|
| 128 |
+
conventional algorithms. Contrary to model-based solutions,
|
| 129 |
+
channel estimation under HWI can also be formulated as a
|
| 130 |
+
deep learning problem [20], [22]. Nevertheless, these works
|
| 131 |
+
focus only on communication performance.
|
| 132 |
+
Research on localization and sensing (here, sensing includes detection, angle, and delay estimation, as well as tracking of passive targets) considering HWIs is recently drawing attention. The effects of PN on automotive radar [23]–[25], MC on AOA estimation [26], IQI on mmWave localization [27], and PAN on joint radar-communication systems [28] have been studied. However, these works only consider one or two types of impairments and cannot provide a thorough analysis in real scenarios. In [29], [30], the impairments are modeled as additional Gaussian noise, with the variance determined by an ad hoc HWI factor, from which the error bounds for 3D localization are discussed. However, this approach fails to capture the contribution of each individual HWI. In [31], which forms the basis of the current paper, a simplified synchronized single-input-multiple-output (SIMO) uplink system is considered for 2D positioning, and the results show that different types of impairments affect angle and delay estimation in different ways. Nevertheless, the perfect synchronization assumption is impractical, and impairments such as array calibration error and IQI are not considered. Besides analyzing the effect of HWIs on localization or communication alone, more recent works consider the HWIs in joint localization and communication systems and use learning-based methods to mitigate the performance degradation [32], [33]. Nevertheless, only a limited number of impairment types are discussed (MC and ADE in [32], IQI and DC offset in [33]). In addition, no theoretical analysis is performed in these works, and the relative importance of each HWI on communication compared to localization is unknown. Hence, there is a need for a more systematic study that evaluates the effect of different types of HWI on both communication and localization performance.
In this paper, we investigate the problem of estimating the 3D position and 3D orientation of a multiple-antenna UE using several multiple-antenna BSs in a realistic uplink scenario for a sub-THz communications system under a wide variety of HWIs. Specifically, we consider an OFDM-based system by rigorously modeling the impact of various HWIs on the received observations, and assume that the corresponding channel estimation and localization algorithms have no knowledge about these HWIs, resulting in degradation of localization and communication performance. The misspecified Cramér-Rao bound (MCRB) [34]–[36] is employed to quantify the estimation performance loss due to model mismatch. In addition, the effect of HWI on communication is evaluated numerically in terms of symbol error rate (SER) based on the developed model for a hardware-impaired channel under the same HWI levels, which allows a fair comparison of the impact of HWI on communication and localization. The contributions are summarized as follows:
• Channel model with multiple HWIs: Based on the ideal MIMO model (mismatched model (MM)) with perfect hardware, we develop a more general channel model for the considered sub-THz system (true model (TM)) that can accommodate a variety of HWI types (including PN, CFO, MC, PAN, AGE, ADE, and IQI) in a 3D environment. To the best of the authors' knowledge, this is the first study to derive a comprehensive and realistic signal model for localization and communications that provides explicit modeling of major HWIs that are likely to affect 6G communication systems at high-frequency operation (e.g., mmWave and sub-THz bands).
• Analytical performance prediction of channel parameter estimation and localization under HWIs: We leverage MCRB analysis to evaluate the effect of individual and combined HWIs on the estimation of channel parameters (AOD, AOA and delay estimation) and on the corresponding localization performance (3D position and 3D orientation estimation). More specifically, the bounds provide the best performance of estimators using an MM to process the TM data.
• Performance evaluation and comparison with communication: Extensive simulations are performed to verify the performance analysis of the effect of HWI on localization and communication. For communication, we approximate the HWIs as additive noise and evaluate the effect of individual and aggregated HWIs on communication performance in terms of SER using a 16-quadrature amplitude modulation (QAM) scheme. In addition, the effect of different HWIs on localization and communication is evaluated, with the dominant factors identified. We notice that the dominant factors that affect delay estimation also affect communication, whereas the impairments that only affect the AOA and AOD have a limited impact on communication.
The rest of this paper is organized as follows. Section II reviews the system models with and without HWIs. Section III describes the channel estimation and localization algorithms. Theoretical performance analysis is carried out in Section IV. Next, the simulation results are presented in Section V, followed by the concluding remarks in Section VI.
Notations and Symbols: Italic letters denote scalars (e.g., a), bold lower-case letters denote vectors (e.g., a), and bold upper-case letters denote matrices (e.g., A). (·)⊤, (·)H, (·)−1, tr(·), and ∥·∥ represent the transpose, Hermitian transpose, inverse, trace, and ℓ-2 norm operations, respectively; A ⊙ B, A ⊗ B, and a ◦ b are the Hadamard product, Kronecker product, and outer product, respectively; [·, ·, · · · , ·]⊤ denotes a column vector; tr(·) returns the trace of a matrix, [·]i,j is the element in the ith row, jth column of a matrix, and [·]a:b,c:d is the submatrix constructed from the ath to the bth row and the cth to the dth column of a matrix.
[Fig. 1 here: UE transmit chain (IFFT, D/A, IQI, LO/mixer with PN + CFO, per-antenna PAs with PAN, array with MC + AGE + ADE) → channel Hl → l-th BS receive chain (LNAs, array with MC + AGE + ADE, LO/mixer with PN + CFO, A/D, FFT), followed by channel estimation (estimated channel ˆHl from received symbols ˆyl = [y1⊤, . . . , yg⊤]⊤), channel parameter extraction ηl = [φB, θB, φU, θU, τ, ρ, ξ]⊤, and estimation of the UE state s = [pU⊤, BU, vec(RU)⊤]⊤.]
Fig. 1. Block diagram of the hardware impairments considered at transmitter and receiver (highlighted in shaded regions). When the localization algorithm does not have perfect knowledge of the generative model, it operates under model mismatch. PN (phase noise), CFO (carrier frequency offset), MC (mutual coupling), PAN (power amplifier non-linearity), AGE (array gain error), ADE (antenna displacement error), and IQI (in-phase and quadrature imbalance) are considered.
II. SYSTEM MODEL

In this section, we start with a MIMO channel model (HWI-free model) and then describe the model considering the HWIs.
A. Geometric Model

The block diagram of the considered HWIs and localization procedure is shown in Fig. 1. An uplink MIMO system consisting of a UE and L BSs is considered. The BSs and UE are equipped with a uniform planar array (UPA) (antennas lie on the local YZ plane) driven by a single radio-frequency chain (RFC). The number of antenna elements at the l-th BS and the UE arrays is denoted as NB,l = NB,l,z × NB,l,y and NU = NU,z × NU,y, where Nz and Ny are the numbers of antennas on the Z and Y axes, respectively. The BSs are perfectly synchronized, while a clock offset BU exists between the UE and the BSs. We denote the array center and orientation of the l-th BS as pB,l ∈ R3 and oB,l ∈ R3 in the global coordinate system. Similarly, the position and orientation of the UE can be denoted as pU, oU. Since the orientation represented by an Euler angle vector is not unique, we use rotation matrices RB,l ∈ SO(3) and RU ∈ SO(3) in orientation estimation (more details can be found in [1], [12]). In localization, channel estimation is performed at each BS, and all estimates are combined to find the UE state parameter vector s = [pU⊤, BU, vec(RU)⊤]⊤ ∈ R13, containing the UE position pU, clock offset BU, and rotation matrix RU, as shown in Fig. 1.
B. Hardware Impairment-free Model

Considering the transmitted OFDM symbol1 at the g-th transmission and k-th subcarrier, xg,k, with an average transmit power E{|xg,k|²} = P/NU, its observation at a specific BS (the index l is omitted for convenience) can be formulated as

    yg,k = wg⊤ Hk vg xg,k + ng,k,   (1)

1 For positioning, constant modulus pilots are typically used. For communication, different modulations (e.g., 16-QAM) can be adopted.
where wg ∈ CN is the combiner at the BS for the g-th transmission and vg ∈ CN is the precoder at the UE, both with unit amplitude entries, and ng,k ∼ CN(0, wgH wg σn²) is the noise, with each entry following a complex normal distribution and σn² = N0 W (N0 is the noise power spectral density (PSD) and W = K∆f is the total bandwidth with K subcarriers and subcarrier spacing ∆f). We assume Hk remains the same during G transmissions (within the coherence time). The channel matrix at subcarrier k is given by

    Hk = α dk(τ) aB(ϕB) aU⊤(ϕU)   [LOS path]
         + Σ_{p=1}^{P} αp dp,k(τp) aB(ϕB,p) aU⊤(ϕU,p)   [NLOS paths],   (2)
where for the LOS path, α = ρe^{−jξ} is the complex channel gain assumed to be identical for different subcarriers, dk(τ) = e^{−j2πk∆f τ} (∆f is the subcarrier spacing) is a function of the path delay τ, while aB(ϕB) and aU(ϕU) are the receiver and transmitter steering vectors as a function of the AOA ϕB = [φB, θB]⊤ (azimuth angle φB and elevation angle θB) and the AOD ϕU = [φU, θU]⊤. A steering vector a(ϕ) of an N-element array is a function of the direction of the (incoming or outgoing) signal and the locations of the antenna elements, which can be expressed as [1]

    a(ϕ) = e^{j (2πfc/c) Z⊤ t(ϕ)},   (3)

where we apply the exp operator element-wise, Z ∈ R3×N is the matrix containing the positions of the N antennas in the local coordinate system (all zeros in the first row of Z), and t(ϕ) = [cos(θ) cos(φ), cos(θ) sin(φ), sin(θ)]⊤. For the NLOS paths, each path can correspond to single- or multi-bounce reflections, or diffuse scattering. Hence, the NLOS paths will not be utilized for the positioning of the UE in this work. We further make the assumption that the LOS path is resolvable with respect to the NLOS paths (though the NLOS paths may be mutually unresolved). This is a reasonable assumption2 for 6G systems, due to the large bandwidth and the large number of antennas [11]. Without significant loss of generality, the channel matrix for the kth subcarrier can thus be simplified as

    Hk = α dk(τ) aB(ϕB) aU⊤(ϕU).   (4)

2 For example, with a bandwidth of 1 GHz and 8 × 8 BS arrays, a delay resolution of 30 cm and an angle resolution of 22 degrees is achievable. Unless the UE is very close to a reflector, multipath can be resolved in the combined range-angle domain.
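To make (3) concrete, the following minimal numpy sketch (not part of the paper; the array size, half-wavelength spacing, and angles are illustrative choices) evaluates the UPA steering vector for antennas on the local YZ plane:

```python
import numpy as np

def steering_vector(Z, phi, theta, fc=140e9, c=3e8):
    """a(phi) = exp(j * 2*pi*fc/c * Z^T t(phi)) as in (3); Z holds local antenna positions (3 x N)."""
    t = np.array([np.cos(theta) * np.cos(phi),
                  np.cos(theta) * np.sin(phi),
                  np.sin(theta)])                        # direction vector t(phi)
    return np.exp(1j * 2 * np.pi * fc / c * (Z.T @ t))   # exp applied element-wise

# Example: 4x4 UPA on the local YZ plane with lambda/2 spacing (first row of Z all zeros).
lam = 3e8 / 140e9
ny, nz = 4, 4
yy, zz = np.meshgrid(np.arange(ny), np.arange(nz))
Z = np.vstack([np.zeros(ny * nz), yy.ravel() * lam / 2, zz.ravel() * lam / 2])
a = steering_vector(Z, phi=np.deg2rad(30.0), theta=np.deg2rad(10.0))
```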
Correspondingly, the channel geometric parameter vector of the line-of-sight (LOS) path between a BS and the UE is defined as ηch = [η1⊤, . . . , ηL⊤]⊤ with ηl = [ϕB,l⊤, ϕU,l⊤, τl, ρl, ξl]⊤ ∈ R7 for the lth BS. For later analysis, we define a vector by removing all the nuisance parameters (i.e., the complex channel gain for each path) as cch = [c1⊤, . . . , cL⊤]⊤ with cl = [ϕB,l⊤, ϕU,l⊤, τl]⊤ ∈ R5. The geometric relationships between the channel parameter vector c and the state parameters s can be expressed as
    ϕB = [φB, θB]⊤ = [arctan2(tB,2, tB,1), arcsin(tB,3)]⊤,   (5)
    ϕU = [φU, θU]⊤ = [arctan2(tU,2, tU,1), arcsin(tU,3)]⊤,   (6)
    τ = ∥pU − pB∥/c + BU,   (7)
where c is the speed of light, and tB = [tB,1, tB,2, tB,3]⊤ and tU = [tU,1, tU,2, tU,3]⊤ are the direction vectors in the local coordinate system that can be expressed using global direction vectors and rotation matrices as

    tB = RB−1 (pU − pB)/∥pU − pB∥,   (8)
    tU = RU−1 (pB − pU)/∥pB − pU∥.   (9)
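As an illustration of the mappings (5)–(9) (not from the paper; the clock offset and identity rotations are placeholder inputs), one can compute the channel geometry from the state as follows:

```python
import numpy as np

def channel_geometry(p_U, p_B, R_U, R_B, B_U, c=3e8):
    """Map the UE/BS state to (phi_B, phi_U, tau) via (5)-(9); R^{-1} = R^T for rotations."""
    d = p_U - p_B
    t_B = R_B.T @ (d / np.linalg.norm(d))                     # (8)
    t_U = R_U.T @ (-d / np.linalg.norm(d))                    # (9)
    phi_B = (np.arctan2(t_B[1], t_B[0]), np.arcsin(t_B[2]))   # (5): azimuth, elevation at BS
    phi_U = (np.arctan2(t_U[1], t_U[0]), np.arcsin(t_U[2]))   # (6): azimuth, elevation at UE
    tau = np.linalg.norm(d) / c + B_U                         # (7): delay plus clock offset
    return phi_B, phi_U, tau

# Example using the geometry of Table II (the 1 ns clock offset is arbitrary).
pB, pU = np.array([0.0, 0.0, 3.0]), np.array([4.0, 2.0, 1.5])
print(channel_geometry(pU, pB, np.eye(3), np.eye(3), B_U=1e-9))
```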
Finally, by concatenating all the received symbols into a column, we obtain the received symbol block y ∈ C^{GK} as y = [y1⊤, . . . , yg⊤, . . . , yG⊤]⊤, where yg = [yg,1, . . . , yg,K]⊤ can be expressed as

    yg = α (wg⊤ a(ϕB) a⊤(ϕU) vg) d(τ) ⊙ xg + ng,   (10)

in which d(τ) = [d1(τ), . . . , dK(τ)]⊤, xg = [xg,1, . . . , xg,K]⊤, and ng = [ng,1, . . . , ng,K]⊤.
C. Hardware Impairments

In this work, several types of HWIs are considered, as shown in Fig. 1. We study the effects of MC, PAN, AGE, ADE, PN, CFO, and IQI. Note that impairments such as PN, CFO, MC, AGE, ADE and IQI exist both at the transmitter and the receiver, while the PAN appears only at the transmitter. The HWIs are usually compensated offline during calibration or online with dedicated signals and routines, depending on whether the impairment is static or time-variant. Both the offline and the online methods will have residual errors, which can be modeled as random perturbations around the nominal values. This work focuses on the impact of these residual errors after calibration. For online methods, these random realizations correspond to different times for a specific device, while for offline methods, these random realizations should be interpreted as corresponding to an ensemble of devices. The imperfections of the ADC, digital-to-analog converter (DAC), low-noise amplifier and mixer are not considered.
1) Phase Noise and Carrier Frequency Offset: Imperfect local oscillators (LOs) in the up-conversion and down-conversion processes add PN to the carrier wave phase. In addition, when the down-converting LO in the receiver does not perfectly synchronize with the received signal's carrier [37], CFO occurs. In general, both PN and CFO are estimated and compensated by the receiver [38], so we only consider the residual PN and residual CFO at the receiver. With PN and CFO, the observation yg,k is modified as in [39]

    yg,k → fk⊤ Eg Ξg FH yg,   (11)
    Eg = e^{j 2πϵgKtot/K} diag([1, e^{j 2πϵ/K}, . . . , e^{j 2π(K−1)ϵ/K}]),   (12)
    Ξg = diag([e^{jνg,1}, . . . , e^{jνg,K}]),   (13)

where yg is the received signal of the ideal model without PN or CFO (i.e., from (1)), and F = [f1, f2, . . . , fK] is the FFT matrix. The CFO matrix Eg considers both inter-OFDM-symbol phase changes as well as inter-carrier interference [39], [40]. More specifically, Ktot = K + Kcp with Kcp the length of the cyclic prefix, and ϵ is the residual CFO with ϵ ∼ N(0, σCFO²). Ξg is the residual3 PN matrix with νg,k ∼ N(0, σPN²). In (11), the vector yg is converted to the time domain by FH yg, where the successive PN samples, as well as the CFO, are applied. Finally, fk⊤ extracts the k-th subcarrier after applying an FFT to Eg Ξg FH yg. Note that the residual CFO ϵ is fixed for each realization (e.g., one localization measurement with G transmissions), while the PN νg,k is different for all the subcarriers and OFDM symbols.

3 Note that νg,k and ϵ represent the residual PN and CFO that remain after the carrier synchronization processing (e.g., [41], [42]). Hence, νg,k is assumed to be independent across time.
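The following minimal sketch (not from the paper; the placeholder symbol, seed, and numpy's normalized FFT conventions are our own assumptions) applies a residual CFO/PN realization per (11)–(13) to one OFDM symbol:

```python
import numpy as np

rng = np.random.default_rng(0)
K, K_cp, g = 100, 7, 0                                   # subcarriers, CP length, symbol index
sigma_pn, sigma_cfo = np.deg2rad(2.5), 5e-4              # residual PN / CFO levels (Table II)

y_g = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # placeholder ideal observation (1)

eps = sigma_cfo * rng.standard_normal()                  # residual CFO: fixed per realization
nu = sigma_pn * rng.standard_normal(K)                   # residual PN: new sample per time index
k = np.arange(K)
E_g = np.exp(1j * 2 * np.pi * eps * g * (K + K_cp) / K) \
    * np.exp(1j * 2 * np.pi * eps * k / K)               # diagonal of the CFO matrix (12)

y_time = np.fft.ifft(y_g)                                # F^H y_g: back to the time domain
y_impaired = np.fft.fft(E_g * np.exp(1j * nu) * y_time)  # (11): apply E_g, Xi_g, then the FFT
```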
2) Mutual Coupling: MC refers to the electromagnetic interaction between the antenna elements in an array [26]. For a UPA, we adopt the MC model as in [43] by assuming each antenna is only affected by the coupling of the surrounding elements. As a result, the MC matrix can be expressed as

    C = [ C1   C2   0    · · ·  0
          C2   C1   C2   · · ·  0
          ...  ...  ...  ...    ...
          0    · · ·  C2  C1    C2
          0    · · ·  0   C2    C1 ].   (14)

Here, C ∈ C^{NzNy×NzNy} is the MC matrix with sub-matrices C1 = Toeplitz([1, cx, 0, . . . , 0]) ∈ C^{Ny×Ny} and C2 = Toeplitz([cx, cxy, 0, . . . , 0]) ∈ C^{Ny×Ny} [43]. For convenience, we use one variable σMC to denote the severity of the MC such that cx ∼ CN(0, σMC²) and cxy ∼ CN(0, σMC²/4). The MC leads to the following substitution of the channel matrix:

    Hk → CB Hk CU⊤.   (15)
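A minimal sketch of building C in (14) is given below; it assumes the block-tridiagonal structure shown above and a symmetric (non-conjugated) Toeplitz convention for the blocks, both of which are our reading of the model rather than statements from [43]:

```python
import numpy as np
from scipy.linalg import toeplitz

def mc_matrix(Nz, Ny, sigma_mc, rng):
    """Block-tridiagonal MC matrix of (14) from Toeplitz blocks C1 (diagonal) and C2 (off-diagonal)."""
    cx = sigma_mc / np.sqrt(2) * (rng.standard_normal() + 1j * rng.standard_normal())        # CN(0, s^2)
    cxy = sigma_mc / (2 * np.sqrt(2)) * (rng.standard_normal() + 1j * rng.standard_normal()) # CN(0, s^2/4)
    c1 = np.zeros(Ny, dtype=complex); c1[:2] = [1, cx]
    c2 = np.zeros(Ny, dtype=complex); c2[:2] = [cx, cxy]
    C1, C2 = toeplitz(c1, c1), toeplitz(c2, c2)          # symmetric (non-Hermitian) Toeplitz blocks
    tri = np.eye(Nz, k=1) + np.eye(Nz, k=-1)             # off-diagonal block pattern
    return np.kron(np.eye(Nz), C1) + np.kron(tri, C2)

C_B = mc_matrix(8, 8, 0.002, np.random.default_rng(1))   # e.g., an 8x8 BS UPA with Table II sigma_MC
```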
3) Power Amplifier Nonlinearity: For the PA nonlinearity, we consider a Q-th order memoryless polynomial nonlinear model with a clipping point xclip ∈ R as [16]

    hPA(ˇx) = { Σ_{q=0}^{Q−1} βq+1 ˇx |ˇx|^q,                |ˇx| ≤ xclip,
              { Σ_{q=0}^{Q−1} βq+1 (ˇx/|ˇx|) |xclip|^{q+1},   |ˇx| > xclip,   (16)

where ˇxt = xt/R denotes the voltage of the transmitted time-domain signal (R is the load impedance in Ohms) and β1, . . . , βQ are complex-valued parameters. We assume that (16) models the combined effect of digital pre-distortion and the power amplifier, and we use non-oversampled signals as input to the PA for a tractable localization performance analysis4. Note that the PA affects the time-domain signals, each antenna at the Tx has a separate PA, and the PA model in (16) does not consider the out-of-band emissions (only the in-band distortion). For simplicity, the models are the same for different PAs, and hPA(ˇxt) returns the time-domain signal vector (by operating point-wise on each of the elements) with PA nonlinearity introduced.

4 In order to fully characterize the effect of PAN, an oversampled model is needed, which also captures the intersymbol interference introduced by the nonlinearity, in addition to the symbol distortion (see (25) in [44]).
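A sketch of (16) with the Q = 3 coefficients from Table II follows (the vectorized packaging is our own; the test inputs are arbitrary):

```python
import numpy as np

beta = np.array([0.9798 + 0.0286j, 0.0122 - 0.0043j, -0.0007 + 0.0001j])  # Table II, Q = 3
x_clip = 1.0

def h_pa(x):
    """Memoryless polynomial PA with envelope clipping, element-wise as in (16)."""
    x = np.atleast_1d(np.asarray(x, dtype=complex))
    r = np.abs(x)
    r_eff = np.minimum(r, x_clip)                 # the envelope saturates at x_clip
    q = np.arange(len(beta))
    gain = (beta[None, :] * r_eff[:, None] ** q[None, :]).sum(axis=1)
    # Both branches of (16) equal (x/|x|) * r_eff * sum_q beta_{q+1} r_eff^q.
    return np.where(r > 0, x / np.maximum(r, 1e-30) * r_eff * gain, 0.0)

print(h_pa([0.1, 0.5 + 0.5j, 2.0]))               # the last sample is clipped
```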
4) Array Calibration Error: The AGE and ADE are considered in the array calibration error. We define the complex excitation coefficient of the n-th antenna at direction ϕ as [45]

    bn(ϕ) = (1 + δa) e^{jδp},   (17)

where δa ∼ N(0, σAA²) and δp ∼ N(0, σAP²) are the relative amplitude error and phase error, respectively. Regarding the displacement error, we assume the n-th antenna position has a displacement on the 2D plane of the local coordinate system as

    ˜zn = zn + [0, δn,y, δn,z]⊤,   (18)

where zn ∈ R3 is the ideal position of the nth antenna in the local coordinate system and δn,y, δn,z ∼ N(0, σADE²) are the displacement errors. The steering vector is then modified as

    a(ϕ) → b(ϕ) ⊙ e^{j (2π/λ) ˜Z⊤ t},   (19)

where ˜Z = [˜z1, . . . , ˜zN] contains the geometry information of all the antennas. The array calibration error is fixed for a certain array and varies across different devices.
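As an illustration of (17)–(19) (a sketch, not the paper's code; the function signature and defaults are our choices), the impaired steering vector can be drawn as:

```python
import numpy as np

def impaired_steering(Z, phi, theta, sigma_a, sigma_p, sigma_ade, rng, lam=3e8 / 140e9):
    """Steering vector with gain/phase errors (17) and YZ-plane displacements (18), as in (19)."""
    N = Z.shape[1]
    b = (1 + sigma_a * rng.standard_normal(N)) * np.exp(1j * sigma_p * rng.standard_normal(N))
    Z_t = Z + np.vstack([np.zeros(N),
                         sigma_ade * rng.standard_normal(N),
                         sigma_ade * rng.standard_normal(N)])   # displacement on the local YZ plane
    t = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), np.sin(theta)])
    return b * np.exp(1j * 2 * np.pi / lam * (Z_t.T @ t))       # (19)
```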
5) In-phase and Quadrature Imbalance: IQI operates on the time-domain signal, and the transmitted symbol vector is modified as [27], [46]

    xg → F(αU FH xg + βU FH xg∗) = αU xg + βU xg∗,   (20)

where the FFT matrix F and IFFT matrix FH switch between the time and frequency domains, and αU = 1/2 + (1/2) mU e^{jψU}, βU = 1/2 − (1/2) mU e^{jψU}, with mU and ψU the amplitude and phase imbalance parameters at the UE side. We assume that the IQI is compensated in the system, leading to a residual impairment, and the imbalance parameters can be modeled as mU ∼ N(1, σIA²) and ψU ∼ N(0, σIP²). Similarly, the IQI at the receiving BS can be expressed as

    yg → αB yg + βB yg∗.   (21)

More accurate frequency-dependent IQI models can be found in [47], [48], which is beyond the scope of this work.
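A minimal sketch of the residual IQI in (20) (our own packaging; the input is any symbol vector):

```python
import numpy as np

def apply_iqi(x, sigma_ia, sigma_ip, rng):
    """Residual IQI of (20): x -> alpha*x + beta*conj(x)."""
    m = 1 + sigma_ia * rng.standard_normal()     # amplitude imbalance ~ N(1, sigma_IA^2)
    psi = sigma_ip * rng.standard_normal()       # phase imbalance ~ N(0, sigma_IP^2)
    alpha = 0.5 + 0.5 * m * np.exp(1j * psi)
    beta = 0.5 - 0.5 * m * np.exp(1j * psi)
    return alpha * x + beta * np.conj(x)
```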
D. Hardware-impaired Model

Considering all types of HWIs described in Sec. II-C and substituting (11)–(21) into (10), the observation can be rewritten in the frequency domain.

1) Transmit Signal under HWI: The precoded transmitted signal across subcarriers and antennas is modified from Xg = xg vg⊤ ∈ C^{K×NU} to

    ˇXg = F hPA(EU ΞU (αU FH xg + βU FH xg∗) vg⊤),   (22)

where the argument of hPA(·) is the precoded time-domain signal before the PA.

2) Channel under HWI: The channel is modified from Hk = α dk(τ) a(ϕB) a⊤(ϕU) ∈ C^{NB×NU} in (4) to

    ˇH = α dk(τ) CB ˜aB(ϕB) ˜aU⊤(ϕU) CU⊤,   (23)

where ˜aB(ϕB) = bB(ϕB) ⊙ e^{j (2π/λ) ˜ZB⊤ tB(ϕB)} and ˜aU(ϕU) = bU(ϕU) ⊙ e^{j (2π/λ) ˜ZU⊤ tU(ϕU)} are the impaired steering vectors.

3) Received Signal under HWI: The received signal is modified from yg ∈ C^{K×1} to

    ˇyg = F(αB (EB,g ΞB,g FH(ˇXg ˇH⊤ wg ⊙ d(τ))) + βB (EB,g ΞB,g FH(ˇXg ˇH⊤ wg ⊙ d(τ)))∗) + ng.   (24)
E. Summary of the Models

To summarize, we have defined an MM in (1) without considering the HWI, which will be used for algorithm development. With HWIs introduced, the impaired model defined in (24) will be used as the TM. In the following section, we will evaluate the impact of using the MM to process data generated by the TM on localization performance. For the sake of convenience in performance analysis, we use µg(η) and ¯µg(η) to denote the noise-free observations of (1) and (24), respectively.
III. LOCALIZATION ALGORITHM

Based on the models described above, a two-stage localization5 problem can be formulated such that the channel parameter vectors ˆηch = [η1⊤, . . . , ηL⊤]⊤ are first estimated based on the observation vectors ˆy1, . . . , ˆyL from all the BSs, and then the state vector ˆs is determined from ˆηch.

5 In contrast, direct localization estimates the state vector s from the observed signal vector y directly. Considering the high complexity involved, we adopt two-stage localization in this work.
A. Mismatched Maximum Likelihood Estimator

The maximum likelihood estimator (MLE) can be employed when the observation y is generated from the same model used by the algorithm. If y ∼ fTM(y|¯η), the MLE of the UE position and channel gain is

    ˆηMLE = arg max_{¯η} ln fTM(y|¯η),   (25)

where ln fTM(y|¯η) is the log-likelihood of the TM. However, if y ∼ fTM(y|¯η) but the estimator uses fMM(y|η) ≠ fTM(y|¯η), the mismatched maximum likelihood estimator (MMLE) is given by

    ˆηMMLE = arg max_{η} ln fMM(y|η).   (26)
More specifically, equation (26) formulates the MMLE for channel parameter extraction, which can also be implemented in position and orientation estimation with a known or approximated likelihood function. A practical approach is to use the gradient descent method with an initial point, which will be detailed in the following subsections.
B. Channel Parameters Estimation

The channel parameter estimation will be performed with a coarse estimation using ESPRIT, which provides a good initial point for a refined estimation using (26).

1) Coarse Estimation using ESPRIT: We aim to obtain an initial estimate of the channel parameters with a low complexity, which can be achieved using the tensor-based beamspace ESPRIT6 algorithm [13]. To implement the beamspace ESPRIT algorithm, we reformulate a beamspace channel matrix H(b) based on the signal model in (1) as

    Hk(b) = α dk(τ) WH aB(ϕB) aU⊤(ϕU) V,   (27)

where W = T1 ⊗ T2 ∈ C^{N1N2×M1M2} and V = (T3 ⊗ T4)∗ ∈ C^{N3N4×M3M4} are the combining matrix and precoder matrix, and the total number of transmissions is G = M1M2M3M4. Since the first row of the antenna position matrix ˜Z is all zeros (see Sec. II-A and equation (3)), we can express the steering vector in (3) as

    aB(ϕB) = a^{(N1)}(ω1) ⊗ a^{(N2)}(ω2),   (28)

with

    ω1 = π sin(φB) cos(θB),  ω2 = π sin(θB),   (29)
    aB^{(N1)}(ω1) = e^{j (2πfc/c) sin(φB) cos(θB) ˜zB,2} = e^{j (2/λc) ω1 ˜zB,2},   (30)
    aB^{(N2)}(ω2) = e^{j (2πfc/c) sin(θB) ˜zB,3} = e^{j (2/λc) ω2 ˜zB,3}.   (31)

Here, ˜zB,2⊤ ∈ C^{1×NB} and ˜zB,3⊤ ∈ C^{1×NB} are the second and third rows of the matrix ˜Z, respectively. The combining matrix can then be defined in terms of a grid of the spatial frequencies ¯ω1 = [¯ω1,1, . . . , ¯ω1,M1] and ¯ω2 = [¯ω2,1, . . . , ¯ω2,M2] as

    T1 = [a^{(N1)}(¯ω1,1), . . . , a^{(N1)}(¯ω1,M1)] ∈ C^{N1×M1},   (32)
    T2 = [a^{(N2)}(¯ω2,1), . . . , a^{(N2)}(¯ω2,M2)] ∈ C^{N2×M2},   (33)

where ¯ω1,m and ¯ω2,m are decided by the beamforming directions (detailed in Sec. V). A similar reasoning applies to the steering vectors aU^{(N3)}(ω3) and aU^{(N4)}(ω4) at the UE to define T3 and T4, with

    ω3 = π sin(φU) cos(θU),  ω4 = π sin(θU).   (34)

We further define b^{(Mn)}(ωn) = Tn^H a^{(Nn)}(ωn) ∈ C^{Mn} for n ∈ {1, 2, 3, 4} and b^{(M5)}(ω5) = d(τ) (ω5 = 2π∆fτ), and the beamspace channel matrix in (27) can be represented by a tensor H(b) ∈ C^{M1×M2×···×M5} as [14]

    H(b) = α b^{(M1)}(ω1) ◦ . . . ◦ b^{(M5)}(ω5).   (35)

In practice, the beamspace channel can be estimated with known pilot signals as vec(ˆHk(b)) = [ˆy1,k/x1,k, . . . , ˆyG,k/xG,k]⊤. By rearranging the estimated channel into a tensor ˆH(b) as shown in (35), the beamspace tensor-based ESPRIT method can then be used to estimate ω1 to ω5 and obtain the AOA, AOD, and delay accordingly [13], [14].

6 While this work considers only the LOS channel, the ESPRIT also works for scenarios with NLOS paths.
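To illustrate the rank-1 structure that the tensor ESPRIT exploits, the sketch below builds the five-way tensor of (35); the factor vectors and spatial frequencies are placeholders rather than the actual b^{(Mn)} = Tn^H a^{(Nn)}(ωn):

```python
import numpy as np

M = (4, 4, 3, 3, 100)                      # M1..M4 sweeping beams (Sec. V), M5 = K subcarriers
omega = (0.3, -0.2, 0.1, 0.4, 0.05)        # spatial frequencies omega_1..omega_5 (placeholders)
alpha = 0.8 * np.exp(1j * 0.7)             # complex gain (placeholder)

def b_vec(Mn, w):
    # stand-in factor vector; in the paper b^(Mn) = T_n^H a^(Nn)(omega_n) and b^(M5) = d(tau)
    return np.exp(1j * w * np.arange(Mn))

H_b = np.asarray(alpha)
for Mn, w in zip(M, omega):
    H_b = np.multiply.outer(H_b, b_vec(Mn, w))   # successive outer products, as in (35)
print(H_b.shape)                                 # (4, 4, 3, 3, 100): a rank-1 5-way tensor
```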
2) Fine Estimation using MMLE: From ESPRIT, we can obtain an initial estimate of the channel parameters ˆη0. The refinement of the initial estimate can be formulated as an optimization problem, based on (26), as

    ˆη = arg min_{η} ∥y − µ(η)∥².   (36)

Since α appears linearly in the noise-free observation µ, we further define γ(c) = µ(η)/α with c = [ϕB⊤, ϕU⊤, τ]⊤. By setting ∂∥y − µ(η)∥²/∂α = 0, we have

    ˆc = arg min_{c} ∥y − (γH(c) y / ∥γ(c)∥²) γ(c)∥².   (37)

In this way, the nuisance parameters can be removed, which reduces the dimension of the unknown parameters.
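The concentrated cost in (37) can be evaluated as follows (a minimal sketch; γ is any model response with the gain divided out):

```python
import numpy as np

def concentrated_cost(y, gamma):
    """Residual of (37) after the complex gain alpha has been solved in closed form."""
    alpha_hat = (gamma.conj() @ y) / np.linalg.norm(gamma) ** 2   # LS gain estimate
    return np.linalg.norm(y - alpha_hat * gamma) ** 2
```

Minimizing this cost over c = [ϕB⊤, ϕU⊤, τ]⊤ (e.g., by gradient descent from the ESPRIT initialization) realizes the fine estimation step.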
C. Localization Algorithm

1) Coarse Estimation: Given the estimated geometric parameter vectors cl (1 ≤ l ≤ L) for all the channels, the least squares solution for the coarse estimation of position and orientation is given by [49]

    ˆRU,LS = { U V⊤,   if det(U V⊤) = 1,
             { U J V⊤, if det(U V⊤) = −1,   (38)

    [ˆpU,LS⊤, ˆBU,LS]⊤ = (Q3⊤ Q3)−1 Q3⊤ q,   (39)

where J = diag([1, 1, −1]), U and V are the unitary basis matrices of the singular value decomposition of the matrix Q1 Q2⊤, and Q3, q are given by [49]

    Q1 = −[RB,1 t(ˆϕB,1), . . . , RB,L t(ˆϕB,L)],   (40)
    Q2 = [t(ˆϕU,1), . . . , t(ˆϕU,L)],   (41)
    Q3 = [ I3  RB,1 t(ˆϕB,1)
           ...        ...
           I3  RB,L t(ˆϕB,L) ],   (42)
    q = [ (pB,1 + ˆτ1 RB,1 t(ˆϕB,1))⊤, . . . , (pB,L + ˆτL RB,L t(ˆϕB,L))⊤ ]⊤.   (43)

Different from the algorithm in [49], the estimator for position and clock offset in (39) does not require the orientation of the UE RU, which is still sufficient as a coarse estimate, as will be shown in the simulation section.
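A compact sketch of (38)–(39) follows (our own packaging, not the paper's code; the measured ranges r[l] are assumed to be expressed in meters, so that the factor c is absorbed into the delays):

```python
import numpy as np

def coarse_ls(R_B, t_B, t_U, p_B, r):
    """LS coarse estimates of (38)-(39); r[l] is the measured range (c * tau_l) in meters."""
    L = len(R_B)
    u = [R_B[l] @ t_B[l] for l in range(L)]                 # global BS -> UE directions
    # (38): orientation from the SVD of Q1 Q2^T
    Q1 = -np.column_stack(u)
    Q2 = np.column_stack(t_U)
    U, _, Vt = np.linalg.svd(Q1 @ Q2.T)
    R_U = U @ Vt if np.isclose(np.linalg.det(U @ Vt), 1) else U @ np.diag([1., 1., -1.]) @ Vt
    # (39): stack one [I3, u_l] block and one p_B,l + r_l u_l target per BS
    Q3 = np.vstack([np.hstack([np.eye(3), u_l.reshape(3, 1)]) for u_l in u])
    q = np.concatenate([p_B[l] + r[l] * u[l] for l in range(L)])
    sol, *_ = np.linalg.lstsq(Q3, q, rcond=None)
    return R_U, sol[:3], sol[3]                             # rotation, position, clock bias (m)
```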
2) MMLE: Once the initial position and orientation results are obtained, joint position and orientation estimation using the MMLE can be formulated as

    ˆs = arg min_{s} Σ_{l=1}^{L} (cl(s) − ˆcl)⊤ Σcl−1 (cl(s) − ˆcl),   (44)

which can be solved using the manifold optimization toolbox Manopt [50]. Note that the covariance matrix may not be accurately obtained in practice. We formulate localization as an MMLE problem with two purposes: (a) to evaluate the performance improvement with known covariance matrices compared to the coarse estimation; (b) to validate the derived bound, which will be detailed in Sec. IV.
IV. LOWER BOUND ANALYSIS

In what follows, we derive the CRB for the MM, as well as the MCRB for the mismatched estimator in (26).
A. CRB Analysis for the Mismatched Model

Based on the defined channel parameter vector η and state vector s, the signal model in (1), and y ∼ fMM(y|η), the channel estimation CRB of the MM for the lth channel can be obtained as I(ηl)−1 ∈ R7×7 with [51]

    I(ηl) = (2/σn²) Σ_{g=1}^{G} Σ_{k=1}^{K} Re{ (∂µg,k/∂ηl)H (∂µg,k/∂ηl) }.   (45)

Here, Re{·} extracts the real part of a complex variable. Consequently, the FIM of all the channel parameters ηch can be formulated as

    I(ηch) = blkdiag(I(η1), . . . , I(ηL)),   (46)

where blkdiag(·) returns the block diagonal matrix created by aligning the input matrices. The FIM of the state vector I(s) ∈ R13×13 can then be formulated as

    I(s) = M (M⊤ JS⊤ I(cch) JS M)−1 M⊤,   (47)

where I(cch) ∈ R5L×5L is the EFIM of the non-nuisance parameters cch obtained from I(ηch), JS ≜ ∂cch/∂s is the Jacobian matrix using a denominator-layout notation, and M = blkdiag(I4×4, ¯M), with ¯M given by [9]

    ¯M = (1/√2) [ −r3    03×1   r2
                  03×1   −r3    −r1
                  r1     r2     03×1 ],   (48)

where r1, r2, and r3 are the first, second, and third columns of the UE rotation matrix RU.
|
| 977 |
+
Based on I(η) in (45), we can define the AOD error bound
|
| 978 |
+
(ADEB), AOA error bound (AAEB), and delay error bound
|
| 979 |
+
(DEB) of the link between the UE and the lth BS) as
|
| 980 |
+
AAEB =
|
| 981 |
+
�
|
| 982 |
+
tr([I(ηl)−1]1:2,1:2),
|
| 983 |
+
(49)
|
| 984 |
+
ADEB =
|
| 985 |
+
�
|
| 986 |
+
tr([I(ηl)−1]3:4,3:4),
|
| 987 |
+
(50)
|
| 988 |
+
DEB =
|
| 989 |
+
�
|
| 990 |
+
([I(ηl)−1]5,5).
|
| 991 |
+
(51)
|
| 992 |
+
Similarly, based on I(s), we can define the position error
|
| 993 |
+
bound (PEB), clock offset error bound (CEB) and orientation
|
| 994 |
+
error bound (OEB) as
|
| 995 |
+
PEB =
|
| 996 |
+
�
|
| 997 |
+
tr([I(s)−1]1:3,1:3),
|
| 998 |
+
(52)
|
| 999 |
+
CEB =
|
| 1000 |
+
�
|
| 1001 |
+
([I(s)−1]4,4),
|
| 1002 |
+
(53)
|
| 1003 |
+
OEB =
|
| 1004 |
+
�
|
| 1005 |
+
tr([I(s)−1]5:13,5:13).
|
| 1006 |
+
(54)
|
| 1007 |
+
The bounds from (49)–(54) will be used to evaluate the
|
| 1008 |
+
channel estimation and localization performance. In the next
|
| 1009 |
+
subsections, we will first formulate the MCRB for channel
|
| 1010 |
+
estimation, and then the mismatched lower bound for position
|
| 1011 |
+
and orientation estimation will be derived.
|
| 1012 |
+
B. Misspecified CRB of Channel Parameters
|
| 1013 |
+
For a given channel model, the model is said to be mis-
|
| 1014 |
+
matched or misspecified when y ∼ fTM(y|η), while the
|
| 1015 |
+
estimation is based on the assumption that y ∼ fMM(y|η)),
|
| 1016 |
+
where fTM(y|η) ̸= fMM(y|η).
|
| 1017 |
+
The lower bound (LB) of using a mismatched estimator can
|
| 1018 |
+
be obtained as [35]
|
| 1019 |
+
LB(¯η, η0) = A−1
|
| 1020 |
+
η0 Bη0A−1
|
| 1021 |
+
η0
|
| 1022 |
+
�
|
| 1023 |
+
��
|
| 1024 |
+
�
|
| 1025 |
+
=MCRB(η0)
|
| 1026 |
+
+ (¯η − η0)(¯η − η0)⊤
|
| 1027 |
+
�
|
| 1028 |
+
��
|
| 1029 |
+
�
|
| 1030 |
+
=Bias(η0)
|
| 1031 |
+
,
|
| 1032 |
+
(55)
|
| 1033 |
+
where ¯η is the true channel parameter vector, η0 is the pseudo-
|
| 1034 |
+
true parameter vector (which will be introduced soon), and
|
| 1035 |
+
Aη0, Bη0 are two possible generalizations of the FIMs. The
|
| 1036 |
+
LB is a bound in the sense that
|
| 1037 |
+
E{(ˆηMMLE − ¯η)(ˆηMMLE − ¯η)⊤} ⪰ LB(¯η, η0),
|
| 1038 |
+
(56)
|
| 1039 |
+
where the expectation is with respect to fTM(y|η). What re-
|
| 1040 |
+
mains is the formal definition and computation of the pseudo-
|
| 1041 |
+
true parameter η0 and Aη0, Bη0.
|
| 1042 |
+
1) Pseudo-true Parameter: Assume the probability density
|
| 1043 |
+
function (PDF) of the TM, where the observation data come
|
| 1044 |
+
from, is fTM(y|¯η), where y is the received signals and ¯η ∈ R7
|
| 1045 |
+
(7 unknowns for this case) is the vector containing all the
|
| 1046 |
+
channel parameters. Similarly, the PDF of the MM for the
|
| 1047 |
+
received signal y can be noted as fMM(y, η). The pseudo-true
|
| 1048 |
+
parameter vector is defined as the point that minimizes the
|
| 1049 |
+
Kullback-Leibler divergence between fTM(y|¯η) and fMM(y|η)
|
| 1050 |
+
as
|
| 1051 |
+
η0 = arg min
|
| 1052 |
+
η DKL(fTM(y|¯η)∥fMM(y|η)).
|
| 1053 |
+
(57)
|
| 1054 |
+
We define ϵ(η) ≜ ¯µ(¯η)−µ(η), and the pseudo-true parameter
|
| 1055 |
+
can be obtained as [36]
|
| 1056 |
+
η0 = arg min
|
| 1057 |
+
η ∥ϵ(η)∥2 = arg min
|
| 1058 |
+
η ∥¯µ(¯η) − µ(η)∥2.
|
| 1059 |
+
(58)
|
| 1060 |
+
Hence, η0 can be found by solving (36) with the observation
|
| 1061 |
+
y =
|
| 1062 |
+
¯µ(¯η), which can be accomplished using the same
|
| 1063 |
+
algorithm in Sec. III, initialized with the true value ¯η.
|
| 1064 |
+
2) MCRB Component Matrices: The matrices Aη0 and
|
| 1065 |
+
Bη0 can be obtained based on the pseudo-true parameter
|
| 1066 |
+
vector η0 as [36]
|
| 1067 |
+
[Aη0]i,j =
|
| 1068 |
+
ˆ ∂2lnfMM(y|η)
|
| 1069 |
+
∂ηi∂ηj
|
| 1070 |
+
fTM(y|¯η)dy
|
| 1071 |
+
����
|
| 1072 |
+
η=η0
|
| 1073 |
+
=
|
| 1074 |
+
2
|
| 1075 |
+
σ2n
|
| 1076 |
+
Re
|
| 1077 |
+
�
|
| 1078 |
+
∂2µ(η)
|
| 1079 |
+
∂ηi∂ηj
|
| 1080 |
+
ϵ(η) − ∂µ(η)
|
| 1081 |
+
∂ηj
|
| 1082 |
+
�∂µ(η)
|
| 1083 |
+
∂ηi
|
| 1084 |
+
�H������
|
| 1085 |
+
η=η0
|
| 1086 |
+
(59)
|
| 1087 |
+
and
|
| 1088 |
+
[Bη0]i,j =
|
| 1089 |
+
ˆ ∂lnfMM(y|η)
|
| 1090 |
+
∂ηi
|
| 1091 |
+
∂lnfMM(y|η)
|
| 1092 |
+
∂ηj
|
| 1093 |
+
fTM(y|¯η)dy
|
| 1094 |
+
����
|
| 1095 |
+
η=η0
|
| 1096 |
+
= 4
|
| 1097 |
+
σ4n
|
| 1098 |
+
Re
|
| 1099 |
+
�∂2µ(η)
|
| 1100 |
+
∂ηi
|
| 1101 |
+
ϵ(η)
|
| 1102 |
+
�
|
| 1103 |
+
Re
|
| 1104 |
+
�∂2µ(η)
|
| 1105 |
+
∂ηj
|
| 1106 |
+
ϵ(η)
|
| 1107 |
+
�
|
| 1108 |
+
+ 2
|
| 1109 |
+
σ2n
|
| 1110 |
+
Re
|
| 1111 |
+
�
|
| 1112 |
+
∂µ(η)
|
| 1113 |
+
∂ηj
|
| 1114 |
+
�∂µ(η)
|
| 1115 |
+
∂ηi
|
| 1116 |
+
�H������
|
| 1117 |
+
η=η0
|
| 1118 |
+
.
|
| 1119 |
+
(60)
|
| 1120 |
+
C. Absolute Lower Bound (ALB) for Localization

Another way to interpret the LB specified in (55) is that the estimated channel parameter vector from an efficient estimator follows a nonzero-mean multivariate Gaussian distribution as

    ˆηl ∼ N(η0,l, Aη0,l−1 Bη0,l Aη0,l−1),   (61)

while the assumed distribution of the MMLE is

    ˆηl ∼ N(ηl(¯s), I(ηl)−1),   (62)

where ¯s is the true state of the UE. As a result, the position and orientation estimation (from the channel parameter vectors of all the paths) of the two-stage localization problem is another mismatched problem, and the bound follows as

    LB(¯s, s0) = MCRB(s0) + (¯s − s0)(¯s − s0)⊤,   (63)

where the second term is the absolute lower bound (ALB). Similar to (55), ¯s is the true state parameter vector and s0 is the pseudo-true state parameter vector.
It is possible to derive the localization LB via the constrained MCRB [52]; however, considering the high complexity when involving 3D orientation estimation, we will focus on the bias term, defined as the absolute lower bound (ALB) of the localization performance, ALB = (¯s − s0)(¯s − s0)⊤, which can sufficiently evaluate the effect of HWIs on localization, as will be shown in Sec. V-C2. Following a similar derivation as in (58), the pseudo-true parameters for the state vector s can be obtained as

    s0 = arg min_{s} Σ_{l} (η0,l − ηl(s))⊤ I(ηl) (η0,l − ηl(s)),   (64)

where η0,l = arg min_{η} ∥¯µ(¯ηl) − µ(ηl)∥² is the biased mapping obtained by calculating the pseudo-true parameters of the lth channel from (58), and I(ηl) is the inverse of the covariance matrix, which can be obtained from (45).
D. Summary of Different Bounds

In this section, we introduced different types of lower bounds. For the channel geometric parameters, the CRB and LB are derived for the AOA, AOD, and delay estimations. For the state parameters, the CRB and ALB are derived for the position, orientation, and clock offset estimations. All types of lower bounds are summarized in Table I, which will be used in Sec. V, Numerical Results.
TABLE I
SUMMARY OF DIFFERENT LOWER BOUNDS

Channel Parameters   AOA       AOD          Delay         Remarks
  CRB                AAEB      ADEB         DEB           (49)-(51)
  LB                 AALB      ADLB         DLB           (55)

State Parameters     Position  Orientation  Clock Offset  Remarks
  CRB                PEB       OEB          CEB           (52)-(54)
  ALB                PALB      OALB         CALB          (63)
V. NUMERICAL RESULTS

A. Default Parameters

We consider a 3D MIMO uplink scenario with one UE and two BSs, and the simulation parameters7 can be found in Table II. We utilize 10% of the total number of subcarriers Kcom for localization, resulting in K = 100 subcarriers as pilot signals. The amplitude of the channel gain is calculated as ρ = λ/(4πcτ). These parameters are selected to show the performance of the estimator in comparison to the derived bound. The analysis of each HWI type is also discussed in the simulation results.

7 The PA parameters are estimated from the measurements of the RF WebLab, which can be remotely accessed at www.dpdcompetition.com. Part of the parameters come from the Hexa-X Deliverable 3.1.
simulation results.
|
| 1224 |
+
Regarding the evaluation of communication performance,
|
| 1225 |
+
only the first BS is considered, and 16-QAM modulation
|
| 1226 |
+
is adopted. Different from localization, where BS-UE beam
|
| 1227 |
+
sweeping is needed, we evaluate the effect on communication
|
| 1228 |
+
with fixed precoder and combiner vectors across different
|
| 1229 |
+
transmissions. By considering all HWIs, we assume the chan-
|
| 1230 |
+
nel can be perfectly estimated (with a sufficient number of
|
| 1231 |
+
pilots) as ˆH = ˇH = ˆaBˆaU with ˆaB = √αCB˜aB(ϕB) and
|
| 1232 |
+
ˆaU = √α˜aU(ϕU)C⊤
|
| 1233 |
+
U from (23). In order to maximize the SNR
|
| 1234 |
+
with the amplitude constraints of the precoder and combiner,
|
| 1235 |
+
we choose w and v respectively as the conjugate of ˆaB and
|
| 1236 |
+
ˆaU with each of the elements normalized to a unit amplitude.
|
| 1237 |
+
For each realization, 20 OFDM symbols are sent with data
|
| 1238 |
+
drawn randomly from 16-QAM, and SER is used to evaluate
|
| 1239 |
+
the effect of HWIs on communication.
|
| 1240 |
+
For localization, the pilot signal xg,k is chosen with a random phase and a constant amplitude |xg,k|² = P/NU. To assist the beamspace ESPRIT algorithm, we set the number of sweeping beams to M1 = 4, M2 = 4, M3 = 3, M4 = 3, with a total number of transmissions G = 144. For a specific spatial frequency vector ¯ωn (n ∈ {1, 2, 3, 4}), we assume a sweeping range of (Mn − 1)∆ω centered at the location prior ˚ωn = ωn + δω, where ωn is defined in (29), (34), and δω is the prior error. More specifically, we choose ¯ωn,m = ωn + δω + ((2m − Mn − 1)/2) ∆ω, with ∆ω = 0.15 and δω = 0.05 in the simulation. The sweeping priority is set to 'BS-first' by default, which means that the UE changes its precoder vector when the BS finishes the M1M2 = 16 different sweeping beams. Different error bounds (i.e., CRBs, LBs, ALBs from Table I) are utilized as localization performance metrics.
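The sweeping grid ¯ωn,m above can be generated as follows (a direct transcription of the formula; the prior value 0.4 in the example is arbitrary):

```python
import numpy as np

def sweep_grid(omega_n, Mn, d_omega=0.15, delta=0.05):
    """Spatial-frequency grid: omega_n + delta + (2m - Mn - 1)/2 * d_omega for m = 1..Mn."""
    m = np.arange(1, Mn + 1)
    return omega_n + delta + (2 * m - Mn - 1) / 2 * d_omega

print(sweep_grid(0.4, 4))   # 4 beams spanning (Mn - 1) * d_omega around the biased prior
```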
B. The Effect of HWIs on Communication

1) The Effect of HWIs on SER: We approximate the effect of HWIs on communication as random noise and evaluate the effect on the SER based on numerical and analytical results8. Considering that the effects of some HWIs depend on the amplitude of the symbol (e.g., PAN), we also obtain the minimum and maximum noise levels across different symbols to evaluate the lower bound and upper bound of the SER. The SERs of 16-QAM with different transmit powers for different HWI coefficients are visualized in Fig. 2, where the black solid curve is the benchmark SER without HWIs. By default, cHWI = 1, and the HWI level is the same as the parameters in Table II. A value of cHWI = 10 indicates that the standard deviations (e.g., σPN, σCFO) of all the impairments (except for the PAN) are multiplied by 10. We can see from the figure that the analytical SERs with approximated noise levels (red, blue, and green markers) are close to the numerical SERs (solid red, blue and green curves), and both are within the lower and upper bounds (shaded areas). We can also see from Fig. 2 that the selected impairment level (i.e., cHWI = 1) has limited effects on communication. However, we will show that the localization performance is affected by the same level of HWIs in Sec. V-C.

8 The SER of M-QAM can be calculated as SERM = 1 − (1 − (2(√M − 1)/√M) Q(√(3 SNR/(M − 1))))² [53, (6.23)], where Q(·) is the Q-function and SNR is the effective SNR considering both the approximated HWI noise and the background noise.

TABLE II
DEFAULT SIMULATION PARAMETERS

Parameters            | True Model                                    | Mismatched Model
BS Positions          | pB,1 = [0, 0, 3]⊤, pB,2 = [0, 5, 3]⊤          | (same)
BS Orientations       | oB,1 = [0◦, 15◦, 0◦]⊤, oB,2 = [−30◦, 15◦, 0◦]⊤ | (same)
BS Antennas           | NB,1 = NB,2 = 8 × 8                           | (same)
UE Position           | pU = [4, 2, 1.5]⊤                             | (same)
UE Orientation        | oU = [180◦, 0◦, 0◦]⊤                          | (same)
UE Antennas           | NU = 4 × 4                                    | (same)
RFC at BS/UE          | 1                                             | (same)
Carrier Frequency     | fc = 140 GHz                                  | (same)
Bandwidth             | W = 1 GHz                                     | (same)
Transmissions         | G = 4 × 4 × 3 × 3 = 144                       | (same)
Subcarriers (Total)   | Kcom = 1040 (∆f = 960 kHz)                    | (same)
Subcarriers (Pilots)  | K = 100                                       | (same)
Length of the CP      | Kcp = 7                                       | (same)
Load Impedance        | R = 50 Ω                                      | (same)
Noise PSD             | N0 = −173.855 dBm/Hz                          | (same)
Noise Figure          | 10 dB                                         | (same)
Phase Noise           | σPN = 2.5◦                                    | σPN = 0◦
Carrier Freq. Offset  | σCFO = 5e−4 (0.036 ppm)                       | σCFO = 0
Mutual Coupling       | σMC = 0.002                                   | σMC = 0
Power Amplifier       | β1 = 0.9798+0.0286j, β2 = 0.0122−0.0043j,     | n/a
                      | β3 = −0.0007+0.0001j                          |
PA Clipping Voltage   | xclip = 1 V                                   | n/a
Array Gain Error      | σGA = σGP = 0.002                             | σGA = σGP = 0
Antenna Disp. Error   | σAD = 5 µm (2.3e−3 λ)                         | σAD = 0
IQ Imbalance          | σIA = σIP = 0.02                              | σIA = σIP = 0
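As an aside (not from the paper), the M-QAM SER expression in footnote 8 can be evaluated numerically as follows; the 15 dB effective SNR in the example is arbitrary:

```python
import numpy as np
from scipy.stats import norm

def ser_mqam(snr, M=16):
    """SER of square M-QAM (footnote 8); Q(x) is evaluated as the Gaussian tail norm.sf(x)."""
    q = norm.sf(np.sqrt(3 * snr / (M - 1)))
    return 1 - (1 - 2 * (np.sqrt(M) - 1) / np.sqrt(M) * q) ** 2

print(ser_mqam(10 ** (15 / 10)))   # effective SNR of 15 dB
```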
[Fig. 2 here: SER of 16-QAM vs. transmit power P (dBm) for cHWI ∈ {0.1, 1, 2}, comparing the analytical no-HWI benchmark, numerical HWI results, and analytical results with approximated HWI noise.]

Fig. 2. The effect of different HWI levels on SER. Numerical results for 100 realizations and analytical results calculated with the approximated equivalent HWI noise. The boundaries of the shaded areas indicate the upper and lower bounds of the SER.
bounds for SER.
|
| 1391 |
+
2) The Effect of Individual HWIs on SER: We are also
|
| 1392 |
+
interested in the effect of individual HWIs on communication.
|
| 1393 |
+
By considering PN, CFO, PAN, and IQI one by one, the
|
| 1394 |
+
SERs under HWI are shown in Fig. 3. Benchmarked by
|
| 1395 |
+
−10
|
| 1396 |
+
−5
|
| 1397 |
+
0
|
| 1398 |
+
5
|
| 1399 |
+
10
|
| 1400 |
+
15
|
| 1401 |
+
10−7
|
| 1402 |
+
10−5
|
| 1403 |
+
10−3
|
| 1404 |
+
10−1
|
| 1405 |
+
P [dBm]
|
| 1406 |
+
SER (16QAM)
|
| 1407 |
+
PN
|
| 1408 |
+
PAN
|
| 1409 |
+
CFO
|
| 1410 |
+
IQI
|
| 1411 |
+
MC+AGE+ADE
|
| 1412 |
+
Without HWI
|
| 1413 |
+
−10
|
| 1414 |
+
−5
|
| 1415 |
+
0
|
| 1416 |
+
5
|
| 1417 |
+
10
|
| 1418 |
+
15
|
| 1419 |
+
10−7
|
| 1420 |
+
10−5
|
| 1421 |
+
10−3
|
| 1422 |
+
10−1
|
| 1423 |
+
P [dBm]
|
| 1424 |
+
SER (16QAM)
|
| 1425 |
+
PN
|
| 1426 |
+
PAN
|
| 1427 |
+
CFO
|
| 1428 |
+
IQI
|
| 1429 |
+
MC+AGE+ADE
|
| 1430 |
+
Without HWI
|
| 1431 |
+
Fig. 3. The effect of individual HWIs on SER using approximated equivalent
|
| 1432 |
+
HWI noise. Under current simulation parameters, the PN, PAN, CFO and IQI
|
| 1433 |
+
increase the SER, whereas the MC, AGE and ADE have negligible effects on
|
| 1434 |
+
communication.
|
| 1435 |
+
the solid black curve without HWIs, these factors degrade
|
| 1436 |
+
SERs. We also performed simulations by including MC, AGE,
|
| 1437 |
+
ADE at the same time, as shown in the dashed curve with
|
| 1438 |
+
cross markers, and found their effects on communication are
|
| 1439 |
+
negligible under the current simulation setup.
|
| 1440 |
+
3) Insights into the Impact of HWI on Communication: To gain further insight into the effect of HWI on communication, we separate the overall system noise into equivalent HWI noise and background noise. We can see from Fig. 4 that the equivalent HWI noise is determined by the HWI level and has an approximately linear relationship with the transmit power (when working within the linear region of the PA). In addition to the fixed background noise, the overall equivalent noise level keeps increasing and is dominated by the HWIs at high transmit power.
[Fig. 4 here: equivalent noise level (dBm) vs. transmit power P (dBm), showing the overall noise and the HWI noise for cHWI ∈ {0.1, 1, 2} together with the background noise.]

Fig. 4. Visualization of the overall system noise, equivalent HWI noise, and background noise with different transmit powers P. The background noise has a large effect on communication at low transmit power, whereas the HWIs contribute more at high transmit power.
C. The Effect of HWIs on Localization

Before analyzing the HWIs in detail, we first establish the validity of the derived bounds by comparing them against the performance of practical algorithms.

1) Channel Estimation Results: For convenient analysis, we adopt one specific realization of the HWIs for the system. The results of channel parameter estimation using ESPRIT (circle, square, and diamond markers) and MMLE (solid curves) are shown in Fig. 5. The estimators are benchmarked by the CRBs of the ideal/mismatched model (CRB-MM, dashed curves) and the LB using a mismatched model (dotted curves with cross markers). Note that the average transmit power P is calculated without considering the nonlinearity of the power amplifier (calculated before the PA). When the transmit power P is low, the LB is determined by the MCRB (since the bias part is constant, see (55)) and has a similar performance as the CRBs. This indicates that at low transmit power, the mismatched model will not significantly affect the performance, as the expected accuracy is low and limited by the noise. With the increase of transmit power, the contribution of the MCRB decreases due to an increased SNR, and eventually, the mismatched localization is lower bounded by the absolute lower bound (ALB) (the bias part in (55)). This indicates that the localization performance can no longer be improved by increasing the transmit power, which cannot be ignored in scenarios requiring high-accuracy localization performance9. Regarding the estimators, the ESPRIT (using a mismatched model) provides low-complexity results with limited performance in delay estimation. However, the refined results using MMLE can reach the LB (solid curves align well with the dotted curve).

[Fig. 5 here: AOA (deg), AOD (deg) and delay (m) estimation errors vs. P (dBm) for ESPRIT and MMLE, against the bounds AAEB/ADEB/DEB and AALB/ADLB/DLB.]

Fig. 5. Comparison between channel parameter estimation results (ESPRIT and MMLE) and different lower bounds (CRB of the MM and the LB of the mismatched estimator) in terms of AOA, AOD and delay. Due to the HWIs, the performance starts to saturate when the transmit power exceeds 30 dBm. Although the performance of the coarse estimation using ESPRIT (using a mismatched model) may not attain the theoretical bounds (especially for delay estimation), the refined results using MMLE can reach the LB (solid curves align well with the cross-marked dotted curve).

[Fig. 6 here: position (m), orientation (deg) and clock offset (m) estimation errors vs. P (dBm) for LS and MMLE, against the bounds PEB/OEB/CEB and PALB/OALB/CALB.]

Fig. 6. Comparison between localization results (position, orientation, and clock offset estimation) and different lower bounds (CRB of the MM and the LB of the mismatched estimator). We notice that the LS estimators are sufficient for this 2-BS scenario, and the refined results using MMLE attain the ALBs.

2) Localization Results: Based on the estimated channel parameters, we are able to estimate the UE position and orientation. Similar to the channel estimation results, two estimators (LS and MMLE) and two bounds (CRB and LB) are evaluated. The results for localization are shown in Fig. 6. From the figure, we can see that at low transmit powers, the LB and CRBs coincide, implying that the HWIs are not the main source of error. At higher transmit powers (10 dBm for
|
| 1580 |
+
main source of error. At higher transmit powers (10 dBm for
|
| 1581 |
+
OEB, and 20 dBm for PEB), LB deviates from the CRBs, and
|
| 1582 |
+
the positioning performance is thus more severely affected by
|
| 1583 |
+
HWIs. The MMLE in high SNR is close to the ALB, indicating
|
| 1584 |
+
the validity of the MCRB analysis.
|
| 1585 |
+
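For intuition on step 2), a minimal least-squares localizer for a two-BS, 2D scenario can be written in a few lines: the position follows from intersecting the two AOA bearings in the LS sense, and the clock offset follows from the mean TOA residual. The BS positions and measurements below are made-up numbers, and the paper's actual LS/MMLE estimators (including orientation estimation) are more involved.

```python
import numpy as np

# Known BS positions (assumed 2D scenario for illustration)
bs = np.array([[0.0, 0.0], [40.0, 0.0]])
aoa = np.deg2rad(np.array([35.0, 120.0]))   # measured AOAs at the two BSs
toa_m = np.array([34.5, 25.1])              # measured delays expressed in meters

# Each AOA defines a bearing line from its BS; the LS intersection solves
# n_i . p = n_i . bs_i for the line normals n_i
A, b = [], []
for p_bs, th in zip(bs, aoa):
    n = np.array([-np.sin(th), np.cos(th)])  # normal to the bearing direction
    A.append(n)
    b.append(n @ p_bs)
p_hat, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

# With a common clock offset B (in meters), TOA_i = ||p - bs_i|| + B;
# given p_hat, the LS estimate of B is the mean TOA residual
ranges = np.linalg.norm(p_hat - bs, axis=1)
clock_hat = np.mean(toa_m - ranges)
print("position:", p_hat, " clock offset [m]:", clock_hat)
```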
[Fig. 7 plots, panels (a)-(f): channel-parameter lower bounds (AALB, ADLB, DLB; average and multiple realizations) and error bounds (AAEB, ADEB, DEB) versus transmit power P [dBm] from 0 to 40 dBm, for (a) PN, (b) CFO, (c) MC, (d) AGE, (e) ADE, and (f) IQI.]
Fig. 7. LBs of channel parameter estimation under different types of impairment with multiple realizations: (a) phase noise, (b) carrier frequency offset, (c) mutual coupling, (d) array gain error, (e) antenna displacement error, (f) IQ imbalance.
Now that the validity of the bounds has been established, we rely solely on the bounds to evaluate the effect of HWIs on localization. First, the impairments are studied individually, then the impact of the waveform type is evaluated, and finally, the impairment levels are varied.
9Note that the analysis here assumes the same level of residual noise (e.g., PN, CFO, IQI). In practice, the impairment levels depend on the specific HWI calibration algorithms and the transmit power.
3) The Effect of Individual Impairments: To understand the effect of different types of HWIs, we study the LB for AOA, AOD, and delay estimation by considering one type of HWI at a time. The results are shown in Fig. 7 for (a) PN, (b) CFO, (c) MC, (d) AGE, (e) ADE, and (f) IQI. The effect of the PA is discussed separately in Sec. V-C4. Since we define the HWIs as random variables with a fixed impairment level as shown in Table II, we perform multiple hardware realizations with a fixed pilot signal and plot all the resulting LBs in the shaded regions. We can see that different types of HWIs affect angle and delay estimation differently. The PN, CFO, and IQI introduce noise on the symbols across different subcarriers and hence affect delay estimation10. Since the phase change introduced by the CFO affects the phase changes across beams, angle estimation is also affected. Instead of affecting the phase changes between different subcarriers, the MC, AGE, and ADE distort the steering vectors and therefore have a more significant effect on angle estimation. For all the HWIs, the negative effect on the performance appears when the transmit power is high.
One notable observation is that the effect of the CFO on the AOA is less pronounced than on the AOD in Fig. 7 (b). This is because the sweeping strategy is 'BS-first'. For a system with analog arrays, the estimation of AOA/AOD relies on phase shifts across consecutive beams over time, meaning the angle cannot be estimated from a single receive beam as in a digital array. If the BS sweeps across different beams while the UE uses a fixed beam, the AOA can be estimated within one BS sweep, and the effect of the CFO is minor. However, the AOD estimation requires multiple BS sweeps, which increases the effect of the CFO. To verify this explanation, we further changed the sweeping strategy from 'BS-first' to 'UE-first', and the results for different array sizes can be found in Fig. 8. We can see that the AOA is less affected if the sweeping is 'BS-first' (blue curves in (a)), as shown in (12). Similarly, the AOD is less affected if the sweeping is 'UE-first' (dashed red curves in (b)) with a large UE array. However, when the array size is small, the sweeping order has less impact (i.e., the gaps are small between the dashed curves in (a) and the solid curves in (b)).
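The sweeping-order argument can be checked with a toy phase schedule: a constant CFO adds a phase ramp over the transmission index, and the sweeping order decides whether consecutive BS beams or consecutive UE beams see that ramp. The beam counts and the per-transmission phase increment below are arbitrary illustrative values.

```python
import numpy as np

M_bs, M_ue = 8, 4          # number of BS and UE beams in a sweep
eps = 0.01                 # assumed CFO-induced phase increment per transmission [rad]

def cfo_phase(order):
    """CFO phase offset (rad) at each (BS beam, UE beam) pair for a given order."""
    phase = np.zeros((M_bs, M_ue))
    t = 0
    if order == "bs_first":      # BS sweeps all its beams while the UE beam is fixed
        for u in range(M_ue):
            for b in range(M_bs):
                phase[b, u] = eps * t
                t += 1
    else:                        # 'ue_first': UE sweeps all its beams first
        for b in range(M_bs):
            for u in range(M_ue):
                phase[b, u] = eps * t
                t += 1
    return phase

for order in ("bs_first", "ue_first"):
    ph = cfo_phase(order)
    # Drift across BS beams corrupts AOA; drift across UE beams corrupts AOD
    spread_bs = np.ptp(ph, axis=0).mean()
    spread_ue = np.ptp(ph, axis=1).mean()
    print(f"{order}: CFO drift across BS beams {spread_bs:.3f} rad, "
          f"across UE beams {spread_ue:.3f} rad")
```

With 'bs_first', the drift across consecutive BS beams is only eps per step, while the drift across UE beams jumps by M_bs*eps, mirroring why AOA is protected and AOD suffers (and vice versa for 'ue_first').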
4) The Effect of PA with Different Pilot Signals: A high peak-to-average-power ratio (PAPR) is one of the critical issues in implementing OFDM signals, and a promising alternative is to use DFT-S-OFDM [54]. When the transmit power increases, PAN is more likely to occur, as can be seen in Fig. 9 (a). The delay estimation suffers more from the nonlinear distortion because the clipping of the transmit signal distorts the uniformity of the phase changes across the subcarriers. The effect on angle estimation is less pronounced (at the same transmit power) since different antenna elements experience similar distortions with the identical PAs adopted in this work. We compare using random OFDM symbols and the FFT version of the benchmark symbols (a special case of DFT-S-OFDM obtained by choosing an identity mapping matrix [54]), and the results are shown in Fig. 9. Due to the reduced PAPR of DFT-S-OFDM, the localization performance can be improved, as shown in Fig. 9 (b).
10If multiple RFCs or several local oscillators are adopted in the array, the PN may have a larger effect on angle estimation.

[Fig. 8 plots: average angle error [°] versus transmit power P [dBm] for (a) AALB and (b) ADLB, with curves for BS 8x8/UE 4x4 and BS 4x4/UE 8x8 under 'BS first' and 'UE first' sweeping.]
Fig. 8. The effect of CFO on the channel geometric parameters with different sweeping strategies. The 'BS first' strategy (blue curves) works better for AOA estimation, while the 'UE first' strategy (red curves) works better for AOD estimation.

[Fig. 9 plots: channel-parameter bounds (AALB, ADLB, DLB; average and multiple realizations; AAEB, ADEB, DEB) versus transmit power P [dBm] from 20 to 60 dBm for (a) OFDM and (b) DFT-S-OFDM.]
Fig. 9. The effect of the PA on channel parameter estimation using (a) OFDM and (b) DFT-S-OFDM.
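A quick way to see why DFT-S-OFDM helps here is to compare the PAPR of the two time-domain waveforms. The sketch below generates random 16QAM subcarrier symbols, optionally applies the DFT spreading (identity mapping, full allocation), and measures the PAPR of the resulting OFDM symbol; the subcarrier count and constellation are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                   # number of subcarriers (illustrative)
levels = np.array([-3, -1, 1, 3]) / np.sqrt(10)

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def one_symbol(dft_spread):
    s = rng.choice(levels, N) + 1j * rng.choice(levels, N)  # 16QAM symbols
    if dft_spread:
        s = np.fft.fft(s) / np.sqrt(N)    # DFT precoding (identity mapping)
    return np.fft.ifft(s) * np.sqrt(N)    # OFDM modulation

for tag, spread in (("OFDM", False), ("DFT-S-OFDM", True)):
    p = [papr_db(one_symbol(spread)) for _ in range(2000)]
    print(f"{tag:>11}: mean PAPR {np.mean(p):.1f} dB, "
          f"99th pct {np.percentile(p, 99):.1f} dB")
```

With full allocation and identity mapping, the DFT precoder and the IFFT cancel, so the transmitted samples are the QAM symbols themselves and the PAPR collapses to the constellation's own peak-to-average ratio (about 2.6 dB for 16QAM), versus roughly 10-12 dB for plain OFDM.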
5) Evaluation of HWIs with Different Impairment Levels: We further evaluate the position and orientation ALBs with different levels of HWIs by defining an HWI coefficient cHWI. For different values of cHWI, the position ALB and orientation ALB, considering all the HWIs as well as individual HWIs, are shown in Fig. 10 (a) and (b). All the results indicate the 75th percentile of 100 realizations. We notice that the effect of PN, MC, AGE, ADE, and IQI on localization increases approximately linearly with the impairment level. The CFO has a larger effect at high impairment levels, as the error residue accumulates over time. Based on Fig. 10, we can quantify the contribution of individual HWIs (e.g., if the ALBs are much smaller than the current CRB, the negative contribution of the HWI to localization is negligible). In addition, we can identify the dominant impairment factors for further compensation (e.g., ADE is one of the dominant factors under the current system parameters).
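Procedurally, this evaluation is a sweep over cHWI with a percentile taken over hardware realizations. The sketch below shows only that scaffolding; `alb_position` is a hypothetical stand-in returning a synthetic bias, not the paper's actual MCRB/ALB computation.

```python
import numpy as np

rng = np.random.default_rng(1)

def alb_position(c_hwi, realization_rng):
    """Placeholder for the position ALB under one hardware realization;
    here a synthetic bias proportional to c_hwi times an impairment draw."""
    return c_hwi * np.abs(realization_rng.normal(0.05, 0.02))

c_grid = 10 ** np.linspace(-1, 1, 5)       # cHWI swept from 0.1 to 10
for c in c_grid:
    albs = [alb_position(c, rng) for _ in range(100)]   # 100 realizations
    print(f"cHWI = {c:5.2f}: 75th-percentile PALB = {np.percentile(albs, 75):.4f} m")
```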
[Fig. 10 plots: (a) PALB [m] and (b) OALB versus 10log(cHWI) from -1 to 1, with curves for ALL, PN, CFO, MC, AGE, ADE, and IQI.]
Fig. 10. An example of the ALB with different levels of impairment: (a) PALB, (b) OALB. The ALBs of the position and orientation affected by the HWIs increase with cHWI (which reflects the impairment level).
D. Summary
From the simulations, we found that the HWIs affect both localization and communication, especially at high transmit power. The equivalent noise is mainly contributed by the HWIs for communication, and the localization performance saturates due to model mismatch. However, different types of HWIs affect localization and communication differently. The effect of each individual impairment on angle/delay estimation and communication (i.e., SER) is summarized in Table III, with two levels of impact, H/L, denoting High/Low. Note that in this uplink scenario, the position estimation is mainly affected by the AOA and TOA information, while the orientation estimation is mainly affected by the AOD.
As for the angle estimation for localization, the performance is strongly affected by the CFO, MC, AGE, and ADE. The TOA, in turn, is mainly affected by the PN, CFO, and IQI. Since communication does not exploit the phase relationship between antennas (e.g., no sweeping is needed once the communication link is established), the SER is affected by the same factors as delay estimation, as verified in Fig. 7. It should be noted that the effect of the CFO on AOA and AOD estimation depends on the sweeping order and the number of transmissions, while the effect of the PA depends on the transmit power and the nonlinear region of the amplifier.
VI. CONCLUSION
As the requirements on localization and communication performance become more stringent to support new applications, HWIs become a prominent factor affecting the performance of 6G systems. We have modeled different types of HWIs and utilized the MCRB to evaluate the localization error caused by model mismatch. The effects of HWIs on angle/delay and position/orientation estimation were evaluated. We found that PN and IQI have a stronger effect on delay estimation, while MC, AGE, and ADE have a more significant effect on angle estimation. The CFO and PAN affect both angle and delay, where the former depends on the sweeping strategy and the number of transmissions, and the latter is determined by the transmit power (or amplitude) of the signals. Furthermore, we evaluated the effect of individual HWIs on the communication performance in terms of SER. The dominant impairments that degrade the SER (i.e., PN, CFO, PA, and IQI) are in good agreement with the factors that affect delay estimation.
In summary, the localization and communication performance that improves with transmit power under an ideal model will saturate due to the effect of HWIs. To fully realize the potential of a 6G joint localization and communication system, dedicated pilot signal designs and algorithms for estimating and mitigating HWIs are needed. Further work can consider the effect of HWIs in multipath and reconfigurable intelligent surface-aided scenarios, as well as learning-based methods for mismatch mitigation.

TABLE III
SUMMARY OF THE EFFECTS OF HWIS ON LOCALIZATION AND COMMUNICATION

Type of HWI                    | AOD | AOA | TOA | SER
Phase Noise                    |  L  |  L  |  H  |  H
Carrier Frequency Offset       |  H* |  H* |  H  |  H
Mutual Coupling                |  H  |  H  |  L  |  L
Power Amplifier Nonlinearity   |  H* |  H* |  H* |  H*
Array Gain Error               |  H  |  H  |  L  |  L
Antenna Displacement Error     |  H  |  H  |  L  |  L
IQ Imbalance                   |  L  |  L  |  H  |  H

*The effect of CFO on angle estimation depends on the sweeping order and the number of transmissions. The effect of PAN depends on the transmit power and the nonlinear region of the amplifier.
REFERENCES
[1] H. Chen et al., "A tutorial on terahertz-band localization for 6G communication systems," IEEE Communications Surveys & Tutorials, May 2022.
[2] H. Wymeersch et al., "6G radio requirements to support integrated communication, localization, and sensing," 2022.
[3] R. Di Taranto et al., "Location-aware communications for 5G networks: How location information can improve scalability, latency, and robustness of 5G," IEEE Signal Process. Mag., vol. 31, no. 6, pp. 102–112, Oct. 2014.
[4] G. Kwon et al., "Joint communication and localization in millimeter wave networks," IEEE J. Sel. Topics Signal Process., Sep. 2021.
[5] Z. Xiao et al., "An overview on integrated localization and communication towards 6G," Sci. China Inf. Sciences, vol. 65, no. 3, pp. 1–46, Mar. 2022.
[6] A. Behravan et al., "Positioning and sensing in 6G: Gaps, challenges, and opportunities," IEEE Veh. Technol. Mag., 2022.
[7] F. Wen et al., "5G positioning and mapping with diffuse multipath," IEEE Trans. Commun., vol. 20, no. 2, pp. 1164–1174, Oct. 2020.
[8] Y. Ge et al., "5G SLAM using the clustering and assignment approach with diffuse multipath," Sensors, vol. 20, no. 16, p. 4656, Jan. 2020.
[9] M. A. Nazari et al., "3D orientation estimation with multiple 5G mmWave base stations," in Proc. IEEE Int. Conf. Commun., Jun. 2021.
[10] Y. Han et al., "Performance limits and geometric properties of array localization," IEEE Trans. Inf. Theory, vol. 62, no. 2, pp. 1054–1075, Dec. 2015.
[11] Z. Abu-Shaban et al., "Error bounds for uplink and downlink 3D localization in 5G millimeter wave systems," IEEE Trans. Wireless Commun., vol. 17, no. 8, pp. 4939–4954, May 2018.
[12] M. A. Nazari et al., "Mmwave 6D radio localization with a snapshot observation from a single BS," arXiv preprint arXiv:2204.05189, 2022.
[13] F. Wen et al., "Tensor decomposition based beamspace ESPRIT for millimeter wave MIMO channel estimation," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2018.
[14] F. Jiang et al., "Beamspace multidimensional ESPRIT approaches for simultaneous localization and communications," arXiv preprint arXiv:2111.07450, 2021.
[15] A. Elzanaty et al., "Reconfigurable intelligent surfaces for localization: Position and orientation error bounds," IEEE Trans. Signal Process., vol. 69, pp. 5386–5402, Aug. 2021.
[16] T. Schenk, RF Imperfections in High-Rate Wireless Systems: Impact and Digital Compensation. Springer Science & Business Media, 2008.
[17] S. Jacobsson et al., "Massive MU-MIMO-OFDM uplink with hardware impairments: Modeling and analysis," in Proc. 52nd Asilomar Conf. Signals, Syst., Comput., Oct. 2018, pp. 1829–1835.
[18] O. Kolawole et al., "Impact of hardware impairments on mmwave MIMO systems with hybrid precoding," in Proc. IEEE Wireless Commun. Netw. Conf., Apr. 2018.
[19] H. Shen et al., "Beamforming optimization for IRS-aided communications with transceiver hardware impairments," IEEE Trans. Commun., vol. 69, no. 2, pp. 1214–1227, Oct. 2020.
[20] N. Ginige et al., "Untrained DNN for channel estimation of RIS-assisted multi-user OFDM system with hardware impairments," in Proc. IEEE 32nd Annu. Int. Symp. Pers. Indoor Mobile Radio Commun., Sep. 2021, pp. 561–566.
[21] Y. Wu et al., "Efficient channel estimation for mmwave MIMO with transceiver hardware impairments," IEEE Trans. Veh. Technol., vol. 68, no. 10, pp. 9883–9895, Aug. 2019.
[22] T. Yassine et al., "mpNet: Variable depth unfolded neural network for massive MIMO channel estimation," IEEE Trans. Wireless Commun., 2022.
[23] H. C. Yildirim et al., "Impact of phase noise on mutual interference of FMCW and PMCW automotive radars," in Proc. 16th Eur. Radar Conf., Oct. 2019, pp. 181–184.
[24] M. Gerstmair et al., "On the safe road toward autonomous driving: Phase noise monitoring in radar sensors for functional safety compliance," IEEE Signal Process. Mag., vol. 36, no. 5, pp. 60–70, Sep. 2019.
[25] K. Siddiq et al., "Phase noise in FMCW radar systems," IEEE Trans. Aerosp. Electron. Syst., vol. 55, no. 1, pp. 70–81, Feb. 2019.
[26] Z. Ye et al., "DOA estimation for uniform linear array with mutual coupling," IEEE Trans. Aerosp. Electron. Syst., vol. 45, no. 1, pp. 280–288, Mar. 2009.
[27] F. Ghaseminajm et al., "Localization error bounds for 5G mmwave systems under I/Q imbalance," IEEE Trans. Veh. Technol., vol. 69, no. 7, pp. 7971–7975, Apr. 2020.
[28] F. Bozorgi et al., "RF front-end challenges for joint communication and radar sensing," in Proc. 1st IEEE Int. Online Symp. Joint Commun. Sens., Feb. 2021.
[29] D. A. Tubail et al., "Error bounds for 3D localization and maximum likelihood estimation of mm-Wave MISO OFDM systems in the presence of hardware impairments," IEEE Commun. Lett., vol. 26, no. 9, pp. 2042–2046, Jun. 2022.
[30] B. Ceniklioglu et al., "Error analysis of the joint localization and synchronization of RIS-assisted mm-Wave MISO-OFDM under the effect of hardware impairments," IEEE Open J. Commun. Soc., Aug. 2022.
[31] H. Chen et al., "MCRB-based performance analysis of 6G localization under hardware impairments," in Proc. IEEE Int. Conf. Commun. (ICC) Workshops, May 2022.
[32] J. M. Mateos-Ramos et al., "End-to-end learning for integrated sensing and communication," in Proc. IEEE Int. Conf. Commun. (ICC), May 2022, pp. 1942–1947.
[33] K. Sankhe et al., "No radio left behind: Radio fingerprinting through deep learning of physical-layer hardware impairments," IEEE Trans. Cogn. Commun. Netw., vol. 6, no. 1, pp. 165–178, Oct. 2019.
[34] C. D. Richmond et al., "Parameter bounds on estimation accuracy under model misspecification," IEEE Trans. Signal Process., vol. 63, no. 9, pp. 2263–2278, Mar. 2015.
[35] S. Fortunati et al., "Performance bounds for parameter estimation under misspecified models: Fundamental findings and applications," IEEE Signal Process. Mag., vol. 34, no. 6, pp. 142–157, Nov. 2017.
[36] C. Ozturk et al., "RIS-aided near-field localization under phase-dependent amplitude variations," arXiv preprint arXiv:2204.12783, 2022.
[37] A. Mohammadian et al., "RF impairments in wireless transceivers: Phase noise, CFO, and IQ imbalance – A survey," IEEE Access, vol. 9, pp. 111718–111791, Aug. 2021.
[38] N. Hajiabdolrahim et al., "An extended Kalman filter framework for joint phase noise, CFO and sampling time error estimation," in Proc. 31st Annu. Int. Symp. Pers. Indoor Mobile Radio Commun., Aug. 2020.
[39] D. D. Lin et al., "Joint estimation of channel response, frequency offset, and phase noise in OFDM," IEEE Trans. Signal Process., vol. 54, no. 9, pp. 3542–3554, Aug. 2006.
[40] T. Roman et al., "Blind frequency synchronization in OFDM via diagonality criterion," IEEE Trans. Signal Process., vol. 54, no. 8, pp. 3125–3135, Jul. 2006.
[41] O. H. Salim et al., "Channel, phase noise, and frequency offset in OFDM systems: Joint estimation, data detection, and hybrid Cramér-Rao lower bound," IEEE Trans. Commun., vol. 62, no. 9, pp. 3311–3325, Jul. 2014.
[42] M. Chung et al., "Phase-noise compensation for OFDM systems exploiting coherence bandwidth: Modeling, algorithms, and analysis," IEEE Trans. Wireless Commun., Oct. 2021.
[43] Z. Ye et al., "2-D DOA estimation in the presence of mutual coupling," IEEE Trans. Antennas Propag., vol. 56, no. 10, pp. 3150–3158, Sep. 2008.
[44] M. H. Moghaddam et al., "Statistical modeling and analysis of power amplifier nonlinearities in communication systems," IEEE Trans. Commun., vol. 70, no. 2, pp. 822–835, Dec. 2021.
[45] A. J. van den Biggelaar et al., "Improved statistical model on the effect of random errors in the phase and amplitude of element excitations on the array radiation pattern," IEEE Trans. Antennas Propag., vol. 66, no. 5, pp. 2309–2317, Jan. 2018.
[46] A. Tarighat et al., "Joint compensation of transmitter and receiver impairments in OFDM systems," IEEE Trans. Wireless Commun., vol. 6, no. 1, pp. 240–247, Feb. 2007.
[47] B. Narasimhan et al., "Digital compensation of frequency-dependent joint Tx/Rx I/Q imbalance in OFDM systems under high mobility," IEEE J. Sel. Topics Signal Process., vol. 3, no. 3, pp. 405–417, May 2009.
[48] H. Minn et al., "Pilot designs for channel estimation of MIMO OFDM systems with frequency-dependent I/Q imbalances," IEEE Trans. Commun., vol. 58, no. 8, pp. 2252–2264, Aug. 2010.
[49] P. Zheng et al., "Coverage analysis of joint localization and communication in THz systems with 3D arrays," TechRxiv, 2022.
[50] N. Boumal et al., "Manopt, a Matlab toolbox for optimization on manifolds," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1455–1459, Jan. 2014.
[51] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Inc., 1993.
[52] S. Fortunati et al., "The constrained misspecified Cramér–Rao bound," IEEE Signal Process. Lett., vol. 23, no. 5, pp. 718–721, Mar. 2016.
[53] A. Goldsmith, Wireless Communications. Cambridge University Press, Aug. 2005.
[54] G. Berardinelli, "Generalized DFT-s-OFDM waveforms without cyclic prefix," IEEE Access, vol. 6, pp. 4677–4689, Dec. 2017.
F9AzT4oBgHgl3EQfHPuf/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

F9E1T4oBgHgl3EQfFAMn/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03861c0d2de31c63745cf65bd134785d5d5ff0d7efd830ba5eb5e1fb4167ec99
+size 1769517

F9E1T4oBgHgl3EQfFAMn/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fb30e84bbec8a6069d2c50b5566efe12f4fd998f752af94a5d3f2b01b0d9765
+size 72619

FNAzT4oBgHgl3EQfw_7Q/content/tmp_files/2301.01732v1.pdf.txt ADDED
@@ -0,0 +1,901 @@
arXiv:2301.01732v1 [eess.IV] 4 Jan 2023

UNAEN: Unsupervised Abnormality Extraction Network for MRI Motion Artifact Reduction
Yusheng Zhou, Hao Li, Jianan Liu, Zhengmin Kong, Tao Huang, Euijoon Ahn, and Zhihan Lv

Abstract—Motion artifact reduction is one of the most pressing problems in magnetic resonance imaging (MRI). As a promising solution, deep learning-based methods have been widely investigated for artifact reduction tasks in MRI. As a retrospective processing approach, a neural network costs no additional acquisition time, requires no new acquisition equipment, and tends to outperform traditional artifact reduction methods. In previous studies, training such models required paired motion-corrupted and motion-free MR images. However, obtaining such pairs is extremely difficult or even impossible in practice, because patients can hardly maintain the same state during two acquisitions, which makes supervised training impractical. In this work, we propose a new unsupervised abnormality extraction network (UNAEN) to alleviate this problem. Our network realizes the transition from the artifact domain to the motion-free domain by processing the abnormal information introduced by artifacts in unpaired MR images. Instead of directly generating artifact-reduced results from motion-corrupted MR images, we adopt an abnormality extraction strategy that indirectly corrects the impact of artifacts in MR images by learning deep features. Experimental results show that our method is superior to state-of-the-art networks and can potentially be applied in real clinical settings.

Index Terms—Magnetic Resonance Imaging, Motion Artifact Reduction, Unsupervised Learning.

Yusheng Zhou and Hao Li contributed equally to this work and are co-first authors. Zhengmin Kong is the corresponding author. Yusheng Zhou and Zhengmin Kong are with the School of Electrical Engineering and Automation, Wuhan University, China. Hao Li is with the Department of Neuroradiology, University Hospital Heidelberg, Heidelberg, Germany. Jianan Liu is with Vitalent Consulting, Gothenburg, Sweden (Email: jianan.liu@vitalent.se). Tao Huang and Euijoon Ahn are with the College of Science and Engineering, James Cook University, Cairns, Australia (Email: tao.huang1@jcu.edu.au; euijoon.ahn@jcu.edu.au). Zhihan Lv is with the Department of Game Design, Faculty of Arts, Uppsala University, Sweden (Email: lvzhihan@gmail.com).

I. INTRODUCTION
MAGNETIC resonance imaging (MRI) is a non-invasive medical imaging technique used in the diagnosis of various diseases without radiation exposure. However, due to the long acquisition time, MRI is sensitive to the patient's movement [1], and incorrect K-space signal filling causes blurring or ghosting artifacts, which in turn affect the patient's diagnosis. To solve motion-related problems, researchers have proposed a variety of methods to prevent movement or correct artifacts [2]–[6]. An effective approach is to introduce new equipment to accelerate the acquisition and to compensate or reacquire the K-space data partially in a prospective manner. Although this can significantly prevent the appearance of motion artifacts, it has not been widely applied due to its high cost. Therefore, compared with high-cost prospective methods, retrospective artifact removal is still the main research direction at present.
In recent years, supervised deep learning-based artifact reduction techniques have been proposed to address the artifact problem in MRI [7]–[9]. They do not increase the scanning time and require no additional acquisition equipment. A large number of training samples are used to train the neural networks: motion-free MR images serve as the correction guide to reduce artifacts in paired motion-corrupted MR images, and several studies have shown better performance than traditional methods. However, the acquisition of paired MR images is extremely difficult or even impossible because the patients can hardly maintain the same state during the two acquisitions. Image misalignment caused by this state deviation is mistakenly treated as a type of artifact, which degrades the artifact reduction ability of the model and restricts the use of these methods in real clinical practice.
It is therefore necessary to develop training methods that are applicable when no paired MR images are available [10], [11], and the successful adoption of unsupervised learning in various computer vision tasks [12]–[16] offers a possible way to solve the above problems. As another branch of deep learning, unsupervised learning can find hidden patterns or features in data without requiring feedback information such as labels or categories, and does not over-rely on prior knowledge of the dataset. In particular, several recent models based on unsupervised learning have shown promising results without paired training samples, such as ISCL [17] for image denoising proposed by Lee et al., ADN [18] for computed tomography (CT) metal artifact reduction proposed by Liao et al., and CycleGAN [19] proposed by Zhu et al. for image style transfer. However, although these tasks are similar to motion artifact reduction, it does not follow that these models can be directly applied to our task.
As a common basis of the methods mentioned above, the generative adversarial network (GAN) [12] is one of the most attractive technologies at present and one of the most promising methods for handling complex data distributions. Originally designed to generate data that do not exist in the real world, GANs come in many variations for different tasks [19]–[22]. Especially in the field of image generation, including unconditional generation [12], [21], conditional generation [20], [22], and image-to-image translation [19], GAN studies have accumulated a solid foundation of knowledge. To avoid the unavailability of paired MR images, we propose an unsupervised MRI artifact reduction framework inspired by GANs, which trains the network using unpaired motion-free and motion-corrupted MR images. The contributions of this work are summarized as follows:
• We propose an unsupervised abnormality extraction network (UNAEN) that extracts artifact residual maps by learning the deep-feature differences between unpaired motion-free and motion-corrupted images, indirectly achieving motion artifact reduction in MR images.
• Unlike existing domain transfer methods in the literature, UNAEN aims to extract the abnormal information in the image that causes the deep-feature differences and eliminates this information to bring the motion-corrupted distribution close to the motion-free distribution, improving the model's ability to learn representations of artifacts.
• Experimental results show that, compared with several unsupervised models, the proposed model achieves higher evaluation metrics and generates images of superior quality.
II. RELATED WORK
A. Conventional Artifact Reduction
The most straightforward way to address the problem of motion artifacts in MRI is to restrain the patients' motion by means of sedation or breath-holding during K-space data acquisition [2]. However, patients cannot control involuntary physiological movements such as cerebrospinal fluid pulsation or intestinal peristalsis. To reduce the burden on patients, fast acquisition strategies have been proposed. Compressed sensing [3] is an acquisition and reconstruction technique based on signal sparsity, and its application to K-space undersampling can shorten the scan time. Parallel imaging [4] uses multiple coils with different sensitivities to collect data during the MR scan, reducing the number of phase encodings and thus the scan time. Although these methods accelerate the acquisition of K-space data and can suppress motion artifacts to a certain extent, they do not fundamentally solve the problem.
Traditional artifact reduction methods include prospective and retrospective methods. Prospective motion artifact correction [5], [6] can compensate or partially reacquire K-space data during acquisition, which has great potential, but because it requires additional expensive hardware, it has not been widely used in the clinic. Unlike the prospective methods, the retrospective methods have no additional equipment requirements. Retrospective motion artifact correction [23]–[25] can estimate motion without additional measurement information, but these algorithms are computationally limited due to the complexity and unpredictability of patient motion. Overall, the traditional algorithms mentioned above all have shortcomings when dealing with motion artifacts.
B. Deep Artifact Reduction
With the great success of deep learning in computer vision, several researchers have proposed retrospective artifact reduction schemes based on deep learning (especially convolutional neural networks, CNNs). A CNN model can be trained with motion-corrupted images as input and the same individual's motion-free images as ground truth. In one of the first studies on motion correction with deep learning, Johnson et al. reconstructed the motion-corrected MR image from the motion-deformed k-space vector using a deep neural network (DNN) [8]. Han et al. proposed a U-net-based denoising algorithm to remove the streak artifacts induced in images obtained via radial acquisition [7]. Sommer et al. applied a fully convolutional neural network to extract a motion-artifact-only image, which is subtracted from the motion-corrupted image to obtain a motion-clean image with less deformation [9]. However, in most cases it is difficult or impossible to obtain paired MRI datasets to train neural networks. Although several motion simulation algorithms have been proposed to address this problem, they only consider simple and fixed motion patterns to corrupt MR images in the image domain [26] or in K-space [27], [28]. In reality, patient motion is more random and unpredictable, and models trained on datasets generated with simulated artifacts perform poorly in practical applications.
C. Unsupervised Image-to-Image Translation
Artifact reduction can be regarded as an image-to-image translation task. In recent years, training strategies based on unpaired images have attracted much attention. Deep Image Prior (DIP) [29] demonstrated the feasibility of a handcrafted prior generated by a randomly initialized network for image denoising; its disadvantage is that a large amount of computation is spent on per-image iterative optimization. Noise2Noise (N2N) [30] and Noise2Void (N2V) [31] train a CNN denoiser using only noisy images. Although a satisfactory denoising effect can be achieved without noisy-clean image pairs, the distribution of the pixel-independent noise must be known in order to choose an applicable loss function. Recently, the generative adversarial network (GAN) [12] has shown great potential in image generation and representation learning. GCBD [32], proposed by Chen et al., illustrated that a GAN can be trained to estimate the noise distribution of noisy images. UIDnet [33] applies a conditional GAN (cGAN) [22] to generate clean-pseudo-noisy pairs for training a denoising network. CycleGAN [19] is a cyclically symmetric network consisting of two generators and two discriminators, mainly used for domain adaptation. ISCL [17] adds a noise extractor on top of CycleGAN for cooperative learning with the generators. By combining a generative model with a disentanglement network, ADN [18] constructs multiple encoders and decoders to separate the contents and artifacts in CT images and obtains results comparable to supervised learning.
III. PROPOSED METHOD
In this work, we propose an unsupervised motion-artifact reduction model named the Unsupervised Abnormality Extraction Network (UNAEN), which is trained on unpaired MR images, as shown in Fig. 1. To promote the representation learning of motion artifacts, an artifact extractor is designed to extract the artifact residual maps from the motion-corrupted MR images, instead of using a generator to directly produce the motion-corrected result. Compared with a general GAN, the mapping function between the artifact domain and the motion-free domain can thus be obtained more easily. In addition, we use an artifact reconstructor to restore the original input from the motion-artifact-reduced images, preventing the artifact extractor from mis-mapping. In the experiments, we compare the performance of UNAEN with state-of-the-art models such as CycleGAN, ISCL, and UIDnet. The experimental results show that the proposed model achieves a better artifact reduction effect.
A. Network Architecture
Specifically, the UNAEN framework contains two modules: a forward module for artifact reduction and a backward module for artifact reconstruction. The forward module contains an artifact extractor Ge that learns the artifact distribution in the motion-corrupted MR images. The backward module contains an artifact reconstructor Gr that restores the corresponding original input from the output of the forward module. We take unpaired images {(xa, y) | xa ∈ Xa, y ∈ Y} as training samples, where Xa and Y denote the motion-corrupted and motion-free MRI sets, respectively. Ge and Gr are the two generators of UNAEN. To train the generators, we employ Df and Db as discriminators in the forward and backward modules to distinguish between real and fake samples.
The workflow of UNAEN is indicated by the arrows in Fig. 1. The motion-corrupted MR image xa is fed into Ge to extract the artifact residual map Ge(xa), which captures the corruption of the MRI texture information. The forward module generates the corresponding artifact-reduced image x̂ by subtracting Ge(xa) from xa:

x̂ = xa − Ge(xa),  (1)

To make the forward module translate an instance xa into its counterpart x̂ rather than an arbitrary instance, we introduce the backward module. The main target of Gr is to translate x̂ back into the original xa. Gr therefore takes the generated x̂ and outputs the restored artifact-corrupted image x̄a:

x̄a = Gr(x̂),  (2)

There is a cycle consistency between xa and x̄a, and they are expected to be identical. Since x̂ and y are unpaired and only share similar content, a forward discriminator Df is applied to distinguish between the generated image x̂ and a real motion-free image y. To promote the reconstruction quality of x̄a, we train a backward discriminator Db to distinguish between the original input xa and the restored artifact-corrupted result x̄a.
During training, we update the generators and discriminators alternately: the generators aim to produce samples close to the real data, while the discriminators try not to be deceived by the generator outputs. During inference, only the trained Ge is required: we obtain the motion-artifact-reduced image by subtracting the artifact residual map extracted by Ge from the corresponding motion-corrupted input. More details about the generators and discriminators are discussed in the following subsection.
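A minimal sketch of this data flow in PyTorch is given below. The paper's generators follow the RCAN backbone of Fig. 2 and the discriminators are VGG networks; here tiny placeholder convnets are used so that only the flow of Eqs. (1) and (2) is shown.

```python
import torch
import torch.nn as nn

def tiny_cnn(ch=1, width=32):
    # Stand-in for the RCAN-style generator of Fig. 2 (illustrative only)
    return nn.Sequential(
        nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, ch, 3, padding=1),
    )

G_e = tiny_cnn()   # artifact extractor
G_r = tiny_cnn()   # artifact reconstructor

x_a = torch.randn(4, 1, 64, 64)        # batch of motion-corrupted slices

residual = G_e(x_a)                    # artifact residual map G_e(x_a)
x_hat = x_a - residual                 # Eq. (1): artifact-reduced image
x_a_rec = G_r(x_hat)                   # Eq. (2): restored corrupted image

# At inference time only G_e is needed:
with torch.no_grad():
    cleaned = x_a - G_e(x_a)
print(x_hat.shape, x_a_rec.shape, cleaned.shape)
```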
B. Loss Functions
|
| 287 |
+
In our experiments, we employed three types of loss
|
| 288 |
+
functions which are the L1 loss, SSIM loss [34], [35] and
|
| 289 |
+
adversarial loss:
|
| 290 |
+
L1(x, y) = 1
|
| 291 |
+
N
|
| 292 |
+
N
|
| 293 |
+
�
|
| 294 |
+
i=1
|
| 295 |
+
|x − y|
|
| 296 |
+
(3)
|
| 297 |
+
LSSIM(x, y) = 1
|
| 298 |
+
N
|
| 299 |
+
N
|
| 300 |
+
�
|
| 301 |
+
i=1
|
| 302 |
+
��1 − SSIM(x, y)2��
|
| 303 |
+
(4)
|
| 304 |
+
Ladv(x, D) = 1
|
| 305 |
+
N
|
| 306 |
+
N
|
| 307 |
+
�
|
| 308 |
+
i=1
|
| 309 |
+
�
|
| 310 |
+
(D(x) − 1)2
|
| 311 |
+
(5)
|
| 312 |
+
where D represents D_f or D_b. SSIM (Structural Similarity Index Measure) is an indicator that quantifies the similarity between two digital images; see Eq. (10) for the specific formula. In addition, we use the least-squares loss [36] as the adversarial loss in our model, instead of the negative log-likelihood [12], to stabilize the training procedure.
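A sketch of these three losses in PyTorch (the SSIM term assumes a differentiable third-party implementation such as `pytorch_msssim`; whether the paper used that exact package is our assumption):

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed SSIM implementation

def l1_loss(x, y):
    # Eq. (3): mean absolute error.
    return F.l1_loss(x, y)

def ssim_loss(x, y):
    # Eq. (4): 1 - SSIM(x, y)^2, with images normalized to [0, 1].
    return 1.0 - ssim(x, y, data_range=1.0) ** 2

def adv_loss(x, D):
    # Eq. (5): least-squares adversarial loss, pushing D(x) toward 1.
    return torch.mean((D(x) - 1.0) ** 2)
```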
To train G_e, we use the discriminator D_f, which aims to classify the motion artifact-reduced output \hat{x} as a motion-free image. The adversarial loss function L_{G_e adv} is as follows:

L_{G_e adv}(\hat{x}, D_f) = \frac{1}{N} \sum_{i=1}^{N} \left( D_f(\hat{x}) - 1 \right)^2 \quad (6)

To train G_r, we use the discriminator D_b, which aims to classify the restored artifact-corrupted result \hat{x}_a as the original motion-corrupted image. The following adversarial loss function is used to train G_r:

L_{G_r adv}(\hat{x}_a, D_b) = \frac{1}{N} \sum_{i=1}^{N} \left( D_b(\hat{x}_a) - 1 \right)^2 \quad (7)
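The corresponding discriminator objectives are not written out in the paper; under the least-squares GAN formulation [36] they would presumably take the standard form below (a sketch, with `.detach()` stopping gradients from flowing into the generators):

```python
import torch

def d_loss(D, real, fake):
    # Standard LSGAN discriminator objective (assumed form):
    # push D(real) toward 1 and D(fake) toward 0.
    return torch.mean((D(real) - 1.0) ** 2) + torch.mean(D(fake.detach()) ** 2)

# loss_Df = d_loss(Df, y, x_hat)       # forward discriminator
# loss_Db = d_loss(Db, x_a, x_a_hat)   # backward discriminator
```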
Moreover, we adopt a cycle consistency loss to constrain the restoration of x_a. It is calculated as a weighted sum of the L1 loss and the SSIM loss between the input and the reconstructed image:

L_{G_r cyc}(x_a, \hat{x}_a) = L_1(x_a, \hat{x}_a) + \lambda_{SSIM} \cdot L_{SSIM}(x_a, \hat{x}_a) \quad (8)

where \lambda_{SSIM} is the weight of the SSIM loss. We set \lambda_{SSIM} = 0.5 in our experiments.
The final objective function that optimizes the G_e and G_r networks can therefore be represented as:

L_G = \lambda_{G_e adv} \cdot L_{G_e adv} + \lambda_{G_r adv} \cdot L_{G_r adv} + L_{G_r cyc} \quad (9)

where \lambda_{G_e adv} and \lambda_{G_r adv} are the weights of the adversarial losses of G_e and G_r, respectively. We set \lambda_{G_e adv} = 0.1 and \lambda_{G_r adv} = 0.1 in our experiments.
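Assembling Eqs. (1)-(2) and (6)-(9) with the stated weights, the full generator objective might look like the following sketch (reusing the loss helpers above):

```python
def generator_loss(x_a, Ge, Gr, Df, Db,
                   lam_ge_adv=0.1, lam_gr_adv=0.1, lam_ssim=0.5):
    x_hat = x_a - Ge(x_a)                     # Eq. (1)
    x_a_hat = Gr(x_hat)                       # Eq. (2)
    loss_ge = adv_loss(x_hat, Df)             # Eq. (6)
    loss_gr = adv_loss(x_a_hat, Db)           # Eq. (7)
    loss_cyc = (l1_loss(x_a_hat, x_a)         # Eq. (8)
                + lam_ssim * ssim_loss(x_a_hat, x_a))
    return lam_ge_adv * loss_ge + lam_gr_adv * loss_gr + loss_cyc  # Eq. (9)
```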
Fig. 1. The architecture of UNAEN. It consists of two generators and two discriminators. The network is fed unpaired motion artifact-corrupted and motion artifact-free images during training. The motion artifact-reduced output is obtained by subtracting the artifact residual map extracted by G_e from the motion-corrupted input, and G_r converts the output back to the original input. D_f compares the output with the motion artifact-free input to judge whether the artifact removal is successful, while D_b checks whether G_r restores the input successfully.

Fig. 2. The detailed structures of the generator and discriminator. The generator adopts the RCAN backbone with a depth of 5 residual groups (RG) and a long skip connection, and the discriminator is a VGG-style network.
C. Motion Simulation
We followed [37] to simulate motion in MR images. The method splices lines from multiple K-spaces to simulate the generation of real motion artifacts. First, a group of images is generated from the original image by rotating it in specific directions and by specific degrees; the severity can be managed by the frequency of motion. Then the original image and the generated images are transformed to K-space using the FFT, and K-space segments of the original image are replaced with segments from the generated images' K-spaces according to a predefined pattern. Finally, the damaged K-space data is transformed back to the image domain by the inverse FFT to obtain the simulated motion-corrupted MR image.

In the motion simulation, we used the echo group (EG), a fixed number of successive echoes, as the minimum time period unit, and the duration of any motion must be an integer multiple of one EG. To simulate the motion of a patient's head, we rotated the original images 5 degrees to the left and to the right in plane. Specifically, we used the K-space segments of the rotated images to periodically replace the K-space segments of the original image from the center line to the edge line.
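A sketch of this K-space splicing (assuming 2D magnitude slices, `scipy.ndimage.rotate`, and that rows are phase-encoding lines; the predefined replacement pattern of [37] is left as an input here, since it is what determines the artifact severity):

```python
import numpy as np
from scipy.ndimage import rotate

def splice_kspace(img, corrupt_lines, angles=(-5, 5)):
    # FFT of the original image and of the rotated ("moved") copies.
    k_out = np.fft.fftshift(np.fft.fft2(img))
    k_rot = [np.fft.fftshift(np.fft.fft2(rotate(img, a, reshape=False)))
             for a in angles]
    # Replace the chosen phase-encoding lines with rotated-image lines.
    for row, src in corrupt_lines:   # src indexes into `angles`
        k_out[row] = k_rot[src][row]
    # iFFT back to the image domain -> simulated motion-corrupted image.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_out)))
```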
IV. EXPERIMENTS
In this section, a brief description of the dataset is presented, and implementation details, including the network architecture and hyper-parameters, are introduced. Experimental results are presented with analyses and discussions.
A. Dataset Description
In this study, the fastMRI brain dataset [38] is used to evaluate the proposed method. It includes 6970 fully sampled brain MRIs (3001 at 1.5T and 3969 at 3T) collected at NYU Langone Health on Siemens scanners using T1-weighted, T2-weighted, and FLAIR acquisitions. Some of the T1-weighted acquisitions included administration of contrast agents. The brain MRI DICOM set, which exhibits a wide variety of reconstruction matrix sizes, was acquired with a larger diversity of scanners, acquisition manners, reconstruction methods, and post-processing algorithms. See [38], [39] for more details.
In our experiments, slices with a large background were first discarded from the brain MRI dataset. To reduce the influence of external factors and MRI acquisition methods on the experimental results, we randomly selected 5000 slices only from the T1-weighted slices acquired at 3T field strength, whose matrix size is 320 x 320. All selected images were corrupted in K-space using the motion simulation algorithm described above. Specifically, 1 EG contained 10 echoes, and the movement interval TS was set to 3EG, 6EG, and 9EG, resulting in K-space corrupted line ratios of 75%, 60%, and 50%, respectively. The dataset was then divided into training, validation, and test sets. The unsupervised MRI motion artifact reduction method requires unpaired motion-free and motion-corrupted MR images, so we further divided the training set into two non-overlapping groups: one containing only motion-free images as the learning target, and the other containing only motion-corrupted images as input to the model. The validation set was used to monitor the networks' performance during training, and the test set to evaluate the networks after training. All images were normalized to the range 0 to 1. To save computational resources, we cropped the images into 128 x 128 patches.
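For illustration, this preprocessing might be sketched as below (min-max normalization and 128 x 128 patch cropping; whether patches were sampled randomly or on a grid is not stated, so the random sampling here is an assumption):

```python
import numpy as np

def normalize01(img):
    # Min-max normalization to [0, 1].
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo + 1e-8)

def random_patch(img, size=128, rng=None):
    # Crop one random size x size patch from a 320 x 320 slice.
    rng = rng or np.random.default_rng()
    h, w = img.shape
    i = int(rng.integers(0, h - size + 1))
    j = int(rng.integers(0, w - size + 1))
    return img[i:i + size, j:j + size]
```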
B. Evaluation Metrics
To make a comprehensive comparison, we used SSIM and PSNR as the basic evaluation metrics in our experiments.
As mentioned in III-B, SSIM (Structural Similarity Index Measure) quantifies the similarity of two images. It is defined to compare the brightness, contrast, and structure between the motion artifact-reduced output \hat{x} and the ground truth. The SSIM is never greater than 1, and a larger value represents a better motion correction result. The specific expression is as follows:
\mathrm{SSIM}(X, Y) = \frac{(2\mu_X \mu_Y + C_1)(2\sigma_{XY} + C_2)}{(\mu_X^2 + \mu_Y^2 + C_1)(\sigma_X^2 + \sigma_Y^2 + C_2)} \quad (10)

where \mu and \sigma denote the mean and standard deviation of the images, respectively, \sigma_{XY} denotes the covariance of X and Y, and C_1 and C_2 are constants.
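In practice, Eq. (10) is computed over local windows and averaged; an off-the-shelf implementation such as scikit-image's can be used (for images normalized to [0, 1], the data range is 1.0):

```python
from skimage.metrics import structural_similarity

def compute_ssim(pred, gt):
    # Mean windowed SSIM between a corrected slice and the ground truth.
    return structural_similarity(pred, gt, data_range=1.0)
```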
The PSNR (Peak Signal-to-Noise Ratio) is one of the most widely employed image quality indicators. It represents the ratio between the maximum possible signal value and the value of the interfering noise that affects the fidelity of the signal representation. It is usually measured in decibels (dB), and a higher value indicates lower distortion. PSNR is calculated according to the following formulas:
\mathrm{PSNR} = 10 \log_{10} \frac{MaxValue^2}{\mathrm{MSE}} \quad (11)

\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i, j) - K(i, j) \right]^2 \quad (12)

where MaxValue is the largest possible pixel value and MSE is the mean squared error between the two images. It is difficult for human eyes to perceive the difference when the PSNR exceeds 30 dB.
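Eqs. (11)-(12) translate directly into code; with images normalized to [0, 1], MaxValue = 1:

```python
import numpy as np

def compute_psnr(pred, gt, max_value=1.0):
    mse = np.mean((pred - gt) ** 2)               # Eq. (12)
    return 10.0 * np.log10(max_value ** 2 / mse)  # Eq. (11)
```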
C. Experiment Configurations
We constructed two generators (the artifact extractor G_e and the artifact reconstructor G_r) and two discriminators to train UNAEN. The detailed structure of all networks is shown in Fig. 2. The backbone of the generator is the Residual Channel Attention Network (RCAN) [40], [41] with a depth of 5 residual groups (RG) and a long skip connection. Each residual group has 5 residual channel attention blocks (RCAB) and a long skip connection. We set the number of feature channels to 64 in each base block of the generator. For the discriminator, we used simple convolutional units to build the network; each unit consists of a 3 x 3 convolutional layer and a leaky rectified linear unit (leaky ReLU) activation layer [42]. The size of the feature map is halved after every two convolutions. All units except the first have a batch normalization layer [43]. Similarly, we set the number of feature channels to 64 in the first convolutional layer of the discriminator and doubled it after every two convolutional layers.
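One reading of these rules, as a sketch (the LeakyReLU slope and the exact stride placement are our assumptions; the paper only states that the feature-map size halves after every two convolutions and that channels double accordingly):

```python
import torch.nn as nn

def disc_unit(in_ch, out_ch, stride=1, use_bn=True):
    # One discriminator unit: 3x3 conv (+ BatchNorm) + LeakyReLU.
    layers = [nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

# First unit: 64 channels, no batch norm; channels double and the
# feature map halves after every two units.
stem = nn.Sequential(
    disc_unit(1, 64, use_bn=False),
    disc_unit(64, 64, stride=2),
    disc_unit(64, 128),
    disc_unit(128, 128, stride=2),
)
```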
All of our experiments were implemented on a desktop system with 64 GB RAM and two NVIDIA GeForce RTX 2080 Ti graphics cards, using PyTorch 1.8.1 as the backend. Before each training epoch, all motion-free and motion-corrupted image patches were shuffled. We trained our model for 50 epochs using the ADAM optimizer with β1 = 0.9 and β2 = 0.99 and a batch size of 4. In each batch, the motion-free and motion-corrupted patches fed to the networks were unpaired. The initial learning rate was set to 10^-4 and dropped by half every 10 epochs. The generators were trained twice for every discriminator update.
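A condensed sketch of this schedule (assuming the networks and an unpaired data loader exist; "trained twice" is read here as two generator updates per discriminator update):

```python
import itertools
import torch

opt_g = torch.optim.Adam(itertools.chain(Ge.parameters(), Gr.parameters()),
                         lr=1e-4, betas=(0.9, 0.99))
opt_d = torch.optim.Adam(itertools.chain(Df.parameters(), Db.parameters()),
                         lr=1e-4, betas=(0.9, 0.99))
# Halve the learning rate every 10 epochs.
sched_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=10, gamma=0.5)
sched_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=10, gamma=0.5)

for epoch in range(50):
    for x_a, y in loader:                  # unpaired patches, batch size 4
        for _ in range(2):                 # two generator steps ...
            opt_g.zero_grad()
            generator_loss(x_a, Ge, Gr, Df, Db).backward()
            opt_g.step()
        opt_d.zero_grad()                  # ... per discriminator step
        with torch.no_grad():
            x_hat = x_a - Ge(x_a)
            x_a_hat = Gr(x_hat)
        (d_loss(Df, y, x_hat) + d_loss(Db, x_a, x_a_hat)).backward()
        opt_d.step()
    sched_g.step()
    sched_d.step()
```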
D. Artifact Reduction on fastMRI
As shown in Table I, we compared the performance of the proposed model with other baseline methods on the fastMRI brain dataset with varying degrees of artifact severity. The SSIMs and PSNRs of the motion artifact-corrupted images reveal the differing severity of the motion artifacts. We observed that the proposed unsupervised model was significantly superior to all compared unsupervised methods: in the experimental results, its SSIM was higher by 0.0089 to 0.0575 and its PSNR by 1.0504 to 3.3995 dB.
Fig. 3. Comparison of the qualitative performance of UNAEN and other unsupervised models on the fastMRI brain dataset. The figure visualizes the artifact reduction results for varying degrees of artifact severity and the corresponding error heat maps showing the difference between the ground truth and each result.
TABLE I
QUANTITATIVE COMPARISON WITH THE STATE-OF-THE-ART UNSUPERVISED NETWORKS FOR MRI MOTION ARTIFACT REDUCTION ON THE FASTMRI BRAIN DATASET

Methods                    |  TS=3EG          |  TS=6EG          |  TS=9EG
                           |  SSIM    PSNR    |  SSIM    PSNR    |  SSIM    PSNR
Before Reduction           |  0.7981  26.6165 |  0.8824  30.4109 |  0.9225  33.4192
UIDnet (AAAI 2020) [33]    |  0.8551  27.1392 |  0.9168  30.4248 |  0.9411  32.5677
CycleGAN (ICCV 2017) [19]  |  0.8714  27.4449 |  0.9261  31.1473 |  0.9559  33.4017
ISCL (IEEE TMI 2021) [17]  |  0.8958  29.3085 |  0.9410  32.4944 |  0.9585  34.4717
UNAEN (Ours)               |  0.9126  30.5387 |  0.9504  33.5448 |  0.9674  35.9265
[Fig. 3 panels, left to right: Ground Truth, Before Correction, UIDNet, CycleGAN, ISCL, UNAEN (Ours), with per-panel SSIM / PSNR values and error maps on a 0.00-0.20 scale.
TS=3EG row: 0.7898 / 26.1342, 0.8620 / 27.3402, 0.8818 / 28.1277, 0.9024 / 29.1901, 0.9306 / 31.0245.
TS=6EG row: 0.8516 / 29.0333, 0.9093 / 30.6505, 0.9159 / 30.9561, 0.9376 / 33.0140, 0.9530 / 35.3089.
TS=9EG row: 0.8561 / 29.2947, 0.9139 / 29.9313, 0.9442 / 30.9974, 0.9504 / 31.5782, 0.9656 / 34.3280.]
Fig. 3 visualizes the artifact reduction effects of the different models and shows the qualitative performance at three degrees of artifact severity by displaying the reduction results and the corresponding error heat maps relative to the ground truth. All four unsupervised methods we compared (UIDnet, CycleGAN, ISCL, and UNAEN) successfully reduced the motion artifacts. UIDnet showed the weakest reduction ability, and its outputs still retained significant artifact traces in the marginal region of the tissue. Similarly, CycleGAN generated blurry images even though it achieved a higher SSIM and PSNR than UIDnet. ISCL had better artifact reduction performance and improved image quality; however, evident errors on the boundaries of distinct soft tissues were observed in its results, as shown in the error heat maps. In contrast, UNAEN achieved higher metric values and minimized errors, and the performance gap over the other methods widened as the artifact severity increased. In summary, UNAEN outperformed the other compared models in terms of both overall image quality and feature details in the fastMRI brain experiments.
V. DISCUSSION AND CONCLUSION
In this paper, we proposed an improved GAN model for artifact reduction that is trained with unpaired MR images in an unsupervised manner, circumventing the difficulty of obtaining paired MR images. We conducted several experiments on two different datasets to qualitatively and quantitatively demonstrate the outstanding performance of the proposed model in comparison to UIDnet, CycleGAN, and ISCL.
Unlike the other unsupervised networks, UIDnet trains a cGAN [22] that adds artifacts to clean images in order to generate paired images, which are then used to train an artifact-removal network under supervision. Because of this indirect training strategy, it introduces more errors than the other models, limiting its ability to remove artifacts and resulting in the lowest SSIM and PSNR in the experiments. The network error, which manifests as geometric uncertainty in image detail, could result in inaccurate surgery or therapy doses, indicating that the approach is less applicable in real clinics.
As an unsupervised network for domain transfer tasks, CycleGAN can transfer images between different styles. To generate a tighter mapping space, two symmetric generators are used to realize the conversion between the motion-corrupted and motion-free image domains. This learning scheme slightly improves the artifact reduction effect but causes computational redundancy: most of the time we only need the artifact removal function rather than the reverse process, and learning both makes training the model more difficult. The extra computing resources consumed are not proportional to the improvement in the evaluation metrics.
ISCL is a variation of CycleGAN that adds an extra extractor that collaborates with the generators to accomplish cooperative learning. The generators are responsible for the direct conversion between the image domains, while the extractor extracts artifacts from artifact observations. The experimental results showed that cooperative learning can further improve the SSIM and PSNR values but has no effect on the boundaries of soft tissues. Unlike ISCL, UNAEN has no cooperative learning and no bidirectional cycle consistency, and abandoning this redundant training makes the model focus on the artifact removal process and promotes its ability to represent artifacts. The experimental results demonstrated that our modifications successfully extract the artifact residual components of the images and suppress the motion artifacts with little impact on image quality, significantly improving the metric values and producing high-quality artifact reduction results.
Given the effectiveness of UNAEN on unpaired images, we expect further applications to artifact reduction, since obtaining paired images is commonly impractical. In real clinical settings, UNAEN, as a retrospective method, can correct for patient movement and thereby avoid the destruction of textures caused by artifacts. This is critical when researchers or medical staff do not have access to the original data and the associated reconstruction algorithms. In addition, we made no assumptions about the nature of the artifacts when constructing the UNAEN architecture, which makes it possible to generalize the proposed model to other artifact reduction problems, such as deblurring and denoising. We will further explore the possibility of realizing these extensions.
Despite the superior artifact reduction effect of UNAEN, this study still has limitations. First, we generated artifacts in brain MRI only through simple periodic motion, but the movement of patients during K-space data acquisition may be more complex and irregular in real scenarios. The performance of the proposed model trained with authentic motion-corrupted and motion-free images remains to be investigated. Second, training the network is difficult, e.g., finding optimal hyper-parameters, due to the complex loss functions and adversarial networks. For the selection of some hyper-parameters, we directly gave the conclusions without listing the relevant comparative experimental results, because their adjustment has limited impact on the overall performance of the network. We paid more attention to the modification of the model architecture, and optimizing these details is one of the goals of our future work.
REFERENCES
[1] M. Zaitsev, J. Maclaren, and M. Herbst, "Motion artifacts in MRI: A complex problem with many partial solutions," Journal of Magnetic Resonance Imaging, vol. 42, no. 4, pp. 887-901, Oct. 2015.
[2] A. Stadler, W. Schima, A. Ba'ssalamah, J. Kettenbach, and E. Eisenhuber, "Artifacts in body MR imaging: Their appearance and how to eliminate them," European Radiology, vol. 17, pp. 1242-1255, Jun. 2007.
[3] Z. Yang, C. Zhang, and L. Xie, "Sparse MRI for motion correction," in 2013 IEEE 10th International Symposium on Biomedical Imaging, 2013.
[4] P. Noël, R. Bammer, C. Reinhold, and M. A. Haider, "Parallel imaging artifacts in body magnetic resonance imaging," Canadian Association of Radiologists Journal, vol. 60, no. 2, pp. 91-98, 2009.
[5] J. Maclaren, M. Herbst, O. Speck, and M. Zaitsev, "Prospective motion correction in brain imaging: A review," Magnetic Resonance in Medicine, vol. 69, no. 3, pp. 621-636, 2013.
[6] M. B. Ooi, S. Krueger, W. J. Thomas, S. V. Swaminathan, and T. R. Brown, "Prospective real-time correction for arbitrary head motion using active markers," Magnetic Resonance in Medicine, vol. 62, no. 4, pp. 943-954, 2009.
[7] Y. Han, J. Yoo, H. H. Kim, H. J. Shin, K. Sung, and J. C. Ye, "Deep learning with domain adaptation for accelerated projection-reconstruction MR," Magnetic Resonance in Medicine, vol. 80, no. 3, pp. 1189-1205, 2018.
[8] P. M. Johnson and M. Drangova, "Motion correction in MRI using deep learning," in Proceedings of the ISMRM Scientific Meeting & Exhibition, Paris, vol. 4098, 2018, pp. 1-4.
[9] K. Sommer, T. Brosch, R. Wiemker, T. Harder, A. Saalbach, C. S. Hall, and J. B. Andre, "Correction of motion artifacts using a multi-resolution fully convolutional neural network," in Proceedings of the 26th Annual Meeting of ISMRM, Paris, France, vol. 1175, 2018.
[10] S. Laguna, R. Schleicher, B. Billot, P. Schaefer, B. McKaig, J. N. Goldstein, K. N. Sheth, M. S. Rosen, W. T. Kimberly, and J. E. Iglesias, "Super-resolution of portable low-field MRI in real scenarios: Integration with denoising and domain adaptation," in Medical Imaging with Deep Learning, 2022.
[11] J. Liu, H. Li, T. Huang, E. Ahn, K. Han, A. Razi, and W. Xiang, "Unsupervised representation learning for 3D MRI super resolution with degradation adaptation," arXiv:2205.06891, Nov. 2022.
[12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Communications of the ACM, vol. 63, no. 11, pp. 139-144, 2020.
[13] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," Advances in Neural Information Processing Systems, vol. 33, pp. 6840-6851, 2020.
[14] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv:1312.6114, Dec. 2013.
[15] A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves et al., "Conditional image generation with PixelCNN decoders," Advances in Neural Information Processing Systems, vol. 29, 2016.
[16] L. Dinh, D. Krueger, and Y. Bengio, "NICE: Non-linear independent components estimation," arXiv:1410.8516, Oct. 2014.
[17] K. Lee and W.-K. Jeong, "ISCL: Interdependent self-cooperative learning for unpaired image denoising," IEEE Transactions on Medical Imaging, vol. 40, no. 11, pp. 3238-3248, 2021.
[18] H. Liao, W.-A. Lin, S. K. Zhou, and J. Luo, "ADN: Artifact disentanglement network for unsupervised metal artifact reduction," IEEE Transactions on Medical Imaging, vol. 39, no. 3, pp. 634-643, 2020.
[19] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223-2232.
[20] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401-4410.
[21] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv:1511.06434, Nov. 2015.
[22] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv:1411.1784, Nov. 2014.
[23] D. Atkinson, D. Hill, P. Stoyle, P. Summers, and S. Keevil, "Automatic correction of motion artifacts in magnetic resonance images using an entropy focus criterion," IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 903-910, 1997.
[24] D. Gallichan, J. P. Marques, and R. Gruetter, "Retrospective correction of involuntary microscopic head movement using highly accelerated fat image navigators (3D FatNavs) at 7T," Magnetic Resonance in Medicine, vol. 75, no. 3, pp. 1030-1039, 2016.
[25] E. B. Welch, A. Manduca, R. C. Grimm, H. A. Ward, and C. R. Jack Jr., "Spherical navigator echoes for full 3D rigid body motion measurement in MRI," Magnetic Resonance in Medicine, vol. 47, no. 1, pp. 32-41, 2002.
[26] K. Pawar, Z. Chen, N. J. Shah, and G. F. Egan, "Suppressing motion artefacts in MRI using an Inception-ResNet network with motion simulation augmentation," NMR in Biomedicine, vol. 35, no. 4, p. e4225, 2022.
[27] D. Tamada, M.-L. Kromrey, H. Onishi, and U. Motosugi, "Method for motion artifact reduction using a convolutional neural network for dynamic contrast enhanced MRI of the liver," arXiv:1807.06956, Jul. 2018.
[28] M. W. Haskell, S. F. Cauley, B. Bilgic, J. Hossbach, D. N. Splitthoff, J. Pfeuffer, K. Setsompop, and L. L. Wald, "Network accelerated motion estimation and reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model," Magnetic Resonance in Medicine, vol. 82, no. 4, pp. 1452-1461, 2019.
[29] D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Deep image prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446-9454.
[30] J. Lehtinen, J. Munkberg, J. Hasselgren, S. Laine, T. Karras, M. Aittala, and T. Aila, "Noise2Noise: Learning image restoration without clean data," arXiv:1803.04189, Mar. 2018.
[31] A. Krull, T.-O. Buchholz, and F. Jug, "Noise2Void - learning denoising from single noisy images," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2124-2132.
[32] J. Chen, J. Chen, H. Chao, and M. Yang, "Image blind denoising with generative adversarial network based noise modeling," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 3155-3164.
[33] Z. Hong, F. Xiaochen, T. Jiang, and J. Feng, "End-to-end unpaired image denoising with conditional adversarial networks," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 4140-4149, 2020.
[34] E. M. Masutani, N. Bahrami, and A. Hsiao, "Deep learning single-frame and multiframe super-resolution for cardiac MRI," Radiology, vol. 295, no. 3, pp. 552-561, 2020.
[35] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[36] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, "Least squares generative adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2794-2802.
[37] H. Li and J. Liu, "3D high-quality magnetic resonance image restoration in clinics using deep learning," arXiv:2111.14259, Nov. 2021.
[38] J. Zbontar, F. Knoll, A. Sriram, T. Murrell, Z. Huang, M. J. Muckley, A. Defazio, R. Stern, P. Johnson, M. Bruno, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, Z. Zhang, M. Drozdzal, A. Romero, M. Rabbat, P. Vincent, N. Yakubova, J. Pinkerton, D. Wang, E. Owens, C. L. Zitnick, M. P. Recht, D. K. Sodickson, and Y. W. Lui, "fastMRI: An open dataset and benchmarks for accelerated MRI," arXiv:1811.08839, Nov. 2018.
[39] F. Knoll, J. Zbontar, A. Sriram, M. J. Muckley, M. Bruno, A. Defazio, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, Z. Zhang, M. Drozdzal, A. Romero, M. Rabbat, P. Vincent, J. Pinkerton, D. Wang, N. Yakubova, E. Owens, C. L. Zitnick, M. P. Recht, D. K. Sodickson, and Y. W. Lui, "fastMRI: A publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning," Radiology: Artificial Intelligence, vol. 2, no. 1, p. e190007, 2020.
[40] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, "Image super-resolution using very deep residual channel attention networks," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 286-301.
[41] Z. Lin, P. Garg, A. Banerjee, S. A. Magid, D. Sun, Y. Zhang, L. Van Gool, D. Wei, and H. Pfister, "Revisiting RCAN: Improved training for image super-resolution," arXiv:2201.11279, 2022.
[42] K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026-1034.
[43] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in International Conference on Machine Learning, PMLR, 2015, pp. 448-456.