mishig (HF Staff) committed
Commit f5149fc · verified · 1 Parent(s): 9e37f88

Add 1 files

Files changed (1): 2311/2311.04465.md (added, +5804 −0)
Title: Solving High Frequency and Multi-Scale PDEs with Gaussian Processes

URL Source: https://arxiv.org/html/2311.04465

Markdown Content:
1 Introduction
2 Gaussian Process
3 Gaussian Process PDE Solvers
4 Algorithm
5 Related Work
6 Experiment
7 Conclusion
License: CC BY 4.0
arXiv:2311.04465v2 [cs.LG] 19 Mar 2024

Solving High Frequency and Multi-Scale PDEs with Gaussian Processes

Shikai Fang, Madison Cooley, Da Long, Shibo Li, Robert M. Kirby, Shandian Zhe
University of Utah, Salt Lake City, UT 84112, USA
{shikai,mcooley,dl932,shibo,kirby,zhe}@cs.utah.edu

Equal contribution. Corresponding author.
Abstract

Machine learning based solvers have garnered much attention in physical simulation and scientific computing, with a prominent example, physics-informed neural networks (PINNs). However, PINNs often struggle to solve high-frequency and multi-scale PDEs, which can be due to spectral bias during neural network training. To address this problem, we resort to the Gaussian process (GP) framework. To flexibly capture the dominant frequencies, we model the power spectrum of the PDE solution with a student $t$ mixture or Gaussian mixture. We apply the inverse Fourier transform to obtain the covariance function (by the Wiener-Khinchin theorem). The covariance derived from the Gaussian mixture spectrum corresponds to the known spectral mixture kernel. Next, we estimate the mixture weights in the log domain, which we show is equivalent to placing a Jeffreys prior. It automatically induces sparsity, prunes excessive frequencies, and adjusts the remaining ones toward the ground truth. Third, to enable efficient and scalable computation on massive collocation points, which are critical to capture high frequencies, we place the collocation points on a grid, and multiply our covariance function at each input dimension. We use the GP conditional mean to predict the solution and its derivatives so as to fit the boundary condition and the equation itself. As a result, we can derive a Kronecker product structure in the covariance matrix. We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability, without low-rank approximations. We show the advantage of our method in systematic experiments. The code is released at https://github.com/xuangu-fang/Gaussian-Process-Slover-for-High-Freq-PDE.
1 Introduction

Scientific and engineering problems often demand we solve a set of partial differential equations (PDEs). Recently, machine learning (ML) solvers have attracted much attention. Compared to traditional numerical methods, ML solvers do not require complex mesh designs and sophisticated numerical tricks, are simple to implement, and can solve inverse problems efficiently and conveniently. The most popular ML solver is the physics-informed neural network (PINN) (Raissi et al., 2019). Consider a PDE of the following general form,

$$\mathcal{L}[u](\mathbf{x}) = f(\mathbf{x}) \;\; (\mathbf{x} \in \Omega), \qquad u(\mathbf{x}) = g(\mathbf{x}) \;\; (\mathbf{x} \in \partial\Omega), \tag{1}$$
where $\mathcal{L}$ is the differential operator, $\Omega$ is the domain, and $\partial\Omega$ is the boundary of the domain. To solve the PDE, the PINN uses a deep neural network (NN) $\hat{u}_{\boldsymbol{\theta}}(\mathbf{x})$ to model the solution $u$. It samples $N_c$ collocation points $\{\mathbf{x}_c^j\}_{j=1}^{N_c}$ from $\Omega$ and $N_b$ points $\{\mathbf{x}_b^j\}_{j=1}^{N_b}$ from $\partial\Omega$, and minimizes a loss,

$$\boldsymbol{\theta}^* = \operatorname*{argmin}_{\boldsymbol{\theta}} \; L_b(\boldsymbol{\theta}) + L_r(\boldsymbol{\theta}), \tag{2}$$

where $L_b(\boldsymbol{\theta}) = \frac{1}{N_b}\sum_{j=1}^{N_b}\big(\hat{u}_{\boldsymbol{\theta}}(\mathbf{x}_b^j) - g(\mathbf{x}_b^j)\big)^2$ is the boundary term to fit the boundary condition, and $L_r(\boldsymbol{\theta}) = \frac{1}{N_c}\sum_{j=1}^{N_c}\big(\mathcal{L}[\hat{u}_{\boldsymbol{\theta}}](\mathbf{x}_c^j) - f(\mathbf{x}_c^j)\big)^2$ is the residual term to fit the equation.
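The two loss terms above can be made concrete with a toy example. The sketch below is not the PINN itself: in place of a neural network it uses a hypothetical one-parameter trial family $\hat{u}_\theta(x) = \theta \sin(x)$ on the 1D Poisson problem $-u''(x) = \sin(x)$, $u(0)=u(\pi)=0$, so that the derivatives in the residual are available in closed form and the structure of $L_b$ and $L_r$ can be written out directly.

```python
import math

# Toy PDE: -u''(x) = sin(x) on (0, pi), with u(0) = u(pi) = 0.
# Hypothetical trial family standing in for the NN: u_hat(x) = theta * sin(x),
# so L[u_hat](x) = -u_hat''(x) = theta * sin(x) in closed form.

def u_hat(theta, x):
    return theta * math.sin(x)

def residual_operator(theta, x):
    # L[u_hat](x) = -u_hat''(x) for this particular trial family
    return theta * math.sin(x)

def pinn_loss(theta, n_c=50):
    # Boundary term L_b: mean squared mismatch with g = 0 at x = 0 and x = pi
    xb = [0.0, math.pi]
    L_b = sum((u_hat(theta, x) - 0.0) ** 2 for x in xb) / len(xb)
    # Residual term L_r: mean squared mismatch with f(x) = sin(x)
    # on n_c collocation points placed at interval midpoints
    xc = [math.pi * (j + 0.5) / n_c for j in range(n_c)]
    L_r = sum((residual_operator(theta, x) - math.sin(x)) ** 2 for x in xc) / n_c
    return L_b + L_r
```

Since the exact solution is $u(x) = \sin(x)$, the loss vanishes at $\theta = 1$ and grows as $\theta$ moves away, which is what the minimization in (2) exploits.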
241
+
242
+ Despite many success stories, the PINN often struggles to solve PDEs with high-frequency and multi-scale components in the solutions. This is consistent with the “spectrum bias” observed in NN training Rahaman et al., (2019). That is, NNs typically can learn the low-frequency information efficiently but grasping the high-frequency knowledge is much harder. To alleviate this problem, the recent work Wang et al., 2021b proposes to construct a set of random Fourier features from zero-mean Gaussian distributions. The random features are then fed into the PINN layers for training (see (2)). While effective, the performance of this method is unstable, and is highly sensitive to the number and scales of the Gaussian variances, which are difficult to choose beforehand.
243
+
244
+ In this paper, we resort to an alternative arising ML solver framework, Gaussian processes (GP) (Chen et al.,, 2021; Long et al., 2022a,). We propose GP-HM, a GP solver for High frequency and Multi-scale PDEs. By leveraging the Wiener-Khinchin theorem, we can directly model the solution in the frequency domain and estimate the target frequencies from the covariance function. We then develop an efficient learning algorithm to scale up to massive collocation points, which are critical to capture high frequencies. The major contributions of our work are as follows.
245
+
246
+
247
+
Model. To flexibly capture the dominant frequencies, we use a mixture of student $t$ or Gaussian distributions to model the power spectrum of the solution. According to the Wiener-Khinchin theorem, we can derive the GP covariance function via inverse Fourier transform, which contains the component weights and frequency parameters. We show that estimating the weights in the log domain is equivalent to assigning each weight a Jeffreys prior, which induces strong sparsity, automatically removes excessive frequency components, and drives the remaining ones toward the ground truth. In this way our GP can effectively extract the solution frequencies. Our covariance function derived from the Gaussian mixture power spectrum corresponds to the known spectral mixture kernel. We are therefore the first to realize its rationale and benefit for solving high-frequency and multi-scale PDEs.

Algorithm. To enable efficient computation, we place all the collocation points and the boundary (and/or initial) points on a grid, and model the solution values at the grid with the GP finite projection. To obtain the derivative values in the equation, we compute the GP conditional mean via kernel differentiation. Next, we multiply our covariance function at each input dimension to obtain a product covariance. We then derive a Kronecker product form for the covariance and cross-covariance matrices. We use the properties of the Kronecker product and multilinear algebra to restrict the covariance matrix calculation to each input dimension. In this way, we can substantially reduce the cost and handle massive collocation points, without any low-rank approximations.

Result. We evaluated GP-HM with several benchmark PDEs that have high-frequency and multi-scale solutions. We compared with the standard PINN and several state-of-the-art variants. We compared with spectral methods (Boyd, 2001) that linearly combine a set of trigonometric bases to estimate the solution. We also compared with several other traditional numerical solvers. In all the cases, GP-HM consistently achieves relative $L_2$ errors at the level of $10^{-3}$ or $10^{-4}$ or even smaller. By contrast, the competing ML based approaches often failed and gave much larger errors. The visualization of the element-wise prediction error shows that GP-HM also recovers the local solution values much better. We examined the learned frequency parameters, which match the ground truth. Our ablation study in Section C of the Appendix also shows that enough collocation points are critical to success, implying the importance of our efficient learning method.
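The Kronecker structure named in the Algorithm contribution can be illustrated on a tiny grid: for a product covariance $k((x_1,x_2),(x_1',x_2')) = k_1(x_1,x_1')\,k_2(x_2,x_2')$ evaluated on a Cartesian grid, the full covariance matrix equals the Kronecker product of the per-dimension covariance matrices. The kernels and grids below are illustrative choices, not the paper's exact setup.

```python
import math

# Two illustrative 1D kernels, one per input dimension
def k1(a, b):
    return math.exp(-(a - b) ** 2)

def k2(a, b):
    return math.exp(-2.0 * (a - b) ** 2)

def kron(A, B):
    # Kronecker product of two dense matrices stored as lists of lists
    return [[A[i][j] * B[p][q]
             for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

g1, g2 = [0.0, 0.5, 1.0], [0.0, 1.0]         # per-dimension grids
K1 = [[k1(a, b) for b in g1] for a in g1]     # 3 x 3
K2 = [[k2(a, b) for b in g2] for a in g2]     # 2 x 2

# Full covariance on the 3 x 2 grid (row-major ordering of grid points)
grid = [(a, c) for a in g1 for c in g2]
K_full = [[k1(xa, xb) * k2(ya, yb) for (xb, yb) in grid] for (xa, ya) in grid]
```

Because `K_full` equals `kron(K1, K2)`, one only ever needs the small per-dimension matrices `K1` and `K2`; this is the structure that lets the paper work with massive grids without forming (or factorizing) the full covariance.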
2 Gaussian Process

Gaussian processes (GPs) provide an expressive framework for function estimation. Suppose we are given a training dataset $\mathcal{D} = \{(\mathbf{x}_n, y_n)\,|\,1 \le n \le N\}$, and we aim to estimate a target function $f: \mathbb{R}^d \rightarrow \mathbb{R}$. We can assign a GP prior,

$$f(\cdot) \sim \mathcal{GP}\left(m(\cdot), \mathrm{cov}(\cdot, \cdot)\right),$$

where $m(\cdot)$ is the mean function and $\mathrm{cov}(\cdot, \cdot)$ is the covariance function. In practice, one often sets $m(\cdot) = 0$, and adopts a kernel function as the covariance function, namely $\mathrm{cov}(f(\mathbf{x}), f(\mathbf{x}')) = k(\mathbf{x}, \mathbf{x}')$. A nice property of the GP prior is that if $f$ is sampled from a GP, then any derivative (if existent) of $f$ is also a GP, and the covariance between the derivative and the function $f$ is the derivative of the kernel function w.r.t. the same input variable(s). For example,

$$\mathrm{cov}\left(\partial_{x_1}\partial_{x_2} f(\mathbf{x}),\, f(\mathbf{x}')\right) = \partial_{x_1}\partial_{x_2}\, k(\mathbf{x}, \mathbf{x}'), \tag{3}$$

where $\mathbf{x} = (x_1, \ldots, x_d)^\top$ and $\mathbf{x}' = (x_1', \ldots, x_d')^\top$. Under the GP prior, the function values at any finite input collection, $\mathbf{f} = [f(\mathbf{x}_1), \ldots, f(\mathbf{x}_N)]$, follow a multi-variate Gaussian distribution, $p(\mathbf{f}) = \mathcal{N}(\mathbf{f}\,|\,\mathbf{0}, \mathbf{K})$, where $[\mathbf{K}]_{ij} = \mathrm{cov}(f(\mathbf{x}_i), f(\mathbf{x}_j)) = k(\mathbf{x}_i, \mathbf{x}_j)$. This is called a GP projection. Suppose given $\mathbf{f}$, we want to compute the distribution of the function value at any input $\mathbf{x}$, namely $p(f(\mathbf{x})\,|\,\mathbf{f})$. Since $\mathbf{f}$ and $f(\mathbf{x})$ also follow a multi-variate Gaussian distribution, we obtain a conditional Gaussian, $p(f(\mathbf{x})\,|\,\mathbf{f}) = \mathcal{N}(f(\mathbf{x})\,|\,\mu(\mathbf{x}), \sigma^2(\mathbf{x}))$, where the conditional mean

$$\mu(\mathbf{x}) = \mathrm{cov}(f(\mathbf{x}), \mathbf{f})\,\mathbf{K}^{-1}\mathbf{f}, \tag{4}$$

and $\sigma^2(\mathbf{x}) = \mathrm{cov}(f(\mathbf{x}), f(\mathbf{x})) - \mathrm{cov}(f(\mathbf{x}), \mathbf{f})\,\mathbf{K}^{-1}\,\mathrm{cov}(\mathbf{f}, f(\mathbf{x}))$, with $\mathrm{cov}(f(\mathbf{x}), \mathbf{f}) = k(\mathbf{x}, \mathbf{X}) = [k(\mathbf{x}, \mathbf{x}_1), \ldots, k(\mathbf{x}, \mathbf{x}_N)]$ and $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_N]$.
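The conditional mean in (4) is the quantity GP solvers use to predict the solution. A minimal pure-Python sketch of this conditioning step, using an illustrative squared-exponential kernel and a small Gaussian-elimination solve (names here are for illustration, not from the paper's code):

```python
import math

def k(x, y, rho=1.0):
    # Illustrative squared-exponential kernel
    return math.exp(-rho * (x - y) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

def gp_mean(x, X, f):
    # Eq. (4): mu(x) = k(x, X) K^{-1} f
    K = [[k(xi, xj) for xj in X] for xi in X]
    alpha = solve(K, f)                     # alpha = K^{-1} f
    return sum(k(x, xi) * ai for xi, ai in zip(X, alpha))

# Noiseless observations of an illustrative target function
X = [0.0, 0.3, 0.7, 1.0]
f = [math.sin(2 * math.pi * xi) for xi in X]
```

At a training input, $k(\mathbf{x}, \mathbf{X})$ is a row of $\mathbf{K}$, so $\mu$ reproduces the observed value exactly; between the inputs, the mean interpolates according to the kernel.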
726
+ 3Gaussian Process PDE Solvers
727
+
728
+ Covariance Design. When the PDE solution
729
+ 𝑢
730
+ includes high frequencies or multi-scale information, one naturally wants to estimate these target frequencies outright in the frequency domain. To this end, we consider the solution’s power spectrum,
731
+ 𝑆
732
+
733
+ (
734
+ 𝑠
735
+ )
736
+ =
737
+ |
738
+ 𝑢
739
+ ^
740
+
741
+ (
742
+ 𝑠
743
+ )
744
+ |
745
+ 2
746
+ where
747
+ 𝑢
748
+ ^
749
+
750
+ (
751
+ 𝑠
752
+ )
753
+ is the Fourier transform of
754
+ 𝑢
755
+ , and
756
+ 𝑠
757
+ denotes the frequency. The power spectrum characterizes the strength of every possible frequency within the solution. To flexibly capture the dominant high and/or multi-scale frequencies, we use a mixture of student
758
+ 𝑡
759
+ distributions to model the power spectrum,
760
+
761
+
762
+ 𝑆
763
+
764
+ (
765
+ 𝑠
766
+ )
767
+ =
768
+
769
+ 𝑞
770
+ =
771
+ 1
772
+ 𝑄
773
+ 𝑤
774
+ 𝑞
775
+
776
+ St
777
+
778
+ (
779
+ 𝑠
780
+ ;
781
+ 𝜇
782
+ 𝑞
783
+ ,
784
+ 𝜌
785
+ 𝑞
786
+ 2
787
+ ,
788
+ 𝜈
789
+ )
790
+ ,
791
+
792
+ (5)
793
+
794
+ where
795
+ 𝑤
796
+ 𝑞
797
+ >
798
+ 0
799
+ is the weight of component
800
+ 𝑞
801
+ , St stands for student
802
+ 𝑡
803
+ distribution,
804
+ 𝜇
805
+ 𝑞
806
+ is the mean,
807
+ 𝜌
808
+ 𝑞
809
+ 2
810
+ is the inverse variance, and
811
+ 𝜈
812
+ is the degree of freedom. Note that
813
+ 𝑤
814
+ 𝑞
815
+ does not need to be normalized (their summation is not necessary to be one). Each student
816
+ 𝑡
817
+ distribution characterizes one principle frequency
818
+ 𝜇
819
+ 𝑞
820
+ , and also robustly models the (potentially many) minor frequencies with a fat tailed density (Bishop,, 2007). An alternative choice is a mixture of Gaussian,
821
+ 𝑆
822
+
823
+ (
824
+ 𝑠
825
+ )
826
+ =
827
+
828
+ 𝑞
829
+ =
830
+ 1
831
+ 𝑄
832
+ 𝑤
833
+ 𝑞
834
+
835
+ 𝒩
836
+
837
+ (
838
+ 𝑠
839
+ ;
840
+ 𝜇
841
+ 𝑞
842
+ ,
843
+ 𝜌
844
+ 𝑞
845
+ 2
846
+ )
847
+ . But the Gaussian distribution has thin tails, hence is sensitive to long-tail outliers and can be less robust (in capturing minor frequencies).
848
+
849
+ Next, we convert the spectrum model into a covariance function to enable our GP solver to flexibly estimate the target frequencies. According to the Wiener-Khinchin theorem (Wiener,, 1930; Khintchine,, 1934), for a wide-sense stationary random process, under mild conditions, its power spectrum1 and the auto-correlation form a Fourier pair. We model the solution
850
+ 𝑢
851
+ as drawn from a stationary GP, and the auto-correlation is the covariance function, denoted by
852
+ 𝑘
853
+
854
+ (
855
+ 𝑥
856
+ ,
857
+ 𝑥
858
+
859
+ )
860
+ =
861
+ 𝑘
862
+
863
+ (
864
+ 𝑥
865
+
866
+ 𝑥
867
+
868
+ )
869
+ . We then have
870
+
871
+
872
+ 𝑆
873
+
874
+ (
875
+ 𝑠
876
+ )
877
+ =
878
+
879
+ 𝑘
880
+
881
+ (
882
+ 𝑧
883
+ )
884
+
885
+ 𝑒
886
+
887
+ 𝑖
888
+
889
+ 2
890
+
891
+ 𝜋
892
+
893
+ 𝑠
894
+
895
+ 𝑧
896
+
897
+ d
898
+ 𝑧
899
+ ,
900
+ 𝑘
901
+
902
+ (
903
+ 𝑧
904
+ )
905
+ =
906
+
907
+ 𝑆
908
+
909
+ (
910
+ 𝑠
911
+ )
912
+
913
+ 𝑒
914
+ 𝑖
915
+
916
+ 2
917
+
918
+ 𝜋
919
+
920
+ 𝑧
921
+
922
+ 𝑠
923
+
924
+ d
925
+ 𝑠
926
+ ,
927
+
928
+ (6)
929
+
930
+ where
931
+ 𝑧
932
+ =
933
+ 𝑥
934
+
935
+ 𝑥
936
+
937
+ , and
938
+ 𝑖
939
+ indicates complex numbers. Therefore, we can obtain the covariance function by applying the inverse Fourier transform over
940
+ 𝑆
941
+
942
+ (
943
+ 𝑠
944
+ )
945
+ . However, the straightforward mixture in (5) will lead to a complex-valued covariance function. To obtain a real-valued covariance, inside each component we add another student
946
+ 𝑡
947
+ distribution with mean
948
+
949
+ 𝑢
950
+ 𝑞
951
+ so as to cancel out the imaginary part after integration. In addition, to make the derivation convenient, we scale the inverse variance and degree of freedom by a constant. We use the following power spectrum model,
952
+
953
+
954
+ 𝑆
955
+
956
+ (
957
+ 𝑠
958
+ )
959
+ =
960
+
961
+ 𝑞
962
+ =
963
+ 1
964
+ 𝑄
965
+ 𝑤
966
+ 𝑞
967
+
968
+ (
969
+ St
970
+
971
+ (
972
+ 𝑠
973
+ ;
974
+ 𝜇
975
+ 𝑞
976
+ ,
977
+ 4
978
+
979
+ 𝜋
980
+ 2
981
+
982
+ 𝜌
983
+ 𝑞
984
+ 2
985
+ ,
986
+ 2
987
+
988
+ 𝜈
989
+ )
990
+ +
991
+ St
992
+
993
+ (
994
+ 𝑠
995
+ ;
996
+
997
+ 𝜇
998
+ 𝑞
999
+ ,
1000
+ 4
1001
+
1002
+ 𝜋
1003
+ 2
1004
+
1005
+ 𝜌
1006
+ 𝑞
1007
+ 2
1008
+ ,
1009
+ 2
1010
+
1011
+ 𝜈
1012
+ )
1013
+ )
1014
+ .
1015
+
1016
+ (7)
1017
+
1018
+ Applying inverse Fourier transform in (6), we can derive the following covariance function,
1019
+
1020
+
1021
+ 𝑘
1022
+ StM
1023
+
1024
+ (
1025
+ 𝑥
1026
+ ,
1027
+ 𝑥
1028
+
1029
+ )
1030
+ =
1031
+
1032
+ 𝑞
1033
+ =
1034
+ 1
1035
+ 𝑄
1036
+ 𝑤
1037
+ 𝑞
1038
+
1039
+ 𝛾
1040
+ 𝜈
1041
+ ,
1042
+ 𝜌
1043
+ 𝑞
1044
+
1045
+ (
1046
+ 𝑥
1047
+ ,
1048
+ 𝑥
1049
+
1050
+ )
1051
+
1052
+ cos
1053
+
1054
+ (
1055
+ 2
1056
+
1057
+ 𝜋
1058
+
1059
+ 𝜇
1060
+ 𝑞
1061
+
1062
+ (
1063
+ 𝑥
1064
+
1065
+ 𝑥
1066
+
1067
+ )
1068
+ )
1069
+ ,
1070
+
1071
+ (8)
1072
+
1073
+ where
1074
+ 𝛾
1075
+ 𝜈
1076
+ ,
1077
+ 𝜌
1078
+ 𝑞
1079
+
1080
+ (
1081
+ 𝑥
1082
+ ,
1083
+ 𝑥
1084
+
1085
+ )
1086
+ =
1087
+ 2
1088
+ 1
1089
+
1090
+ 𝜈
1091
+ Γ
1092
+
1093
+ (
1094
+ 𝜈
1095
+ )
1096
+
1097
+ (
1098
+ 2
1099
+
1100
+ 𝜈
1101
+
1102
+ |
1103
+ 𝑥
1104
+
1105
+ 𝑥
1106
+
1107
+ |
1108
+ 𝜌
1109
+ 𝑞
1110
+ )
1111
+ 𝜈
1112
+
1113
+ 𝐾
1114
+ 𝜈
1115
+
1116
+ (
1117
+ 2
1118
+
1119
+ 𝜈
1120
+
1121
+ |
1122
+ 𝑥
1123
+
1124
+ 𝑥
1125
+
1126
+ |
1127
+ 𝜌
1128
+ 𝑞
1129
+ )
1130
+ is the Matérn kernel with degree of freedom
1131
+ 𝜈
1132
+ and length scale
1133
+ 𝜌
1134
+ 𝑞
1135
+ , and
1136
+ 𝐾
1137
+ 𝜈
1138
+ is the modified Bessel function of the second kind. The details of the derivation is left in Appendix. We now can see that the frequency information
1139
+ 𝜇
1140
+ 𝑞
1141
+ and component weights
1142
+ 𝑤
1143
+ 𝑞
1144
+ are embedded into the covariance function. By learning a GP model, we expect to capture the true frequencies of the solution. One can also construct a symmetric Gaussian mixture in the same way, and via inverse Fourier transform obtain
$$
k_{\text{GM}}(x, x') = \sum_{q=1}^{Q} w_q \exp\!\left(-\rho_q^2 (x - x')^2\right) \cos\!\left(2\pi (x - x')\mu_q\right). \tag{9}
$$
This is known as the spectral mixture kernel (Wilson and Adams, 2013), which was originally proposed to construct an expressive stationary kernel via its Fourier decomposition: in principle, a Gaussian mixture can approximate any spectral density well, provided enough components are used. Wilson and Adams (2013) showed that the spectral mixture kernel can recover many popular kernels, such as the rational quadratic and periodic kernels. In this paper, we take a different motivation and viewpoint: we argue that a similar design can be very effective for extracting the dominant frequencies in PDE solving.
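As a concrete illustration, the two mixture kernels above can be sketched in a few lines of NumPy/SciPy. This is only an illustrative sketch: the component count `Q = 2` and the parameter values below are arbitrary placeholders, not the settings used in the paper.

```python
import numpy as np
from scipy.special import gamma as Gamma, kv  # kv: modified Bessel function of the second kind

def k_gm(x, xp, w, mu, rho):
    """Gaussian (spectral) mixture kernel k_GM of Eq. (9), for scalar inputs."""
    d = x - xp
    return np.sum(w * np.exp(-rho**2 * d**2) * np.cos(2 * np.pi * d * mu))

def k_stm(x, xp, w, mu, rho, nu=1.5):
    """Student-t mixture kernel k_StM of Eq. (8): Matern envelope times a cosine."""
    d = np.abs(x - xp)
    if d == 0.0:
        return np.sum(w)  # the Matern kernel equals 1 at zero distance
    z = np.sqrt(2 * nu) * d / rho
    matern = 2**(1 - nu) / Gamma(nu) * z**nu * kv(nu, z)
    return np.sum(w * matern * np.cos(2 * np.pi * mu * (x - xp)))

# toy parameters: Q = 2 components with frequencies 5 and 30 (placeholders)
w = np.array([1.0, 0.5])
mu = np.array([5.0, 30.0])
rho = np.array([1.0, 1.0])
print(k_gm(0.3, 0.1, w, mu, rho), k_stm(0.3, 0.1, w, mu, rho))
```

At zero distance both kernels return $\sum_q w_q$, and the cosine factors inject the frequencies $\mu_q$ into the covariance, which is the property the text relies on.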
How to Determine the Component Number? Since the number of dominant frequencies is unknown a priori, the solution accuracy can be sensitive to the choice of the component number $Q$. Too small a $Q$ can miss important (high) frequencies, while too big a $Q$ can bring in excessive noisy frequencies. To address this problem, we set a large $Q$ (e.g., $50$), initialize the frequency parameters $\mu_q$ across a wide range, and then optimize the component weights in the log domain. This turns out to be equivalent to assigning each $w_q$ a Jeffreys prior. Specifically, define $\overline{w}_q = \log(w_q)$. Since we do not place an additional prior over $\overline{w}_q$, we can view $p(\overline{w}_q) \propto 1$. Then we have

$$
p(w_q) = p(\overline{w}_q) \left| \frac{\mathrm{d}\overline{w}_q}{\mathrm{d}w_q} \right| \propto \frac{1}{w_q}. \tag{10}
$$

The Jeffreys prior has a very high density near zero, and hence induces strong sparsity during the learning of $w_q$ (Figueiredo, 2001). Accordingly, the excessive frequency components can be automatically pruned, and the learning drives the remaining $\mu_q$'s toward the target frequencies. This has been verified by our experiments; see Fig. 4 in Section 6.
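The change of variables behind (10) can be checked numerically: the Jacobian of the log transform is exactly the $1/w$ factor that gives the Jeffreys density its spike at zero. A minimal sketch, with arbitrary sample values:

```python
import numpy as np

# Change of variables: w_bar = log(w), with a flat (improper) prior p(w_bar) ∝ 1.
# Then p(w) = p(w_bar) |d w_bar / d w| ∝ 1/w, as in Eq. (10).
def log_transform(w):
    return np.log(w)

eps = 1e-7
for w in [0.01, 0.5, 2.0]:
    # finite-difference estimate of the Jacobian |d w_bar / d w|
    jac = abs(log_transform(w + eps) - log_transform(w - eps)) / (2 * eps)
    # it matches the 1/w density shape; the density blows up as w -> 0,
    # which is what drives small weights toward zero (sparsity)
    assert abs(jac - 1.0 / w) < 1e-4 * (1.0 / w)
print("Jacobian of log(w) matches 1/w")
```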
GP Solver Model to Enable Fast Computation. To enable efficient and scalable computation, we multiply our covariance function across the input dimensions to construct a product kernel,

$$
\operatorname{cov}\left(f(\mathbf{x}), f(\mathbf{x}')\right) = \kappa(\mathbf{x}, \mathbf{x}' \mid \Theta) = \prod_{j=1}^{d} k_{\text{StM}}(x_j, x_j' \mid \boldsymbol{\theta}_q), \tag{11}
$$

where $\boldsymbol{\theta}_q = \{w_q, \mu_q, \rho_q\}$ and $\Theta = \{\boldsymbol{\theta}_q\}_{q=1}^{Q}$ are the kernel parameters. Note that the product kernel is equivalent to performing a (high-dimensional) feature mapping for each input dimension and then computing the tensor product across the features. It is a highly expressive structure and is commonly used in finite element design (Arnold et al., 2012). Next, we create a grid on the domain $\Omega$. We can randomly sample or specially design the locations at each input dimension, and then construct the grid through a Cartesian product. Denoting the locations at input dimension $j$ by $\mathbf{h}_j = [h_{j1}, \ldots, h_{jM_j}]$, we have an $M_1 \times \cdots \times M_d$ grid,

$$
\mathcal{G} = \mathbf{h}_1 \times \cdots \times \mathbf{h}_d = \left\{ \mathbf{x} = (x_1, \ldots, x_d) \mid x_j \in \mathbf{h}_j, \; 1 \le j \le d \right\}. \tag{12}
$$

We will use the grid points on the boundary $\partial\Omega$ to fit the boundary conditions, and all the grid points as the collocation points to fit the equation.
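The Cartesian-product grid in (12) is straightforward to build with NumPy. The following sketch constructs a small 2D grid and separates boundary from interior points; the domain $[0,1]^2$ and the grid sizes are arbitrary choices for the example, not the paper's settings.

```python
import numpy as np

# per-dimension locations h_1, h_2 (here evenly spaced on [0, 1]; they could
# also be randomly sampled, as noted in the text)
M1, M2 = 5, 4
h1 = np.linspace(0.0, 1.0, M1)
h2 = np.linspace(0.0, 1.0, M2)

# Cartesian-product grid G = h1 x h2, stored as an (M1*M2) x 2 array of points
X1, X2 = np.meshgrid(h1, h2, indexing="ij")
grid = np.stack([X1.ravel(), X2.ravel()], axis=1)

# boundary mask: points with any coordinate on the edge of the domain;
# these play the role of G ∩ ∂Ω for the boundary likelihood
on_boundary = (
    np.isclose(grid[:, 0], 0.0) | np.isclose(grid[:, 0], 1.0)
    | np.isclose(grid[:, 1], 0.0) | np.isclose(grid[:, 1], 1.0)
)
print(grid.shape, int(on_boundary.sum()))
```

For a $5 \times 4$ grid this yields 20 collocation points, 14 of which lie on the boundary.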
Denote the solution values at $\mathcal{G}$ by $\mathcal{U} = \{u(\mathbf{x}) \mid \mathbf{x} \in \mathcal{G}\}$, which is an $M_1 \times \cdots \times M_d$ array. According to the GP prior over $u(\cdot)$, we have a multivariate Gaussian prior distribution, $p(\mathcal{U}) = \mathcal{N}(\operatorname{vec}(\mathcal{U}) \mid \mathbf{0}, \mathbf{C})$, where $\operatorname{vec}(\cdot)$ flattens $\mathcal{U}$ into a vector and $\mathbf{C}$ is the covariance matrix computed from $\mathcal{G}$ with the kernel $\kappa(\cdot, \cdot)$. Denote the grid points on the boundary by $\mathcal{B} = \mathcal{G} \cap \partial\Omega$. To fit the boundary condition, we use a Gaussian likelihood, $p(\mathbf{g} \mid \mathbf{u}_b) = \mathcal{N}(\mathbf{g} \mid \mathbf{u}_b, \tau_1^{-1}\mathbf{I})$, where $\mathbf{g} = \operatorname{vec}(\{g(\mathbf{x}) \mid \mathbf{x} \in \mathcal{B}\})$, $\mathbf{u}_b$ are the values of $\mathcal{U}$ on $\mathcal{B}$ (flattened into a vector), and $\tau_1 > 0$ is the inverse variance. Next, we want to fit the equation at $\mathcal{G}$. To this end, we first obtain the predictions of all the relevant derivatives of $u$ in the PDE, e.g., $\partial_{x_1} u$ and $\partial_{x_1}\partial_{x_2} u$, at the grid $\mathcal{G}$. Since $u$'s derivatives also follow the GP prior, we use the kernel derivatives to obtain their cross-covariance functions (see (3)), with which we compute the GP conditional mean (conditioned on $\mathcal{U}$) as the prediction. Take $\partial_{x_1} u$ and $\partial_{x_1}\partial_{x_2} u$ as examples. We have

$$
\partial_{x_1} u(\mathbf{x}) = \partial_{x_1}\mathbf{k}(\mathbf{x}, \mathcal{G})\,\mathbf{C}^{-1}\operatorname{vec}(\mathcal{U}), \qquad
\partial_{x_1}\partial_{x_2} u(\mathbf{x}) = \partial_{x_1}\partial_{x_2}\mathbf{k}(\mathbf{x}, \mathcal{G})\,\mathbf{C}^{-1}\operatorname{vec}(\mathcal{U}), \tag{13}
$$

where $\mathbf{k}(\mathbf{x}, \mathcal{G}) = [\kappa(\mathbf{x}, \mathbf{x}_1), \ldots, \kappa(\mathbf{x}, \mathbf{x}_M)]$, $M = \prod_j M_j$, and all the $\mathbf{x}_j$ constitute $\mathcal{G}$. We can accordingly predict the values of all the relevant $u$ derivatives at $\mathcal{G}$ and combine them to obtain the PDE (see (1)) evaluation at $\mathcal{G}$, which we denote by $\mathcal{F}$. To fit the GP model to the equation, we use another Gaussian likelihood, $p(\mathbf{0} \mid \mathcal{U}) = \mathcal{N}(\mathbf{0} \mid \operatorname{vec}(\mathcal{F}), \tau_2^{-1}\mathbf{I})$, where $\mathbf{0}$ is a virtual observation and $\tau_2 > 0$. Note that we use the same framework as in (Chen et al., 2021; Long et al., 2022b). However, there are two critical differences. First, rather than randomly sampling the collocation points, we place all the collocation points on a grid. Second, rather than assigning a multivariate Gaussian distribution over the function values and all of their derivatives, we only model the distribution of the function values (at the grid), and we then use the GP conditional mean to predict the derivative values. As we will discuss in Section 4, these modeling strategies, coupled with the product covariance (11), enable highly efficient and scalable computation, yet do not require any low-rank approximations.
4 Algorithm

We maximize the log joint probability to estimate $\mathcal{U}$, the kernel parameters $\Theta$, and the likelihood inverse variances $\tau_1$ and $\tau_2$. To flexibly adjust the influence of the boundary likelihood so as to balance the competition between the boundary and equation likelihoods (Wang et al., 2020a; Wang et al., 2020c), we introduce a free hyper-parameter $\lambda_b > 0$ and maximize the weighted log joint probability,

$$
\begin{aligned}
\mathcal{L}(\mathcal{U}, \Theta, \tau_1, \tau_2) ={}& \log \mathcal{N}\left(\operatorname{vec}(\mathcal{U}) \mid \mathbf{0}, \mathbf{C}\right) + \lambda_b \log \mathcal{N}\left(\mathbf{g} \mid \mathbf{u}_b, \tau_1^{-1}\mathbf{I}\right) + \log \mathcal{N}\left(\mathbf{0} \mid \operatorname{vec}(\mathcal{F}), \tau_2^{-1}\mathbf{I}\right) \\
={}& -\frac{1}{2}\log|\mathbf{C}| - \frac{1}{2}\operatorname{vec}(\mathcal{U})^{\top}\mathbf{C}^{-1}\operatorname{vec}(\mathcal{U}) + \lambda_b\left[\frac{N_b}{2}\log\tau_1 - \frac{\tau_1}{2}\left\|\mathbf{u}_b - \mathbf{g}\right\|^2\right] \\
&+ \frac{M}{2}\log\tau_2 - \frac{\tau_2}{2}\left\|\operatorname{vec}(\mathcal{F})\right\|^2 + \mathrm{const}.
\end{aligned} \tag{14}
$$

Naive computation of $\mathcal{L}$ is extremely expensive when the grid is dense, namely, when $M$ is large. That is because the covariance matrix $\mathbf{C}$ is over all the grid points and of size $M \times M$ ($M = \prod_j M_j$). Also, to obtain $\mathcal{F}$, we need to compute the cross-covariance between every derivative needed in the PDE and $u$ across all the grid points. Consequently, the naive computation of the log determinant and inverse of $\mathbf{C}$ (see (14)) and of the required cross-covariances takes $\mathcal{O}(M^3)$ time and $\mathcal{O}(M^2)$ space, respectively, which can be infeasible even when each $M_j$ is relatively small. For example, when $d = 3$ and $M_1 = M_2 = M_3 = 100$, we have $M = 10^6$, and the computation with $\mathbf{C}$ will be too costly to be practical (on most computing platforms).
Thanks to the facts that (1) our prior distribution is only over the function values at the grid, and (2) our covariance function is a product over the input dimensions (see (11)), we can derive a Kronecker product structure in $\mathbf{C}$, namely, $\mathbf{C} = \mathbf{C}_1 \otimes \cdots \otimes \mathbf{C}_d$, where $\mathbf{C}_j = k_{\text{StM}}(\mathbf{h}_j, \mathbf{h}_j)$ is the kernel matrix on $\mathbf{h}_j$ (the locations at input dimension $j$), of size $M_j \times M_j$. Note that we can also use $k_{\text{GM}}$ in (9). Using the Kronecker product properties (Minka, 2000), we obtain

$$
\log|\mathbf{C}| = \sum_{j=1}^{d} \frac{M}{M_j} \log|\mathbf{C}_j|, \qquad
\mathbf{C}^{-1}\operatorname{vec}(\mathcal{U}) = \left(\mathbf{C}_1^{-1} \otimes \cdots \otimes \mathbf{C}_d^{-1}\right)\operatorname{vec}(\mathcal{U}) = \operatorname{vec}\left(\mathcal{U} \times_1 \mathbf{C}_1^{-1} \times_2 \cdots \times_d \mathbf{C}_d^{-1}\right), \tag{15}
$$

where $\times_j$ denotes the tensor-matrix product at mode $j$. Accordingly, we can first compute the local log determinant and inverse at each input dimension (i.e., for each $\mathbf{C}_j$), which reduces the time and space complexity to $\mathcal{O}(\sum_{j=1}^{d} M_j^3)$ and $\mathcal{O}(\sum_{j=1}^{d} M_j^2)$, respectively. Then we perform the multilinear operation in the last line of (15), i.e., sequentially multiplying the array $\mathcal{U}$ by each $\mathbf{C}_j^{-1}$, which takes $\mathcal{O}\left(\left(\sum_{j=1}^{d} M_j\right) M\right)$ time. The computational cost is substantially reduced.
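The two identities in (15) are easy to verify numerically on a small 2D example. The sketch below compares the factorized computations against the dense Kronecker matrix; the matrix sizes are arbitrary, and the random SPD matrices merely stand in for the per-dimension kernel matrices $\mathbf{C}_j$.

```python
import numpy as np

rng = np.random.default_rng(0)
M1, M2 = 4, 3

def random_spd(m):
    # random symmetric positive-definite matrix, standing in for a kernel matrix C_j
    A = rng.standard_normal((m, m))
    return A @ A.T + m * np.eye(m)

C1, C2 = random_spd(M1), random_spd(M2)
U = rng.standard_normal((M1, M2))  # solution values on the grid
C = np.kron(C1, C2)                # dense C = C1 ⊗ C2, of size (M1*M2) x (M1*M2)

# log|C| = (M/M1) log|C1| + (M/M2) log|C2|, the first identity in (15)
logdet_kron = M2 * np.linalg.slogdet(C1)[1] + M1 * np.linalg.slogdet(C2)[1]
assert np.isclose(logdet_kron, np.linalg.slogdet(C)[1])

# C^{-1} vec(U) = vec(U x_1 C1^{-1} x_2 C2^{-1}); with row-major vec, the two
# mode products reduce to C1^{-1} U C2^{-T}
lhs = np.linalg.solve(C, U.ravel())
rhs = (np.linalg.solve(C1, U) @ np.linalg.inv(C2).T).ravel()
assert np.allclose(lhs, rhs)
print("Kronecker identities verified")
```

The dense path costs $\mathcal{O}(M^3)$ while the factorized path only ever inverts the small $M_j \times M_j$ blocks, which is the whole point of the grid design.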
Furthermore, since our product covariance function factorizes over the input dimensions, the cross-covariance between any derivative of $u$ and $u$ itself still maintains a product form, because only the kernel(s) at the corresponding input dimension(s) need to be differentiated. For example,

$$
\operatorname{cov}\left(\partial_{x_1}\partial_{x_2} u(\mathbf{x}), u(\mathbf{x}')\right) = \partial_{x_1}\partial_{x_2}\kappa(\mathbf{x}, \mathbf{x}') = \partial_{x_1}\partial_{x_2} \prod_{j} \kappa(x_j, x_j')
= \partial_{x_1}\kappa(x_1, x_1') \cdot \partial_{x_2}\kappa(x_2, x_2') \cdot \prod_{j \notin \{1,2\}} \kappa(x_j, x_j'). \tag{16}
$$

Accordingly, we can also obtain Kronecker product structures when predicting each derivative of $u$. Take $\partial_{x_1}\partial_{x_2} u$ as an example. According to (13), we can derive that

$$
\begin{aligned}
\partial_{x_1}\partial_{x_2} u(\mathbf{x}) &= \left( \partial_{x_1}\mathbf{k}(x_1, \mathbf{h}_1) \otimes \partial_{x_2}\mathbf{k}(x_2, \mathbf{h}_2) \otimes \cdots \otimes \mathbf{k}(x_d, \mathbf{h}_d) \right)\left( \mathbf{C}_1^{-1} \otimes \cdots \otimes \mathbf{C}_d^{-1} \right)\operatorname{vec}(\mathcal{U}) \\
&= \left( \partial_{x_1}\mathbf{k}(x_1, \mathbf{h}_1)\,\mathbf{C}_1^{-1} \otimes \partial_{x_2}\mathbf{k}(x_2, \mathbf{h}_2)\,\mathbf{C}_2^{-1} \otimes \cdots \otimes \mathbf{k}(x_d, \mathbf{h}_d)\,\mathbf{C}_d^{-1} \right)\operatorname{vec}(\mathcal{U}) \\
&= \operatorname{vec}\left( \mathcal{U} \times_1 \partial_{x_1}\mathbf{k}(x_1, \mathbf{h}_1)\,\mathbf{C}_1^{-1} \times_2 \partial_{x_2}\mathbf{k}(x_2, \mathbf{h}_2)\,\mathbf{C}_2^{-1} \times_3 \mathbf{k}(x_3, \mathbf{h}_3)\,\mathbf{C}_3^{-1} \times_4 \cdots \times_d \mathbf{k}(x_d, \mathbf{h}_d)\,\mathbf{C}_d^{-1} \right).
\end{aligned}
$$

Denote the values of $\partial_{x_1}\partial_{x_2} u$ at the grid $\mathcal{G}$ by $\partial_{x_1}\partial_{x_2}\mathcal{U} \triangleq \{\partial_{x_1}\partial_{x_2} u(\mathbf{x}) \mid \mathbf{x} \in \mathcal{G}\}$. Then it is straightforward to obtain $\partial_{x_1}\partial_{x_2}\mathcal{U} = \mathcal{U} \times_1 \nabla_1\mathbf{C}_1 \cdot \mathbf{C}_1^{-1} \times_2 \nabla_1\mathbf{C}_2 \cdot \mathbf{C}_2^{-1}$, where $\nabla_1$ means taking the derivative with respect to the first input variable, $\nabla_1\mathbf{C}_1 = [\partial_{h_{11}}\mathbf{k}(h_{11}, \mathbf{h}_1); \ldots; \partial_{h_{1M_1}}\mathbf{k}(h_{1M_1}, \mathbf{h}_1)]$, and $\nabla_1\mathbf{C}_2 = [\partial_{h_{21}}\mathbf{k}(h_{21}, \mathbf{h}_2); \ldots; \partial_{h_{2M_2}}\mathbf{k}(h_{2M_2}, \mathbf{h}_2)]$. Hence, we just need to perform two tensor-matrix products, which takes $\mathcal{O}((M_1 + M_2)M)$ operations and is efficient and convenient. Similarly, we can compute the predictions of all the associated $u$ derivatives in the PDE operator, with which we obtain $\mathcal{F}$, the PDE evaluation at the grid in (14). We can then use automatic differentiation to calculate the gradient and maximize (14).
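The same recipe (grid values times a differentiated kernel matrix times the inverse kernel matrix) can be sketched in 1D. For simplicity, this illustration uses a square-exponential kernel, whose derivative is available in closed form, rather than the mixture kernels above; the grid size, length scale, and jitter are arbitrary choices, and the predicted derivative is checked against the analytic derivative of a smooth test function.

```python
import numpy as np

# 1D grid h and function values u at the grid
M = 60
h = np.linspace(0.0, 1.0, M)
u = np.sin(2 * np.pi * h)

ell = 0.1  # length scale (arbitrary)
K = np.exp(-(h[:, None] - h[None, :])**2 / (2 * ell**2))
# d/dx of k(x, x') for the SE kernel: -(x - x')/ell^2 * k(x, x');
# evaluated at all grid pairs, this plays the role of "∇_1 C"
D = -(h[:, None] - h[None, :]) / ell**2 * K

# GP conditional-mean prediction of u'(x) at the grid: (∇_1 C) C^{-1} u
alpha = np.linalg.solve(K + 1e-8 * np.eye(M), u)  # small jitter for stability
du_pred = D @ alpha
du_true = 2 * np.pi * np.cos(2 * np.pi * h)

interior = slice(5, -5)  # the edges of the grid are less accurate
err = np.max(np.abs(du_pred[interior] - du_true[interior]))
print("max interior derivative error:", err)
```

No extra optimization is needed: differentiating the kernel matrix once gives derivative predictions for free from the grid values, which is exactly how the PDE residual $\mathcal{F}$ is assembled.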
Algorithm Complexity. The time complexity of our algorithm is $\mathcal{O}\left(\sum_j M_j^3 + \left(\sum_j M_j\right)M\right)$. The space complexity is $\mathcal{O}\left(\sum_j M_j^2 + M\right)$, which covers the storage of the covariance matrix at each input dimension and the solution estimate at the grid $\mathcal{G}$, namely $\mathcal{U}$.
5 Related Work

Although the PINN has many success stories, e.g., (Raissi et al., 2020; Chen et al., 2020; Jin et al., 2021; Sirignano and Spiliopoulos, 2018; Zhu et al., 2019; Geneva and Zabaras, 2020; Sahli Costabal et al., 2020), its training is known to be challenging, partly because applying differential operators to the NN can complicate the loss landscape (Krishnapriyan et al., 2021). Recent works have analyzed common failure modes of PINNs, which include modeling problems that exhibit high-frequency, multi-scale, chaotic, or turbulent behaviors (Wang et al., 2020c; Wang et al., 2020b; Wang et al., 2020a; Wang et al., 2022), or cases where the governing PDEs are stiff (Krishnapriyan et al., 2021; Mojgani et al., 2022). One class of approaches to mitigate the training challenge sets different weights for the boundary and residual loss terms. For example, Wight and Zhao (2020) suggested setting a large weight for the boundary loss to prevent the dominance of the residual loss. Wang et al. (2020a) proposed a dynamic weighting scheme based on the gradient statistics of the loss terms. Wang et al. (2020c) developed an adaptive weighting approach based on the eigenvalues of the NTK. Liu and Wang (2021) employed a mini-max optimization and updated the loss weights via stochastic ascent. McClenny and Braga-Neto (2020) used a multiplicative soft attention mask to dynamically re-weight the loss term on each data point and collocation point. Another strategy is to modify the NN architecture so as to exactly satisfy the boundary conditions, e.g., (Lu et al., 2021; Lyu et al., 2020; Lagaris et al., 1998). However, these methods are restricted to particular types of boundary conditions and are less flexible than the original PINN framework. Tancik et al. (2020) and Wang et al. (2021b) used Gaussian distributions to construct random Fourier features to improve the learning of high-frequency and multi-scale information. The number of Gaussian variances and their scales is critical to the success of these methods, but these hyperparameters are quite difficult to choose.
Earlier works (Graepel, 2003) have used GPs for solving linear PDEs with noisy measurements of the source terms. In (Wang et al., 2021a), the rationale and guarantees of using a GP as a prior for PDE solutions are discussed; that work also justifies the use of the product kernel in terms of sample path properties. The recent work (Chen et al., 2021) develops a general approach for solving both linear and nonlinear PDEs. Long et al. (2022b) proposed a GP framework to integrate various differential equations. The recent work (Chen et al., 2023) uses sparse inverse Cholesky factorization to approximate the kernel matrix so as to handle a large number of collocation points. These methods use SE and Matérn kernels and struggle to capture high-frequency and multi-scale solutions. The recent work (Pförtner et al., 2022) proposes a physics-informed GP solver for linear PDEs that generalizes weighted residuals. In (Härkönen et al., 2022), a GP kernel is constructed via the Ehrenpreis-Palamodov fundamental principle and a nonlinear Fourier transform to solve linear PDEs with constant coefficients; that work also derives the spectral mixture kernel as an instance of its kernel design. The computational advantage of Kronecker product structures has been recognized in (Saatçi, 2012) and applied in other tasks, such as nonparametric tensor decomposition (Xu et al., 2012), sparse approximation with massive inducing points (Wilson and Nickisch, 2015; Izmailov et al., 2018), and high-dimensional output regression (Zhe et al., 2019). Wilson et al. (2015) further point out that if one uses a regular (evenly-spaced) grid, each kernel matrix has a Toeplitz structure, which can lead to $\mathcal{O}(n \log n)$ computation. However, in machine learning applications, data is typically not observed on a grid, and the Kronecker product has limited usage. By contrast, for PDE solving, it is natural to estimate the solution values on a grid, which opens the possibility of combining Kronecker products with GPs for efficient computation. More general discussions about Bayesian learning and PDE problems are given in (Owhadi, 2015; Cockayne et al., 2017). Tensor methods used in numerical computation are discussed in (Gavrilyuk and Khoromskij, 2019).
6 Experiment

To evaluate GP-HM, we considered three commonly used benchmark PDE families in the literature on machine learning solvers (Raissi et al., 2019; Wang et al., 2021b; Krishnapriyan et al., 2021): Poisson, Allen-Cahn, and advection. Following the prior works, we fabricated a series of solutions to thoroughly examine the performance. The details are given in Section B of the Appendix.

We compared with the following state-of-the-art ML solvers: (1) the standard PINN; (2) Weighted PINN (W-PINN), which up-weights the boundary loss to reduce the dominance of the residual loss and to more effectively propagate the boundary information; (3) Rowdy (Jagtap et al., 2022), a PINN with an adaptive activation function that combines a standard activation with several $\sin$ or $\cos$ activations; (4) RFF-PINN, which feeds random Fourier features to the PINN (Wang et al., 2021b); to ensure RFF-PINN achieves its best performance, we followed (Wang et al., 2020c) to dynamically re-weight the loss terms based on the NTK eigenvalues; (5) the spectral method (Boyd, 2001), which approximates the solution with a linear combination of trigonometric bases and estimates the basis coefficients via least mean squares. In addition, we also tested (6) GP-SE and (7) GP-Matérn, GP solvers with the square exponential (SE) and the Matérn kernel. The details about the hyperparameter setting and tuning are provided in Section B of the Appendix. We denote our method using the covariance function based on (8) and (9) by GP-HM-StM and GP-HM-GM, respectively. We also compared with several traditional numerical solvers: (8) Chebfun, which solves PDEs based on Chebyshev interpolants, and (9) Finite Difference (FD), which solves the PDEs via discretization based on finite differences. We used the PyPDE library to solve the 1D/2D Poisson equations and 1D advection (using the method of lines). Note that PyPDE does not support solving nonlinear stationary PDEs, namely the 1D/2D Allen-Cahn equation in (28), so we implemented finite differences with SciPy, using a Krylov method for root finding. Note also that the Chebfun library does not support 2D Poisson and nonlinear stationary PDEs, namely the 1D/2D Allen-Cahn equation, so its usage is very limited here. We employed the default settings of the Chebfun library. When using PyPDE, we set the spatial discretization to 400 and used 400 time steps (if needed). For 1D Allen-Cahn, the spatial discretization is set to 400. For 2D Allen-Cahn, we used a $45 \times 45$ grid; otherwise, the root finding either ran forever or failed due to numerical instability. We also tested the spectral Galerkin method implemented by the Shenfun library. However, we found it failed in every test case (the relative $L_2$ error is in the several thousands), so we do not report its results.
| Method | $u_1$ (1D) | $u_2$ (1D) | $u_3$ (1D) | $u_4$ (1D) | $u_5$ (1D) | $u_6$ (2D) | $u_7$ (2D) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PINN | 1.36e0 | 1.40e0 | 1.00e0 | 1.42e1 | 6.03e-1 | 1.63e0 | 9.99e-1 |
| W-PINN | 1.31e0 | 2.65e-1 | 1.86e0 | 2.60e1 | 6.94e-1 | 1.63e0 | 6.75e-1 |
| RFF-PINN | 4.97e-4 | 2.00e-5 | 7.29e-2 | 2.80e-1 | 5.74e-1 | 1.69e0 | 7.99e-1 |
| Rowdy | 1.70e0 | 1.00e0 | 1.00e0 | 1.01e0 | 1.03e0 | 2.24e1 | 7.36e-1 |
| Spectral method | 2.36e-2 | 3.47e0 | 1.02e0 | 1.02e0 | 9.98e-1 | 1.58e-2 | 1.04e0 |
| Chebfun | **3.05e-11** | **1.17e-11** | **5.81e-11** | **1.14e-10** | **8.95e-10** | N/A | N/A |
| Finite Difference | 5.58e-1 | 4.78e-2 | 2.34e-1 | 1.47e0 | 1.40e0 | 2.33e-1 | 1.75e-2 |
| GP-SE | 2.70e-2 | 9.99e-1 | 9.99e-1 | 3.19e-1 | 9.75e-1 | 9.99e-1 | 9.53e-1 |
| GP-Matérn | 3.32e-2 | 9.8e-1 | 5.15e-1 | 1.83e-2 | 6.27e-1 | 6.28e-1 | 3.54e-2 |
| GP-HM-GM | **3.99e-7** | 2.73e-3 | 3.92e-6 | 1.55e-6 | 1.82e-3 | **6.46e-5** | 1.06e-3 |
| GP-HM-StM | 6.53e-7 | **2.71e-3** | **3.17e-6** | **8.97e-7** | **4.22e-4** | 6.87e-5 | **1.02e-3** |

Table 1: Relative $L_2$ error in solving 1D and 2D Poisson equations, where the $u_j$'s are different high-frequency and multi-scale solutions: $u_1 = \sin(100x)$, $u_2 = \sin(x) + 0.1\sin(20x) + 0.05\cos(100x)$, $u_3 = \sin(6x)\cos(100x)$, $u_4 = x\sin(200x)$, $u_5 = \sin(500x) - 2(x - 0.5)^2$, $u_6 = \sin(100x)\sin(100y)$, and $u_7 = \sin(6x)\sin(20x) + \sin(6y)\sin(20y)$.
| Method | 1D Allen-Cahn $u_1$ | 1D Allen-Cahn $u_2$ | 2D Allen-Cahn | 1D Advection |
| --- | --- | --- | --- | --- |
| PINN | 1.41e0 | 1.14e1 | 1.96e1 | 1.00e0 |
| W-PINN | 1.34e0 | 1.45e1 | 2.03e1 | 1.01e0 |
| RFF-PINN | 1.24e-3 | 2.46e-1 | 7.17e-1 | 9.96e-1 |
| Rowdy | 1.30e0 | 1.31e0 | 1.18e0 | 1.03e0 |
| Spectral method | 2.34e-2 | 2.45e1 | 2.45e1 | 2.67e0 |
| Chebfun | **1.39e-08** | **2.94e-10** | N/A | 1.39e0 |
| Finite Difference | 2.32e-01 | 2.36e-1 | 3.23e0 | 1.29e-1 |
| GP-SE | 2.74e-2 | 1.06e-2 | 3.48e-1 | 9.99e-1 |
| GP-Matérn | 3.32e-2 | 5.16e-2 | 2.96e-1 | 9.99e-1 |
| GP-HM-StM | 7.71e-6 | 4.76e-6 | **2.99e-3** | **9.08e-4** |
| GP-HM-GM | **4.91e-6** | **4.24e-6** | 5.78e-3 | 3.59e-3 |

Table 2: Relative $L_2$ error in solving 1D and 2D Allen-Cahn equations and the 1D advection equation, where $u_1$ and $u_2$ are two test solutions for 1D Allen-Cahn: $u_1 = \sin(100x)$, $u_2 = \sin(6x)\cos(100x)$. The test solution for 2D Allen-Cahn is $(\sin(x) + 0.1\sin(20x) + \cos(100x))(\sin(y) + 0.1\sin(20y) + \cos(100y))$, and for the 1D advection equation it is $\sin(x - 200t)$.
Solution Accuracy. We report the relative 𝐿2 error (normalized root-mean-square error) of each method in Tables 1 and 2. The best result, and the smaller error between GP-HM-StM and GP-HM-GM, are shown in bold. Among all the ML solvers, our method achieves the smallest solution error in every case except the 1D Poisson equation with solution 𝑢2, where RFF-PINN is better. In all cases, the solution error of GP-HM reaches at least the 1e-3 level, and in quite a few cases it even reaches errors around 1e-6 to 1e-7. This shows that GP-HM can successfully solve all these equations. By contrast, GP solvers using the plain SE and Matérn kernels produce errors several orders of magnitude larger. The standard PINN and W-PINN essentially failed on every equation. While Rowdy improved upon PINN and W-PINN in most cases, its error is still quite large. The inferior performance of the spectral method implies that trigonometric bases alone are not sufficient. With random Fourier features, RFF-PINN greatly boosts the performance of PINN and W-PINN in many cases, yet in most cases it remains much inferior to GP-HM. The performance of RFF-PINN is also very sensitive to the number and scales of the Gaussian variances, and these hyper-parameters are not easy to choose; we tried 20 settings and report the best performance (see Section B in the Appendix). Among the traditional solvers, Chebfun performs very well and achieves the highest solution accuracy except for the 1D advection problem. However, Chebfun is limited to 1D problems and temporal PDEs; it cannot handle 2D stationary PDEs, whether linear or nonlinear. Finite Difference provides reasonable accuracy, but its performance is consistently much worse than GP-HM. This might be due to the difficulty of the root-finding problem caused by the high-frequency/multi-scale information in the source term. Overall, our method is general enough to solve different types of PDEs (1D/2D, linear/nonlinear, stationary and non-stationary), and achieving satisfactory accuracy does NOT require changing the computational framework to re-develop the solver. By contrast, the success of numerical solvers is known to be tightly bound to the specific problem, domain knowledge, skillful implementation, and numerous numerical tricks. Any change in these aspects can cause the solver to fail and demand a re-design and re-implementation, which brings significant challenges in usage.
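The relative 𝐿2 error used throughout these comparisons is simply the Euclidean norm of the residual normalized by the norm of the true solution. A minimal sketch (the test grid and perturbation below are illustrative, not the paper's exact evaluation setup):

```python
import numpy as np

def relative_l2(u_pred, u_true):
    # Relative L2 error: ||u_pred - u_true||_2 / ||u_true||_2,
    # the normalized root-mean-square error reported in Tables 1 and 2.
    u_pred, u_true = np.asarray(u_pred, float), np.asarray(u_true, float)
    return np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)

# Illustrative check on the 1D Allen-Cahn test solution u1 = sin(100 x).
x = np.linspace(0.0, 2.0 * np.pi, 2000)
u_true = np.sin(100.0 * x)
u_pred = u_true + 1e-3 * np.random.default_rng(0).standard_normal(x.size)
err = relative_l2(u_pred, u_true)  # on the order of 1e-3
```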
Point-wise Error. We then show the point-wise solution error in Figs. 1, 2, 3, and in Appendix Figs. 5, 6, 7. GP-SE has difficulty capturing high frequencies. While GP-Matérn is better, it is unable to capture all the scale information. RFF-PINN successfully captured the multi-scale frequencies in Fig. 1, but it failed in the more challenging cases of Figs. 2 and 3. In 2D Poisson and 1D advection, the point-wise error of both GP-HM-StM and GP-HM-GM is quite uniform across the domain and close to zero (dark blue); see Fig. 3 and Appendix Figs. 6 and 7. By contrast, the other methods exhibit large errors in a few local regions. These results show that GP-HM not only gives superior global accuracy, but also locally recovers individual solution values.
Frequency Learning. Third, we investigated the learned component weights 𝑤𝑞 and frequencies 𝜇𝑞 of GP-HM. In Fig. 4, we show the results for two Poisson equations. Although the number of components 𝑄 is set to be much larger than the number of true frequencies, most of the estimated weights 𝑤𝑞 are very small (less than 10⁻¹⁰). That means the excessive frequency components have been automatically pruned. The remaining components with significant weights exactly match the number of true frequencies in the solution, and the frequency estimates 𝜇𝑞 are very close to the ground truth. This demonstrates that the implicit Jeffreys prior (induced by optimizing 𝑤𝑞 in the log space) can indeed implement sparsity, select the right number of frequencies, and recover the ground-truth frequency values. Finally, we show additional results in Section C of the Appendix.
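Pruning by weight magnitude can be sketched as follows; the numbers are purely illustrative stand-ins for learned values, not outputs of the paper's runs:

```python
import numpy as np

# Hypothetical learned mixture weights and frequencies after training
# (illustrative values only): most weights collapse below 1e-10, and the
# surviving components indicate the recovered frequencies.
w = np.array([3.2e-1, 8.0e-11, 1.1e-13, 4.9e-2, 2.7e-12])
mu = np.array([100.0, 37.2, 5.1, 20.0, 61.8])

keep = w > 1e-10          # drop components with negligible weight
active_mu = mu[keep]      # surviving frequencies, compared to ground truth
```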
Figure 1: Prediction for the 1D Poisson equation with solution sin(𝑥) + 0.1 sin(20𝑥) + 0.05 cos(100𝑥).

Figure 2: Prediction for the 1D Poisson equation with solution sin(500𝑥) − 2(𝑥 − 0.5)².

Figure 3: Point-wise solution error for the 2D Allen-Cahn equation, where the solution is (sin(𝑥) + 0.1 sin(20𝑥) + cos(100𝑥))(sin(𝑦) + 0.1 sin(20𝑦) + cos(100𝑦)).

(a) Poisson-1D (b) Poisson-2D 𝑥-dim (c) Poisson-2D 𝑦-dim

Figure 4: The learned component weights and frequency values. For each number pair a(b) in the figure, "a" is the frequency learned by GP-HM and "b" is the ground truth. The expressions on the top are the solutions.
7 Conclusion

We have presented GP-HM, a GP solver specifically designed for high-frequency and multi-scale PDEs. On a set of benchmark tasks, GP-HM shows promising performance. This might motivate alternative directions for developing machine learning solvers. In the future, we plan to develop more powerful optimization algorithms to further accelerate convergence and to investigate GP-HM in a variety of practical applications.
Acknowledgments

This work has been supported by MURI AFOSR grant FA9550-20-1-0358, NSF CAREER Award IIS-2046295, and NSF OAC-2311685.
References

Arnold, D. N., Boffi, D., and Bonizzoni, F. (2012). Tensor product finite element differential forms and their approximation properties. arXiv preprint arXiv:1212.6559.

Bishop, C. M. (2007). Pattern Recognition and Machine Learning. Springer.

Boyd, J. P. (2001). Chebyshev and Fourier Spectral Methods. Courier Corporation.

Chen, Y., Hosseini, B., Owhadi, H., and Stuart, A. M. (2021). Solving and learning nonlinear PDEs with Gaussian processes. arXiv preprint arXiv:2103.12959.

Chen, Y., Lu, L., Karniadakis, G. E., and Dal Negro, L. (2020). Physics-informed neural networks for inverse problems in nano-optics and metamaterials. Optics Express, 28(8):11618–11633.

Chen, Y., Owhadi, H., and Schäfer, F. (2023). Sparse Cholesky factorization for solving nonlinear PDEs via Gaussian processes. arXiv preprint arXiv:2304.01294.

Cockayne, J., Oates, C., Sullivan, T., and Girolami, M. (2017). Probabilistic numerical methods for PDE-constrained Bayesian inverse problems. In AIP Conference Proceedings, volume 1853. AIP Publishing.

Figueiredo, M. (2001). Adaptive sparseness using Jeffreys prior. Advances in Neural Information Processing Systems, 14.

Frostig, R., Johnson, M. J., and Leary, C. (2018). Compiling machine learning programs via high-level tracing. Systems for Machine Learning, 4(9).

Gavrilyuk, I. and Khoromskij, B. N. (2019). Tensor numerical methods: actual theory and recent applications. Computational Methods in Applied Mathematics, 19(1):1–4.

Geneva, N. and Zabaras, N. (2020). Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. Journal of Computational Physics, 403:109056.

Graepel, T. (2003). Solving noisy linear operator equations by Gaussian processes: Application to ordinary and partial differential equations. In ICML, volume 3, pages 234–241.

Grimmett, G. and Stirzaker, D. (2020). Probability and Random Processes. Oxford University Press.

Härkönen, M., Lange-Hegermann, M., and Raiță, B. (2022). Gaussian process priors for systems of linear partial differential equations with constant coefficients. arXiv preprint arXiv:2212.14319.

Izmailov, P., Novikov, A., and Kropotov, D. (2018). Scalable Gaussian processes with billions of inducing inputs via tensor train decomposition. In International Conference on Artificial Intelligence and Statistics, pages 726–735.

Jagtap, A. D., Shin, Y., Kawaguchi, K., and Karniadakis, G. E. (2022). Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions. Neurocomputing, 468:165–180.

Jin, X., Cai, S., Li, H., and Karniadakis, G. E. (2021). NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. Journal of Computational Physics, 426:109951.

Khintchine, A. (1934). Korrelationstheorie der stationären stochastischen Prozesse. Mathematische Annalen, 109(1):604–615.

Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., and Mahoney, M. W. (2021). Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548–26560.

Lagaris, I. E., Likas, A., and Fotiadis, D. I. (1998). Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000.

Lathi, B. P. (1998). Modern Digital and Analog Communication Systems. Oxford University Press, Inc.

Liu, D. and Wang, Y. (2021). A dual-dimer method for training physics-constrained neural networks with minimax architecture. Neural Networks, 136:112–125.

Long, D., Wang, Z., Krishnapriyan, A., Kirby, R., Zhe, S., and Mahoney, M. (2022a). AutoIP: A united framework to integrate physics into Gaussian processes. In International Conference on Machine Learning, pages 14210–14222. PMLR.

Long, D., Wang, Z., Krishnapriyan, A., Kirby, R., Zhe, S., and Mahoney, M. (2022b). AutoIP: A united framework to integrate physics into Gaussian processes. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 14210–14222. PMLR.

Lu, L., Pestourie, R., Yao, W., Wang, Z., Verdugo, F., and Johnson, S. G. (2021). Physics-informed neural networks with hard constraints for inverse design. SIAM Journal on Scientific Computing, 43(6):B1105–B1132.

Lyu, L., Wu, K., Du, R., and Chen, J. (2020). Enforcing exact boundary and initial conditions in the deep mixed residual method. arXiv preprint arXiv:2008.01491.

McClenny, L. and Braga-Neto, U. (2020). Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv preprint arXiv:2009.04544.

Minka, T. P. (2000). Old and new matrix algebra useful for statistics. www.stat.cmu.edu/~minka/papers/matrix.html.

Mojgani, R., Balajewicz, M., and Hassanzadeh, P. (2022). Lagrangian PINNs: A causality-conforming solution to failure modes of physics-informed neural networks. arXiv preprint arXiv:2205.02902.

Owhadi, H. (2015). Bayesian numerical homogenization. Multiscale Modeling & Simulation, 13(3):812–828.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.

Pförtner, M., Steinwart, I., Hennig, P., and Wenger, J. (2022). Physics-informed Gaussian process regression generalizes linear PDE solvers. arXiv preprint arXiv:2212.12474.

Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., and Courville, A. (2019). On the spectral bias of neural networks. In International Conference on Machine Learning, pages 5301–5310. PMLR.

Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2017). Machine learning of linear differential equations using Gaussian processes. Journal of Computational Physics, 348:683–693.

Raissi, M., Perdikaris, P., and Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707.

Raissi, M., Yazdani, A., and Karniadakis, G. E. (2020). Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026–1030.

Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. MIT Press.

Saatçi, Y. (2012). Scalable Inference for Structured Gaussian Process Models. PhD thesis, University of Cambridge.

Sahli Costabal, F., Yang, Y., Perdikaris, P., Hurtado, D. E., and Kuhl, E. (2020). Physics-informed neural networks for cardiac activation mapping. Frontiers in Physics, 8:42.

Sirignano, J. and Spiliopoulos, K. (2018). DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364.

Tancik, M., Srinivasan, P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J., and Ng, R. (2020). Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537–7547.

Wang, J., Cockayne, J., Chkrebtii, O., Sullivan, T. J., and Oates, C. J. (2021a). Bayesian numerical methods for nonlinear partial differential equations. Statistics and Computing, 31:1–20.

Wang, S., Sankaran, S., and Perdikaris, P. (2022). Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404.

Wang, S., Teng, Y., and Perdikaris, P. (2020a). Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv preprint arXiv:2001.04536.

Wang, S., Wang, H., and Perdikaris, P. (2020b). On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. arXiv preprint arXiv:2012.10047.

Wang, S., Wang, H., and Perdikaris, P. (2021b). On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 384:113938.

Wang, S., Yu, X., and Perdikaris, P. (2020c). When and why PINNs fail to train: A neural tangent kernel perspective. arXiv preprint arXiv:2007.14527.

Wiener, N. (1930). Generalized harmonic analysis. Acta Mathematica, 55(1):117–258.

Wight, C. L. and Zhao, J. (2020). Solving Allen-Cahn and Cahn-Hilliard equations using the adaptive physics informed neural networks. arXiv preprint arXiv:2007.04542.

Wilson, A. and Adams, R. (2013). Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning, pages 1067–1075. PMLR.

Wilson, A. and Nickisch, H. (2015). Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In International Conference on Machine Learning, pages 1775–1784.

Wilson, A. G., Dann, C., and Nickisch, H. (2015). Thoughts on massively scalable Gaussian processes. arXiv preprint arXiv:1511.01870.

Xu, Z., Yan, F., and Qi, Y. (2012). Infinite Tucker decomposition: nonparametric Bayesian models for multiway data analysis. In Proceedings of the 29th International Conference on Machine Learning, pages 1675–1682.

Zhe, S., Xing, W., and Kirby, R. M. (2019). Scalable high-order Gaussian process regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2611–2620.

Zhu, Y., Zabaras, N., Koutsourelakis, P.-S., and Perdikaris, P. (2019). Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394:56–81.
Appendix

Appendix A Covariance Function Derivation
In this section, we show how to obtain our covariance function in (8) of the main paper. We leverage the fact that the Student-𝑡 density is a scale mixture of Gaussians with a Gamma prior over the inverse variance,

$$
p(x \mid \mu, a, b) = \int_0^\infty \mathcal{N}(x \mid \mu, \tau^{-1}) \, \text{Gam}(\tau \mid a, b) \, \mathrm{d}\tau = \frac{b^a}{\Gamma(a)} \left(\frac{1}{2\pi}\right)^{1/2} \left[b + \frac{(x-\mu)^2}{2}\right]^{-a-1/2} \Gamma(a + 1/2). \tag{17}
$$

The key to obtaining this result is the normalizer of the Gamma distribution: merging the terms of the Gaussian and the Gamma prior inside the integral yields another unnormalized Gamma distribution, so the integration w.r.t. 𝜏 gives rise to its normalizer.

If we set 𝜈 = 2𝑎 and 𝜆 = 𝑎/𝑏, we immediately obtain the standard Student-𝑡 density,

$$
\text{St}(x \mid \mu, \lambda, \nu) = \frac{\Gamma(\nu/2 + 1/2)}{\Gamma(\nu/2)} \left(\frac{\lambda}{\pi\nu}\right)^{1/2} \left[1 + \frac{\lambda (x-\mu)^2}{\nu}\right]^{-\nu/2 - 1/2}, \tag{18}
$$

where 𝜇 is the mean, 𝜆 is the precision (inverse variance) parameter, and 𝜈 is the degrees of freedom.
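Equations (17)-(18) can be verified numerically: integrating the Gaussian against the Gamma prior with quadrature reproduces a Student-𝑡 density with 𝜈 = 2𝑎 and 𝜆 = 𝑎/𝑏. A sketch using SciPy (the parameter values are arbitrary):

```python
import numpy as np
from scipy import integrate, stats

# Gaussian scale mixture: integrate N(x | mu, 1/tau) against Gam(tau | a, b).
a, b, mu, x = 3.0, 2.0, 0.5, 1.7

def integrand(tau):
    return (stats.norm.pdf(x, loc=mu, scale=tau ** -0.5)
            * stats.gamma.pdf(tau, a, scale=1.0 / b))  # shape a, rate b

mixture, _ = integrate.quad(integrand, 0.0, np.inf)

# Student-t density with nu = 2a and precision lam = a/b, i.e. scale = sqrt(b/a).
nu, lam = 2.0 * a, a / b
student = stats.t.pdf(x, df=nu, loc=mu, scale=lam ** -0.5)
# mixture and student agree to quadrature accuracy
```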
Next, we observe that the spectral density of a Matérn covariance function is a Student-𝑡 density (Rasmussen and Williams, 2006). Given the Matérn covariance

$$
\gamma_{\nu,\rho_q}(x, x') = \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{\sqrt{2\nu}\,|x - x'|}{\rho_q}\right)^{\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,|x - x'|}{\rho_q}\right), \tag{19}
$$

the spectral density is $\text{St}(s; 0, 4\pi^2\rho^2, 2\nu)$. That means,

$$
\gamma_{\nu,\rho}(\Delta) = \int_{-\infty}^{\infty} \text{St}(s; 0, 4\pi^2\rho^2, 2\nu) \exp\{i 2\pi s \Delta\} \, \mathrm{d}s, \tag{20}
$$

where $\Delta = |x - x'|$. From the scale-mixture form (17), we can set $\hat{a} = \nu$ and $\hat{b} = \hat{a}/(4\pi^2\rho^2)$, and obtain

$$
\text{St}(s; 0, 4\pi^2\rho^2, 2\nu) = \int_0^\infty \mathcal{N}(s \mid 0, \tau^{-1}) \, \text{Gam}(\tau \mid \hat{a}, \hat{b}) \, \mathrm{d}\tau. \tag{21}
$$

Substituting (21) into (20), we have

$$
\gamma_{\nu,\rho}(\Delta) = \int_0^\infty \text{Gam}(\tau \mid \hat{a}, \hat{b}) \int_{-\infty}^{\infty} \mathcal{N}(s \mid 0, \tau^{-1}) \exp\{i 2\pi s \Delta\} \, \mathrm{d}s \, \mathrm{d}\tau. \tag{22}
$$

Consider the inverse Fourier transform,

$$
\int_{-\infty}^{\infty} \text{St}(s; \mu, 4\pi^2\rho^2, 2\nu) \exp(i 2\pi \Delta s) \, \mathrm{d}s = \int_0^\infty \text{Gam}(\tau \mid \hat{a}, \hat{b}) \int_{-\infty}^{\infty} \mathcal{N}(s \mid \mu, \tau^{-1}) \exp(i 2\pi s \Delta) \, \mathrm{d}s \, \mathrm{d}\tau, \tag{23}
$$

we observe that

$$
\begin{aligned}
\mathbb{F}^{-1}[\mathcal{N}(s \mid \mu, \tau^{-1})] &= \int \mathcal{N}(s \mid \mu, \tau^{-1}) \exp(i 2\pi s \Delta) \, \mathrm{d}s \\
&= \exp(-2\pi^2 \tau^{-1} \Delta^2) \exp(i 2\pi \mu \Delta) \\
&= \mathbb{F}^{-1}[\mathcal{N}(s \mid 0, \tau^{-1})] \exp(i 2\pi \mu \Delta) \\
&= \int \mathcal{N}(s \mid 0, \tau^{-1}) \exp(i 2\pi s \Delta) \, \mathrm{d}s \cdot \exp(i 2\pi \mu \Delta),
\end{aligned} \tag{24}
$$

where $\mathbb{F}^{-1}$ is the inverse Fourier transform and $i$ is the imaginary unit. Note that when we set $\mu = 0$, from the second line we see $\mathbb{F}^{-1}[\mathcal{N}(s \mid 0, \tau^{-1})] = \exp(-2\pi^2 \tau^{-1} \Delta^2)$. That means the inverse transform of a shifted Gaussian simply factors out a Fourier basis with frequency $\mu$.
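The Gaussian Fourier pair used in the middle step of (24) can be checked numerically: by symmetry the imaginary part vanishes, so it suffices to integrate against the cosine (the values of 𝜏 and Δ below are arbitrary):

```python
import numpy as np
from scipy import integrate, stats

tau, delta = 2.5, 0.3

# Real part of int N(s | 0, 1/tau) exp(i 2 pi s Delta) ds.
val, _ = integrate.quad(
    lambda s: stats.norm.pdf(s, scale=tau ** -0.5) * np.cos(2.0 * np.pi * s * delta),
    -np.inf, np.inf)

# Closed form from eq. (24): exp(-2 pi^2 tau^{-1} Delta^2).
expected = np.exp(-2.0 * np.pi ** 2 * delta ** 2 / tau)
```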
Substituting (24) into (22), we obtain

$$
\begin{aligned}
\int_{-\infty}^{\infty} \text{St}(s; \mu, 4\pi^2\rho^2, 2\nu) \exp(i 2\pi \Delta s) \, \mathrm{d}s &= \int_0^\infty \text{Gam}(\tau \mid \hat{a}, \hat{b}) \int_{-\infty}^{\infty} \mathcal{N}(s \mid 0, \tau^{-1}) \exp(i 2\pi s \Delta) \, \mathrm{d}s \, \mathrm{d}\tau \cdot \exp(i 2\pi \mu \Delta) \\
&= \gamma_{\nu,\rho}(\Delta) \exp(i 2\pi \mu \Delta).
\end{aligned}
$$

Therefore, when we model the spectral density $S(s)$ as a mixture of Student-𝑡 distributions,

$$
S(s) = \sum_{q=1}^{Q} w_q \left( \text{St}(s; \mu_q, 4\pi^2\rho_q^2, 2\nu) + \text{St}(s; -\mu_q, 4\pi^2\rho_q^2, 2\nu) \right), \tag{25}
$$

it is straightforward to obtain the following covariance function,

$$
k_{\text{StM}}(x, x') = \sum_{q=1}^{Q} w_q \, \gamma_{\nu,\rho_q}(x, x') \cos(2\pi \mu_q (x - x')). \tag{26}
$$
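A direct sketch of the kernel in (26), built from a Matérn factor of the form (19); the component weights, frequencies, and length-scales below are placeholders, not tuned values:

```python
import numpy as np
from scipy.special import gamma as gamma_fn, kv

def matern(delta, nu, rho):
    # Matern correlation gamma_{nu, rho}(Delta) of eq. (19); equals 1 at Delta = 0.
    d = np.sqrt(2.0 * nu) * np.abs(delta) / rho
    d = np.where(d == 0.0, 1e-12, d)  # guard the removable singularity at 0
    return 2.0 ** (1.0 - nu) / gamma_fn(nu) * d ** nu * kv(nu, d)

def k_stm(x1, x2, w, mu, nu, rho):
    # Student-t mixture kernel, eq. (26):
    # k(x, x') = sum_q w_q * gamma_{nu, rho_q}(x - x') * cos(2 pi mu_q (x - x'))
    delta = x1 - x2
    return sum(w_q * matern(delta, nu, r_q) * np.cos(2.0 * np.pi * m_q * delta)
               for w_q, m_q, r_q in zip(w, mu, rho))

# Placeholder hyper-parameters: two components at frequencies 20 and 100.
k0 = k_stm(0.3, 0.3, w=[0.5, 0.5], mu=[20.0, 100.0], nu=1.5, rho=[1.0, 1.0])
# k0 is approximately w_1 + w_2 = 1, since both factors equal 1 at Delta = 0
```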
Appendix B Experimental Settings

The Poisson Equation. We considered 1D and 2D Poisson equations with different source functions that lead to various scale information in the solution, using Dirichlet boundary conditions:

$$
u_{xx} = f(x), \quad x \in [0, 2\pi], \qquad u_{xx} + u_{yy} = f(x, y), \quad (x, y) \in [0, 2\pi] \times [0, 2\pi]. \tag{27}
$$

For the 1D Poisson equation, we created source functions 𝑓 that give the following high-frequency and multi-frequency solutions: 𝑢1 = sin(100𝑥), 𝑢2 = sin(𝑥) + 0.1 sin(20𝑥) + 0.05 cos(100𝑥), 𝑢3 = sin(6𝑥) cos(100𝑥), and 𝑢4 = 𝑥 sin(200𝑥). In addition, we tested with a challenging hybrid solution that mixes a high frequency with a quadratic function, 𝑢5 = sin(500𝑥) − 2(𝑥 − 0.5)², where we set 𝑥 ∈ [0, 1]. For the 2D Poisson equation, we tested with the following multi-scale solutions: 𝑢6 = sin(100𝑥) sin(100𝑦) and 𝑢7 = sin(6𝑥) sin(20𝑥) + sin(6𝑦) sin(20𝑦).
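The sources 𝑓 above follow from the method of manufactured solutions: pick the desired 𝑢, then set 𝑓 = 𝑢ₓₓ so that 𝑢 solves the 1D Poisson equation exactly. A sketch for 𝑢2, using SymPy purely for illustration:

```python
import sympy as sp

# Manufactured source for the 1D Poisson equation u_xx = f(x):
# choose u_2 = sin(x) + 0.1 sin(20 x) + 0.05 cos(100 x), then f = u_xx.
x = sp.symbols('x')
u2 = sp.sin(x) + sp.Rational(1, 10) * sp.sin(20 * x) + sp.Rational(1, 20) * sp.cos(100 * x)
f = sp.diff(u2, x, 2)
# f = -sin(x) - 40*sin(20*x) - 500*cos(100*x)
```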
Allen-Cahn Equation. We considered 1D and 2D Allen-Cahn (nonlinear diffusion-reaction) equations with different source functions and Dirichlet boundary conditions:

$$
u_{xx} + u(u^2 - 1) = f(x), \quad x \in [0, 2\pi], \qquad u_{xx} + u_{yy} + u(u^2 - 1) = f(x, y), \quad (x, y) \in [0, 1] \times [0, 1]. \tag{28}
$$

For the 1D equation, we tested with solutions 𝑢1 = sin(100𝑥) and 𝑢2 = sin(6𝑥) cos(100𝑥). For the 2D equation, we created the source 𝑓 that gives the following mixed-scale solution: 𝑢 = (sin(𝑥) + 0.1 sin(20𝑥) + cos(100𝑥))(sin(𝑦) + 0.1 sin(20𝑦) + cos(100𝑦)).
4887
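The Allen-Cahn sources in Eq. (28) are built the same way, except the nonlinear term is included. A sketch for the 1D case with $u_1 = \sin(100x)$, where the exact source is $f(x) = -100^2\sin(100x) + \sin(100x)(\sin^2(100x) - 1)$, checked against a finite-difference residual:

```python
import numpy as np

# 1D Allen-Cahn: u_xx + u*(u**2 - 1) = f(x).  For u1 = sin(100 x),
# f(x) = -100**2*sin(100*x) + sin(100*x)*(sin(100*x)**2 - 1).
u = lambda x: np.sin(100 * x)
f = lambda x: -100**2 * u(x) + u(x) * (u(x)**2 - 1)

x = np.linspace(0.1, 2 * np.pi - 0.1, 7)
h = 1e-4
u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2
residual = u_xx + u(x) * (u(x)**2 - 1) - f(x)

# The residual is only finite-difference error, ~0.1 versus |f| ~ 1e4.
print(np.max(np.abs(residual)))
```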
Advection Equation. Third, we evaluated with a 1D advection (one-way) equation,

$$u_t + 200\,u_x = 0, \quad x \in [0, 2\pi], \quad t \in [0, 1]. \tag{29}$$

We used Dirichlet boundary conditions, and the solution has an analytical form, $u(x, t) = g(x - 200t)$, where $g(x)$ is the initial condition, which we chose as $g(x) = \sin(x)$.
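As a sanity check that the transported profile $u(x, t) = \sin(x - 200t)$ solves Eq. (29), the residual $u_t + 200 u_x$ can be evaluated with central differences; it vanishes up to discretization error:

```python
import numpy as np

# Advection solution: the initial profile g(x) = sin(x) is transported
# at speed 200, so u(x, t) = sin(x - 200 t).
u = lambda x, t: np.sin(x - 200 * t)

x = np.linspace(0.5, 5.5, 5)
t = np.linspace(0.1, 0.9, 5)
h = 1e-6
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)

# u_t + 200*u_x should be ~0; only finite-difference error remains.
print(np.max(np.abs(u_t + 200 * u_x)))
```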
Method Implementation. We implemented our method with JAX (Frostig et al., 2018), and all the competing ML-based solvers with PyTorch (Paszke et al., 2019). For all the kernels, we initialized the length-scale to 1. For the Matérn kernel (component), we chose $\nu = 5/2$. For our method, we set the number of components to $Q = 30$ and initialized each weight $w_q = 1/Q$. For the 1D Poisson and 1D Allen-Cahn equations, we varied the number of 1D mesh points over 400, 600, and 900. For 2D Poisson, 2D Allen-Cahn, and 1D advection, we varied the mesh over $200 \times 200$, $400 \times 400$, and $600 \times 600$. We chose an ending frequency $F$ from $\{20, 40, 100\}$ and initialized the frequencies $\mu_q$ with linspace(0, F, Q). We used ADAM for optimization, with the learning rate set to $10^{-2}$. The maximum number of iterations was set to 1M, and we stopped once the sum of the boundary loss and the residual loss fell below $10^{-6}$. The solution estimate $\mathcal{U}$ was initialized as zero. We set $\lambda_b = 500$. For W-PINN, we varied the weight of the residual loss over $\{10, 10^3, 10^4\}$. For Rowdy, we combined the tanh and sin activations, $\phi(x) = \tanh(x) + \sum_{k=2}^{K} n \sin((k-1)\,n\,x)$. Following the original Rowdy paper (Jagtap et al., 2022), we set the scaling factor $n = 10$ and varied $K$ over 3, 5, and 9. For the spectral method, we used 200 trigonometric bases, namely $\cos(nx)$ and $\sin(nx)$ for $n = 1, 2, \ldots, 100$; we used their tensor products for the 2D problems and 1D advection, and estimated the basis weights with least squares. To run RFF-PINN, we need to specify the number and scales of the Gaussian variances that construct the random features. To ensure broad coverage, we varied the number of variances over $\{1, 2, 3, 5\}$. For each number, we set the variances to the commonly used values suggested by the authors, $\{1, 20, 50, 100\}$, combined with randomly sampled ones. The detailed specification is given in Table 3; there are 20 settings in total, and we report the best RFF-PINN result across them. For all the PINN-based methods, we varied the number of collocation points over 10K and 12K.
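The component initialization described above can be sketched in a few lines (a paraphrase of the stated settings, not the authors' code; the variable names are ours):

```python
import numpy as np

Q = 30      # number of covariance components
F = 100     # ending frequency, chosen from {20, 40, 100}

w = np.full(Q, 1.0 / Q)        # component weights, initialized uniformly
mu = np.linspace(0.0, F, Q)    # component frequencies via linspace(0, F, Q)

print(mu[0], mu[-1])  # 0.0 100.0
```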
| Number | Scales |
| --- | --- |
| 1 | $1$, $20$, $50$, $100$, $\mathrm{rand}(1, [1, K])$ |
| 2 | $3 \times \mathrm{rand}(2, \{1, 20, 50, 100, \mathrm{rand}(1, [1, K])\})$, $2 \times \mathrm{rand}(2, [1, K])$ |
| 3 | $3 \times \mathrm{rand}(3, \{1, 20, 50, 100, \mathrm{rand}(1, [1, K])\})$, $2 \times \mathrm{rand}(3, [1, K])$ |
| 5 | $2 \times \{1, 20, 50, 100, \mathrm{rand}(1, [1, K])\}$, $3 \times \mathrm{rand}(5, [1, K])$ |

Table 3: The number and scales of the Gaussian variances used in RFF-PINN, where $\mathrm{rand}(k, \mathcal{A})$ means randomly selecting $k$ elements from the set $\mathcal{A}$ without replacement, $l \times$ means repeating the sampling to generate $l$ configurations, and $K$ is the maximum candidate frequency, for which we set $K = 200$.
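Table 3's sampling scheme can be reproduced as follows; each of the four rows contributes five scale configurations, giving the 20 settings in total. This is a sketch under our reading of the table (in particular, we draw the inner rand(1, [1, K]) once per row), not the authors' code:

```python
import random

K = 200                     # maximum candidate frequency
BASE = [1, 20, 50, 100]     # commonly used variance scales

def rand(k, pool):
    # randomly select k elements from pool without replacement
    return random.sample(list(pool), k)

def configurations(seed=0):
    random.seed(seed)
    configs = []
    # Number = 1: the four fixed scales, plus one random scale in [1, K]
    configs += [[s] for s in BASE] + [rand(1, range(1, K + 1))]
    # Number = 2 and 3: three draws from the augmented base set, two fully random
    for num in (2, 3):
        pool = BASE + rand(1, range(1, K + 1))
        configs += [rand(num, pool) for _ in range(3)]
        configs += [rand(num, range(1, K + 1)) for _ in range(2)]
    # Number = 5: two copies of the augmented base set, three fully random draws
    configs += [BASE + rand(1, range(1, K + 1)) for _ in range(2)]
    configs += [rand(5, range(1, K + 1)) for _ in range(3)]
    return configs

print(len(configurations()))  # 20
```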
Figure 5: Prediction for the 1D Poisson equation with solution $x\sin(200x)$.

Figure 6: Point-wise solution error for the 2D Poisson equation, where the solution is $u(x, y) = \sin(6x)\sin(20x) + \sin(6y)\sin(20y)$.

Figure 7: Point-wise solution error for the 1D advection equation, where the solution is $\sin(x - 200t)$.
| Method | $u_1$ | $u_2$ | $u_3$ | $u_4$ | $u_5$ | $u_6$ | $u_7$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PINN | 622 | 688 | 624 | 610 | 619 | 4,275 | 5,355 |
| RFF-PINN | 562 | 546 | 576 | 555 | 544 | 3,394 | 5,493 |
| Spectral method | 502 | 495 | 600 | 480 | 517 | 5,778 | 7,062 |
| Chebfun | 1.05 | 1.22 | 1.19 | 1.38 | 3.90 | N/A | N/A |
| Finite Difference | 1.25e-2 | 1.27e-2 | 1.22e-2 | 1.22e-2 | 1.22e-2 | N/A | N/A |
| GP-HM-GM | 536 | 1,858 | 775 | 703 | 3,510 | 4,173 | 5,561 |
| GP-HM-StM | 683 | 2,164 | 914 | 852 | 4,263 | 5,263 | 6,435 |

Table 4: Running time in seconds for solving the 1D ($u_1$–$u_5$) and 2D ($u_6$, $u_7$) Poisson equations, where the $u_j$'s are different high-frequency and multi-scale solutions: $u_1 = \sin(100x)$, $u_2 = \sin(x) + 0.1\sin(20x) + 0.05\cos(100x)$, $u_3 = \sin(6x)\cos(100x)$, $u_4 = x\sin(200x)$, $u_5 = \sin(500x) - 2(x - 0.5)^2$, $u_6 = \sin(100x)\sin(100y)$, and $u_7 = \sin(6x)\sin(20x) + \sin(6y)\sin(20y)$.
| Method | 1D Allen-Cahn ($u_1$) | 1D Allen-Cahn ($u_2$) | 2D Allen-Cahn | 1D Advection |
| --- | --- | --- | --- | --- |
| PINN | 509 | 828 | 2,509 | 2,496 |
| RFF-PINN | 1,227 | 1,172 | 4,421 | 2,495 |
| Spectral method | 504 | 552 | 3,840 | 2,188 |
| Chebfun | 6.57 | 6.0 | N/A | 1.39 |
| Finite Difference | 2.32e-1 | 2.36e-1 | 1,130 | 12.6 |
| GP-HM-StM | 735 | 2,291 | 7,447 | 2,574 |
| GP-HM-GM | 612 | 2,013 | 6,238 | 2,239 |

Table 5: Running time in seconds for solving the 1D and 2D Allen-Cahn equations and the 1D advection equation, where $u_1$ and $u_2$ are the two test solutions for 1D Allen-Cahn: $u_1 = \sin(100x)$ and $u_2 = \sin(6x)\cos(100x)$. The test solution for 2D Allen-Cahn is $(\sin(x) + 0.1\sin(20x) + \cos(100x))(\sin(y) + 0.1\sin(20y) + \cos(100y))$, and for the 1D advection equation it is $\sin(x - 200t)$.
Appendix C: More Results

C.1 Learning Behavior and Computational Efficiency

Figure 8: The learning curve. (a) 1D Poisson with solution $u_3$; (b) 2D Poisson with solution $u_7$.
We examined the training behavior of our method. As shown in Fig. 8, with the covariance based on the Student-$t$ mixture, GP-HM converges faster or behaves more robustly during training. Overall, in most cases, GP-HM with the Student-$t$ mixture covariance performs better than with the Gaussian mixture.
The computational efficiency of GP-HM is comparable to PINN-type approaches. For example, when solving the 1D Poisson and Allen-Cahn equations, the average per-iteration times of GP-HM (mesh 200), PINN, and RFF-PINN are 0.006, 0.004, and 0.004 seconds, respectively. For the 2D Poisson and Allen-Cahn equations and 1D advection, the average per-iteration time of GP-HM (mesh $200 \times 200$) is 0.022 seconds, while PINN and RFF-PINN (with two scales) took 0.006 and 0.02 seconds, respectively. We measured the running time on a Linux workstation with an NVIDIA GeForce RTX 3090 GPU. Thanks to the use of the grid structure and the product covariance, our GP solver can scale to a large number of collocation points without the need for additional low-rank approximations.
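The scaling benefit of a product covariance on a grid comes from a standard Kronecker identity: on an $n \times n$ grid the full $(n^2) \times (n^2)$ covariance factors as $K = K_x \otimes K_y$, so matrix-vector products need only the two $n \times n$ factors. A generic illustration of this identity (not the authors' implementation):

```python
import numpy as np

def rbf(x, lengthscale=1.0):
    # 1D squared-exponential kernel matrix
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

n = 30
x = np.linspace(0.0, 1.0, n)
Kx, Ky = rbf(x), rbf(x)

# For row-major flattening, (Kx kron Ky) @ vec(V) = vec(Kx @ V @ Ky.T):
# O(n^3) work and O(n^2) memory, versus O(n^4) for the explicit product.
V = np.random.default_rng(0).normal(size=(n, n))
fast = (Kx @ V @ Ky.T).reshape(-1)
full = np.kron(Kx, Ky) @ V.reshape(-1)

print(np.max(np.abs(fast - full)))  # agreement up to round-off
```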
We also report the total running time for every test case in Tables 4 and 5. The running time of GP-HM is comparable to PINN and RFF-PINN in most cases. However, the ML-based solvers are slower than the traditional methods. This might be because the ML solvers rely on optimization to find the solution approximation, while the numerical methods often use interpolation and fixed-point iterations, which are usually more efficient.
Figure 9: The solution error using different grid resolutions. (a) 1D Poisson with $u_3$; (b) 2D Poisson with $u_6$.

C.2 Influence of Collocation Point Quantity
We examined how the number of collocation points influences the solution accuracy. To this end, we tested with 1D and 2D Poisson equations whose solutions include high frequencies. In Fig. 9, we show the solution accuracy at different grid sizes (resolutions). In both PDEs, low resolutions give much worse accuracy, e.g., fewer than 200 points in 1D and a $200 \times 200$ grid in 2D. Decent performance is obtained only when the resolution is high enough, e.g., 300 in 1D and $400 \times 400$ in 2D. That means the number of collocation points must be large, particularly for 2D problems (e.g., 160K collocation points at the $400 \times 400$ resolution). However, it is extremely costly or practically infeasible for existing GP solvers to incorporate massive numbers of collocation points, due to the huge covariance matrix. Our GP solver (defined on a grid) and our computational method avoid computing the full covariance matrix and scale efficiently to high resolutions. These results demonstrate the importance and value of our model and computational method.
Appendix D: Limitation and Discussion

The learning of GP-HM can automatically prune useless frequencies while adjusting the $\mu_q$'s of the preserved components, namely those with nontrivial values of $w_q$, to align with the true frequencies in the solution. However, the selection and adjustment of the covariance components often require many iterations, on the order of tens of thousands; see Fig. 8(a). More interestingly, we found that first-order optimization approaches such as ADAM perform well, yet second-order optimization, which in theory converges much faster, such as L-BFGS, performs badly. This might be because component selection and adjustment is a challenging optimization task that can easily get stuck in poor local optima. To overcome this limitation, we plan to try alternative sparse prior distributions over the weights $w_q$, such as the horseshoe prior and the spike-and-slab prior, to accelerate the pruning and frequency learning. We also plan to try other optimization strategies, such as alternating updates of the component weights and frequencies, to see whether we can accelerate convergence and take advantage of second-order optimization algorithms.