Title: Autoregressive Transformer Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation

URL Source: https://arxiv.org/html/2009.05580
Di Luo
Department of Physics, University of Illinois at Urbana-Champaign, IL 61801, USA
IQUIST and Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign

Zhuo Chen
Department of Physics, University of Illinois at Urbana-Champaign, IL 61801, USA

Juan Carrasquilla
Vector Institute for Artificial Intelligence, MaRS Centre, Toronto, Ontario, Canada
Department of Physics and Astronomy, University of Waterloo, Ontario, N2L 3G1, Canada

Bryan K. Clark
Department of Physics, University of Illinois at Urbana-Champaign, IL 61801, USA
IQUIST and Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign
NCSA Center for Artificial Intelligence Innovation, University of Illinois at Urbana-Champaign

Abstract
The theory of open quantum systems lays the foundations for a substantial part of modern research in quantum science and engineering. Rooted in the dimensionality of their extended Hilbert spaces, the high computational complexity of simulating open quantum systems calls for the development of strategies to approximate their dynamics. In this paper, we present an approach for tackling open quantum system dynamics. Using an exact probabilistic formulation of quantum physics based on positive operator-valued measure (POVM), we compactly represent quantum states with autoregressive transformer neural networks; such networks bring significant algorithmic flexibility due to efficient exact sampling and tractable density. We further introduce the concept of String States to partially restore the symmetry of the autoregressive transformer neural network and improve the description of local correlations. Efficient algorithms have been developed to simulate the dynamics of the Liouvillian superoperator using a forward-backward trapezoid method and find the steady state via a variational formulation. Our approach is benchmarked on prototypical one and two-dimensional systems, finding results which closely track the exact solution and achieve higher accuracy than alternative approaches based on using Markov chain Monte Carlo to sample restricted Boltzmann machines. Our work provides general methods for understanding quantum dynamics in various contexts, as well as techniques for solving high-dimensional probabilistic differential equations in classical setups.
Introduction. While the universe itself is a closed quantum system, all other systems within the universe are open quantum systems coupled to the environment around them. Open quantum systems (OQS) play a crucial role in fundamental quantum science, quantum control and quantum engineering Verstraete et al. (2009); Barreiro et al. (2011). In recent years, there has been significant interest both theoretically and experimentally in better understanding open quantum systems Sieberer et al. (2016); Maghrebi and Gorshkov (2016); Mascarenhas et al. (2015); Cui et al. (2015); Jaschke et al. (2018); Werner et al. (2016); Biella et al. (2018); Jin et al. (2016); Finazzi et al. (2015); Rota et al. (2017, 2019); Casteels et al. (2018); Nagy and Savona (2018); Shammah et al. (2018); Vicentini et al. (2019a); Kshetrimayum et al. (2017); Carusotto and Ciuti (2013); Hartmann (2016); Noh and Angelakis (2016); Bartolo et al. (2016); Biella et al. (2017); Biondi et al. (2017); Carmichael (2015); Casteels et al. (2017, 2016); Fink et al. (2017, 2018); Fitzpatrick et al. (2017); Foss-Feig et al. (2017); Kessler et al. (2012); Marino and Diehl (2016); Savona (2017); Sieberer et al. (2013); Vicentini et al. (2018); Jin et al. (2016); Lee et al. (2013); Rota et al. (2018, 2017); Casteels et al. (2018). In the field of quantum engineering, coupling to the environment generates decoherence, driving the destruction of entanglement within quantum devices. Quantum computers rely on the qubit-environment coupling to apply quantum gates, while trying to minimize unwanted coupling to mitigate errors on the qubits Blais et al. (2020).
Unlike closed quantum states, which can be represented by a wavefunction, the density matrix $\rho$ becomes the core object of study in open quantum systems. A typical model of an OQS evolves the density matrix under both the Hamiltonian $H$ as well as a series of dissipative operators which transfer energy and information out to a featureless bath, leading to the Lindblad equation,

$$\dot{\rho} = \mathcal{L}\rho = -i[H,\rho] + \sum_k \frac{\gamma_k}{2}\left(2\,\Gamma_k \rho\, \Gamma_k^\dagger - \{\rho,\, \Gamma_k^\dagger \Gamma_k\}\right), \qquad (1)$$

where $\gamma_k$ are the dissipation rates associated with jump operators $\Gamma_k$. Although there is hope that quantum algorithms Yoshioka et al. (2019); Liu et al. (2020); Lee et al. (2020); Ramusat and Savona (2020); Liu et al. (2021) may eventually overcome the simulation bottlenecks in OQS, a direct solution to the Lindblad equation is difficult because the Hilbert space grows exponentially with the number of particles, making classical simulations largely intractable. To deal with this curse of dimensionality, OQS have historically been studied with renormalization group approaches Finazzi et al. (2015); Rota et al. (2017, 2019), mean field methods Biella et al. (2018); Jin et al. (2016); Scarlatella et al. (2020), or simulated with tensor networks Mascarenhas et al. (2015); Verstraete et al. (2004); Zwolak and Vidal (2004); Cui et al. (2015); Jaschke et al. (2018); Werner et al. (2016); Kshetrimayum et al. (2017), which compress the density matrix. Unfortunately, while tensor networks have proved fruitful in one dimension, their use for OQS in higher dimensions has been severely limited. Recently, inspired by the advances in the description of many-body systems in terms of neural network wavefunctions Lagaris et al. (1997); Carleo and Troyer (2017); Luo and Clark (2019); Pfau et al. (2019); Hermann et al. (2019); Hibat-Allah et al. (2020); Sharir et al. (2020); Lu et al. (2019); Gao and Duan (2017); Glasser et al. (2018), ideas from machine learning have been applied to OQS, studying real-time dynamics in one dimension (1-D), steady states in one and two dimensions (2-D) Vicentini et al. (2019b); Yoshioka and Hamazaki (2019); Hartmann and Carleo (2019); Nagy and Savona (2019), and determining the Liouvillian gap Yuan et al. (2020) by representing the density matrix as a restricted Boltzmann machine (RBM) Torlai and Melko (2018).
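As a concrete reference point, the Lindblad right-hand side of Eq. (1) can be evaluated directly for a single qubit with ordinary dense matrices. The sketch below uses hypothetical parameter values ($g = \gamma = 1$ and a spin-lowering jump operator) and checks that the generator is trace-preserving, the property the probabilistic reformulation later in the paper must inherit.

```python
import numpy as np

# Pauli matrices and the lowering (jump) operator for a single qubit
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sm = 0.5 * (sx - 1j * sy)          # jump operator Γ = σ⁻

def lindblad_rhs(rho, H, gammas, jumps):
    """Eq. (1): ρ̇ = -i[H,ρ] + Σ_k γ_k/2 (2 Γ_k ρ Γ_k† - {ρ, Γ_k† Γ_k})."""
    drho = -1j * (H @ rho - rho @ H)
    for g, L in zip(gammas, jumps):
        LdL = L.conj().T @ L
        drho += 0.5 * g * (2 * L @ rho @ L.conj().T - rho @ LdL - LdL @ rho)
    return drho

H = 0.5 * sx                                       # transverse field, g = 1 (illustrative)
rho = np.array([[1, 0], [0, 0]], dtype=complex)    # |↑⟩⟨↑|
drho = lindblad_rhs(rho, H, gammas=[1.0], jumps=[sm])
print(abs(np.trace(drho)))                          # trace preserved: ≈ 0
```

The same Hermitian, trace-free structure of $\dot\rho$ is what guarantees that the mapped probability distribution introduced below stays normalized under time evolution.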
Here, we outline an alternative approach to using machine learning ideas to simulate the Lindblad equation. Many machine learning architectures and generative models (such as the RBM) have fundamentally been designed to represent probability distributions (e.g. probability distributions over images on the internet), making them inadequate to store quantum states, which are complex valued in general. To overcome this, novel approaches have been devised such as using complex weights within RBM; despite these innovative ideas, effectively representing states with signs has been a key bottleneck in this field Ferrari et al. (2019); Westerhout et al. (2020); Szabó and Castelnovo (2020).

This motivation has inspired us to utilize the recent developments in the probabilistic formulation of quantum mechanics Lundeen et al. (2009); Carrasquilla et al. (2019a, b); Kiktenko et al. (2020) to simulate the Lindblad equation. In this formulation the state is mapped to a probability distribution which we represent compactly using the Transformer Vaswani et al. (2017)—a machine learning architecture from which one can efficiently sample the probability distribution exactly. Using this, we develop efficient algorithms both to update the state of the Transformer under dynamic evolution and to find the Transformer which represents the steady state of the Lindblad equation. To perform the dynamic evolution, we combine the second-order forward-backward trapezoid method Iserles (2008) with stochastic optimization on the Transformer. Since the Transformer does not naively preserve the symmetry of the true dynamic (or fixed-point) state, we further improve upon our results by developing an additional ansatz—string states—which explicitly restores some of these symmetries. We proceed to benchmark this work on a series of one- and two-dimensional systems.
Lindblad Equation as a Probability Equation. The general objective of this paper is to develop an approach to solve for the dynamics and fixed point of the density matrix $\rho$ in the Lindblad equation (Eq. 1). We test this approach on two models—the transverse-field Ising model (TFIM), where

$$H = \frac{V}{4}\sum_{\langle i,j\rangle} \sigma_i^{(z)}\sigma_j^{(z)} + \frac{g}{2}\sum_k \sigma_k^{(x)},$$

and the Heisenberg model, where

$$H = \sum_{\langle i,j\rangle}\sum_{w=x,y,z} J_w\, \sigma_i^{(w)}\sigma_j^{(w)} + B\sum_k \sigma_k^{(z)}.$$

In both cases, $\Gamma_k = \sigma_k^{(-)} = \frac{1}{2}\left(\sigma_k^{(x)} - i\,\sigma_k^{(y)}\right)$. We are interested in the expectation values of local observables given by the Pauli matrices averaged over all qubits, i.e. for a system with $n$ qubits, we consider $\langle\sigma_w\rangle = \frac{1}{n}\sum_i \langle\sigma_i^{(w)}\rangle$ for $w = x, y, z$.

Typically, the density matrix $\rho$ is represented (explicitly or implicitly) in an orthogonal basis. In this work, we instead represent $\rho$ in the POVM formalism. Given an informationally complete POVM (IC-POVM), a density matrix $\rho$ of a spin-$1/2$ system can be uniquely mapped to a probability distribution $p(\mathbf{a})$, where $\mathbf{a}$ spans over all $4^n$ measurement outcomes in the POVM basis. An IC-POVM is defined by a collection of positive semi-definite operators $\{M(\mathbf{a})\}$ called the frame, which specifies the probability distribution $p(\mathbf{a}) = \mathrm{Tr}\left(\rho\, M(\mathbf{a})\right)$. The inverse transformation is given by $\rho = \sum_{\mathbf{b}} p(\mathbf{b})\, N(\mathbf{b})$, where the dual-frame $\{N(\mathbf{b})\}$ can be computed from the frame as $N(\mathbf{b}) = \sum_{\mathbf{a}} M(\mathbf{a})\, T^{-1}_{\mathbf{a}\mathbf{b}}$. The elements of the overlap matrix $T$ are given by $T_{\mathbf{a}\mathbf{b}} = \mathrm{Tr}\left(M(\mathbf{a})\, M(\mathbf{b})\right)$, and $T^{-1}_{\mathbf{a}\mathbf{b}}$ represent the elements of the inverse overlap matrix $T^{-1}$. Thus, we can re-express the Lindblad equation as

$$\dot{p}(\mathbf{a}) = \sum_{\mathbf{b}} p(\mathbf{b})\, L_{\mathbf{a}\mathbf{b}} = \sum_{\mathbf{b}} p(\mathbf{b})\left(A_{\mathbf{a}\mathbf{b}} + B_{\mathbf{a}\mathbf{b}}\right), \qquad (2)$$

with

$$A_{\mathbf{a}\mathbf{b}} = -i\,\mathrm{Tr}\left(H\,[N(\mathbf{b}),\, M(\mathbf{a})]\right); \quad B_{\mathbf{a}\mathbf{b}} = \sum_k \frac{\gamma_k}{2}\,\mathrm{Tr}\left(2\,\Gamma_k N(\mathbf{b}) \Gamma_k^\dagger M(\mathbf{a}) - \Gamma_k^\dagger \Gamma_k \{N(\mathbf{b}),\, M(\mathbf{a})\}\right). \qquad (3)$$

We work with an IC-POVM where the frame and dual-frame are constructed from local frames acting on single spins as $\{M(\mathbf{a})\} = \{M(a_1) \otimes M(a_2) \otimes M(a_3) \otimes \cdots\}$ and $\{N(\mathbf{b})\} = \{N(b_1) \otimes N(b_2) \otimes N(b_3) \otimes \cdots\}$, with four outcomes per spin $a_i$. This allows us to write $p(\mathbf{a}) = p(a_1, a_2, a_3, \ldots)$. The expectation value of observables is given by $\langle O \rangle = \sum_{\mathbf{b}} p(\mathbf{b})\, \mathrm{Tr}(O\, N_{\mathbf{b}}) \approx \frac{1}{N_s} \sum_{\mathbf{b} \sim p}^{N_s} \mathrm{Tr}(O\, N_{\mathbf{b}})$, where $N_s$ is the number of samples $\mathbf{b}$ drawn from the distribution $p(\mathbf{b})$ used to estimate $\langle O \rangle$. We emphasize that a complete specification of the probability distribution $p(\mathbf{b})$ requires $4^n$ probability values for an $n$-site system.
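The frame, overlap matrix, and dual frame above can be made concrete for a single spin. The sketch below builds a tetrahedral IC-POVM from one standard choice of tetrahedron directions (an assumption; the paper's exact conventions are in its Supplementary Material Sec. II) and verifies both the forward map $p(a) = \mathrm{Tr}(\rho M(a))$ and the inverse map $\rho = \sum_b p(b) N(b)$.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Tetrahedral IC-POVM: M(a) = (1/4)(I + s_a · σ), one standard set of directions
s = np.array([[0, 0, 1],
              [2 * np.sqrt(2) / 3, 0, -1 / 3],
              [-np.sqrt(2) / 3,  np.sqrt(2 / 3), -1 / 3],
              [-np.sqrt(2) / 3, -np.sqrt(2 / 3), -1 / 3]])
M = np.array([(I2 + v[0] * sx + v[1] * sy + v[2] * sz) / 4 for v in s])

# Overlap matrix T_ab = Tr(M(a) M(b)) and dual frame N(b) = Σ_a M(a) T⁻¹_ab
T = np.real(np.einsum('aij,bji->ab', M, M))
N = np.einsum('aij,ab->bij', M, np.linalg.inv(T))

# Forward map: p(a) = Tr(ρ M(a)) for an arbitrary valid single-qubit ρ
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
p = np.real(np.array([np.trace(rho @ Ma) for Ma in M]))
print(p.sum())                        # probabilities sum to 1 (Σ_a M(a) = I)

# Inverse map: ρ = Σ_b p(b) N(b) recovers the density matrix exactly
rho_rec = np.einsum('b,bij->ij', p, N)
print(np.allclose(rho_rec, rho))      # True
```

For $n$ spins the frame is the tensor product of these local frames, so the same construction extends site by site without ever forming the $4^n \times 4^n$ overlap matrix.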
Autoregressive Models and String States. We have chosen to model the probability distribution in a compact way with an autoregressive neural network, where the probability of a given configuration $\mathbf{a}$ is expressed through its conditional probabilities $p_\theta(\mathbf{a}) = \prod_k p_\theta(a_k \,|\, a_1, a_2, \ldots, a_{k-1})$. This representation allows for exact sampling of a configuration from the space of probability distributions without invoking Markov chain Monte Carlo techniques. Modern incarnations of autoregressive models include, among others, recurrent neural networks (RNN) Hochreiter and Schmidhuber (1997); Cho et al. (2014), pixel convolutional neural networks (PixelCNN) van den Oord et al. (2016), and Transformers Vaswani et al. (2017). Recent work has effectively applied these models to quantum systems Carrasquilla et al. (2019a); Hibat-Allah et al. (2020); Sharir et al. (2020); Carrasquilla et al. (2019b); Cha et al. (2021). Here, we use an autoregressive Transformer, which follows the same architecture as the model in Carrasquilla et al. (2019b). The Transformer has two hyper-parameters: the number of transformer layers stacked on each other, $n_l$, and the hidden dimension, $n_d$, which we adjusted for different tests.
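The chain-rule factorization above is what makes exact (ancestral) sampling possible: each outcome is drawn from its conditional in order, and the product of the conditionals is the exact, tractable density of the drawn configuration. A minimal sketch, with a toy conditional standing in for the Transformer:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4   # four POVM outcomes per site

def conditional(prefix):
    """Toy stand-in for the Transformer's p(a_k | a_1..a_{k-1}): any function
    returning a normalized distribution over the K outcomes works here."""
    logits = np.array([0.1 * (len(prefix) + 1), 0.0, 0.2, -0.1]) + 0.05 * sum(prefix)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample(n_sites):
    """Exact ancestral sampling: no Markov chain, no burn-in, no autocorrelation.
    Returns one configuration together with its exact probability."""
    config, prob = [], 1.0
    for _ in range(n_sites):
        q = conditional(config)
        a = int(rng.choice(K, p=q))
        config.append(a)
        prob *= q[a]
    return config, prob

config, prob = sample(6)
print(config, prob)   # one exact sample and its exact, tractable density
```

This exactness is what the cost function below relies on: both the sampling distribution and the density it assigns to each sample are available in one forward pass.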
Figure 1: Strings used for mapping the 1-D Transformer to 2-D quantum systems; panels (a)-(i) show Strings 0-8. String 0 is the default mapping (which we refer to as no strings). We always refer to the first (in order) $n$ strings (excluding string 0) when we say we used $n$ strings.
Since our Transformer gives ‘ordered’ measurement outcomes, when we simulate two-dimensional systems we need to choose a linear ordering of our two-dimensional sites (i.e. a string of sites). We consider two different single-string orderings (string 0 and string 1 from Fig. 1(a)). These strings explicitly break a symmetry of our system, which then would need to be restored (to the degree to which the model has the variational freedom to do so) by the Transformer itself. We can partially (or completely) restore this symmetry explicitly by choosing our ansatz to be a mixture of distributions defined over multiple different symmetry-related strings, i.e. $p_\theta(\mathbf{a}) = \sum_{\mathcal{S}} p_\theta(\mathbf{a} \,|\, \mathcal{S})\, p(\mathcal{S})$, where $p(\mathcal{S}) = 1/N_{\mathrm{string}}$ for a total number of $N_{\mathrm{string}}$ strings; we call this refined ansatz a String state. This linear combination of the Transformer probabilities can be interpreted as a mixture model Hastie et al. (2001) and bears some resemblance to string bond states Schuch et al. (2008). Restoring symmetries explicitly has proved useful in variational calculations of quantum states Mahajan and Sharma (2019); Tahara and Imada (2008); Qiu et al. (2017); Hibat-Allah et al. (2020); Ferrari et al. (2019). Given a set of strings and a configuration $\mathbf{a}$, we can compute $p(\mathbf{a})$ explicitly. Sampling an $\mathbf{a}$ from $p_\theta(\mathbf{a})$ is also straightforward because of linearity and the fact that each term in our average is positive. To do so, we first sample an ordered $\{a_1, a_2, \ldots, a_{k-1}\}$ from the Transformer and then randomly choose a string to map these ordered values to get the final configuration. Here, we test a subset of strings $1$-$k$ for different $k$ (see Fig. 1(b)-1(i)).
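The mixture construction and its two-step sampling procedure can be sketched on a toy 2x2 lattice. The conditional model and the two orderings below are illustrative stand-ins, not the paper's architecture or string definitions; the point is that evaluating the mixture density and sampling from it both remain exact.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 4, 4            # 2x2 lattice (sites 0..3), four POVM outcomes per site

# Each "string" is an ordering (permutation) of the lattice sites; illustrative only
strings = [np.array([0, 1, 2, 3]),
           np.array([0, 2, 1, 3])]

def cond(prefix):
    """Toy conditional p(a_k | a_1..a_{k-1}) standing in for the Transformer."""
    q = np.full(K, 1.0)
    q[len(prefix) % K] += 0.5
    return q / q.sum()

def p_theta(ordered):
    return np.prod([cond(ordered[:k])[a] for k, a in enumerate(ordered)])

def p_string_state(config):
    """Mixture: p(a) = (1/N_string) Σ_S p_θ(a|S), reading the configuration
    in each string's site order."""
    return np.mean([p_theta(list(np.asarray(config)[s])) for s in strings])

def sample_string_state():
    """Draw ordered values from the autoregressive model, then place them on
    the lattice via a uniformly chosen string (valid since weights are 1/N_string)."""
    ordered = []
    for _ in range(n):
        ordered.append(int(rng.choice(K, p=cond(ordered))))
    s = strings[rng.integers(len(strings))]
    config = np.empty(n, dtype=int)
    config[s] = ordered          # k-th drawn value goes to site s[k]
    return config
```

Because each string's conditional distribution is separately normalized, the mixture sums to one over all $K^n$ configurations; the string states add symmetry at no cost to tractability.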
Optimization and Results. Eq. 2 gives a prescription for applying time evolution to the density matrix by time-evolving the POVM probability distribution. To solve for the time-evolved distribution, we discretize time and use a second-order forward-backward trapezoid method Iserles (2008). We designed the following objective function

$$\mathcal{C} = \frac{1}{N_s} \sum_{\mathbf{a} \sim p_\theta^{(t+2\tau)}}^{N_s} \frac{1}{p_\theta^{(t+2\tau)}(\mathbf{a})} \left| \sum_{\mathbf{b}} \left[ p_\theta^{(t+2\tau)}(\mathbf{b}) \left( \delta_{\mathbf{a}\mathbf{b}} - \tau L_{\mathbf{a}\mathbf{b}} \right) - p_\theta^{(t)}(\mathbf{b}) \left( \delta_{\mathbf{a}\mathbf{b}} + \tau L_{\mathbf{a}\mathbf{b}} \right) \right] \right|, \qquad (4)$$
where $N_s$ is the number of samples, $\delta_{\mathbf{a}\mathbf{b}}$ is the Kronecker delta function, the sum over $\mathbf{a}$ is sampled stochastically from $p_\theta^{(t+2\tau)}$, the sum over $\mathbf{b}$ can be evaluated efficiently as explained in Supplementary Material Luo et al. (2020) Sec. IX, and the gradient of the objective function $\mathcal{C}$ with respect to the parameters in $p_\theta^{(t+2\tau)}(\mathbf{b})$ is computed using PyTorch's Paszke et al. (2019) automatic differentiation. To optimize the objective function we use Adam Kingma and Ba (2014). In the limit where $\mathcal{C}$ is zero, we get exact time evolution up to the discretization error induced by the trapezoid rule. More typically, it will be impossible for the Transformer to exactly represent the time-evolved state; instead, by minimizing $\mathcal{C}$ the optimization continuously projects onto a nearby state in the manifold of distributions represented by our Transformer. This can be viewed as a higher order generalization of IT-SWO Kochkov and Clark (2018) and the method in Ref. Gutiérrez and Mendl, 2019, but here applied instead to a probability distribution. The dominant source of error in performing our dynamics comes from the limited set of states that the Transformer can represent. Additionally, it is possible that even within this manifold of states, one may not reach the optimal value if there are optimization issues such as local minima. Over multiple time steps, errors will naturally accumulate due to the unitary dynamics of the system and be suppressed by the dissipative operators, which should drive all dynamics to a fixed point.
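On a small enough system the trapezoid relation underlying Eq. (4) can be checked exactly: solving $(I - \tau L)\,p^{(t+2\tau)} = (I + \tau L)\,p^{(t)}$ for an explicit probability vector shows that the objective's minimum is zero and that the step preserves normalization. The generator below is a toy classical one (an assumption), and the full sum over $\mathbf{a}$ replaces the paper's importance-weighted stochastic sampling.

```python
import numpy as np

rng = np.random.default_rng(2)
d, tau = 8, 0.01

# Toy generator L with columns summing to zero, so ṗ = L p conserves probability
L = rng.random((d, d))
np.fill_diagonal(L, 0.0)
L -= np.diag(L.sum(axis=0))

p_old = rng.random(d)
p_old /= p_old.sum()

# Forward-backward trapezoid step over 2τ: (I - τL) p_new = (I + τL) p_old
I = np.eye(d)
p_new = np.linalg.solve(I - tau * L, (I + tau * L) @ p_old)

# Eq. (4) is the ℓ1 residual of that linear relation; for the exact solution it
# vanishes, and the variational Transformer can only approach this minimum
C = np.abs((I - tau * L) @ p_new - (I + tau * L) @ p_old).sum()
print(C)            # ≈ 0 up to round-off
print(p_new.sum())  # ≈ 1: the implicit step preserves normalization
```

In the paper the vector $p^{(t+2\tau)}$ is not solved for linearly but parametrized by the Transformer, so minimizing $\mathcal{C}$ projects the exact step onto the model's manifold of distributions.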
Figure 2: The expectation value $\langle\sigma_z\rangle$ as a function of time (a) for the 1-D Heisenberg model with $B = \gamma$, $J_x = 2\gamma$, $J_y = 0$, and $J_z = \gamma$ using a time step $\tau = 0.005\,\gamma^{-1}$. The initial state is the product state $\bigotimes_{i=1}^{N} |\sigma_y{=}-1\rangle$. (b) For the $3 \times 3$ Heisenberg model with $B = 0$, $J_x = 0.9\gamma$, $J_y = 1.0\gamma$, $1.8\gamma$, and $J_z = \gamma$ using a time step $\tau = 0.008\,\gamma^{-1}$. The initial state is the product state $\bigotimes_{i=1}^{N} |\sigma_z{=}1\rangle$. Both models use periodic boundary conditions. Exact curves are produced using QuTiP Johansson et al. (2013, 2012). The Transformer has one encoder layer and 32 hidden dimensions, and is trained using a forward-backward trapezoid method with a sample size $N_s = 12000$.
We test this dynamic evolution on the 1-D and a 2-D Heisenberg model (see Fig. 2) using the tetrahedral POVM basis (see Supplementary Material Luo et al. (2020) Sec. II), where we find that the dynamics matches closely to the exact result. We capture both the qualitative behavior (i.e. the peaks and oscillation of the observables) as well as their quantitative values. The values are especially accurate in both the limit of small and large time. In our results, we have simulated one-dimensional chains up to $N = 40$ and two-dimensional systems on $3 \times 3$ lattices.
One approach to finding the fixed point of the Liouvillian superoperator $\mathcal{L}$ is through a sufficiently long time-evolution (for an example see the large time limit of Fig. 2). Interestingly, our approximate time evolution fluctuates around a fixed value of the observable, though it may not reach a true fixed point (i.e. $p_\theta^{(t+2\tau)} = p_\theta^{(t)}$) even in the limit of small $\tau$ (see Supplementary Material Luo et al. (2020) Sec. VII).
1053
+
1054
Figure 3: Variational steady-state solution for a 16-site TFIM chain with periodic boundary condition and $V=2\gamma$ (orange dots). The initial state is the product state $\bigotimes_{i=1}^{N}|{\uparrow}\rangle$ ($\sigma^{z}=1$). The Transformer has one encoder layer and 32 hidden dimensions, and is trained using Adam Kingma and Ba (2014) in 500 iterations with $N_{s}=12000$. Green points are the fixed-point solution representing the density matrix as an RBM; both the exact curve (black line) and the density-matrix results are digitized from Ref. Vicentini et al., 2019b.
1082
Figure 4: Steady-state solutions for the $3\times 3$ Heisenberg model with periodic boundary condition with $B=0$, $J_{x}=0.9\gamma$, and $J_{z}=\gamma$. The exact curves (black lines) are produced using QuTiP Johansson et al. (2013, 2012). (a) $\langle\sigma^{z}\rangle$ for different values of $J_{y}$ for POVM variational results (var), POVM dynamics (dyn), and POVM dynamics starting from the variational results (var+dyn). The two integers in the legend label are the number of transformer layers and hidden dimensions. (b) Steady-state solution at $J_{y}=1.8\gamma$ comparing different variational ansatzes. “0s” and “1s” use one string (String 0 and String 1); “2s”, “4s”, and “8s” use Strings 1-2, 1-4, and 1-8, respectively (see Fig. 1(i)). All initial states are $\bigotimes_{i=1}^{N}|{\uparrow}\rangle$ ($\sigma^{z}=1$). The dynamics and variational-plus-dynamics approaches use the time step $\tau=0.008\,\gamma^{-1}$. The results with two transformer layers are computed exactly under all POVM frame elements.
1142
+
1143
Alternatively, we can search for the fixed point by direct minimization of the $L_{1}$-norm of $\dot{p}_{\theta}$, giving

$$\begin{aligned}\left\lVert \dot{p}_{\theta}\right\rVert_{1} &= \sum_{\boldsymbol{a}}\Big|\sum_{\boldsymbol{b}} p_{\theta}(\boldsymbol{b})\, L_{\boldsymbol{a}\boldsymbol{b}}\Big| \\ &\approx \frac{1}{N_{s}}\sum_{\boldsymbol{a}\sim p_{\theta}}^{N_{s}}\Big|\sum_{\boldsymbol{b}} p_{\theta}(\boldsymbol{b})\, L_{\boldsymbol{a}\boldsymbol{b}}\Big|\Big/\, p_{\theta}(\boldsymbol{a}),\end{aligned} \tag{5}$$
1211
+
1212
where the second line offers a stochastic approach to evaluate $\lVert \dot{p}_{\theta}\rVert_{1}$ by sampling $\boldsymbol{a}$ from $p_{\theta}(\boldsymbol{a})$. The gradient in Eq. S30 is taken with respect to the parameters in $p_{\theta}(\boldsymbol{b})$ using PyTorch’s Paszke et al. (2019) automatic differentiation. Notice that because the gradients of Eq. S23 and Eq. S30 (see Supplementary Material Luo et al. (2020) Sec. VII) are different (except in the limit where the manifold of states representable by the Transformer spans the full space), they will converge to different answers.
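The stochastic estimator in the second line of Eq. 5 can be sketched numerically. The following is a minimal toy example, not the paper's implementation: the 4-outcome probability vector `p` and the random generator-style matrix `L` are invented stand-ins for the Transformer distribution $p_{\theta}$ and the POVM Liouvillian $L_{\boldsymbol{a}\boldsymbol{b}}$.

```python
import numpy as np

def l1_norm_pdot(L, p, n_samples, rng):
    """Estimate ||p_dot||_1 = sum_a |sum_b p(b) L[a, b]| as in Eq. 5.

    Here `p` is an explicit probability vector: like the autoregressive
    Transformer, it allows both exact sampling and exact evaluation of
    the probability of each sample.
    """
    pdot = L @ p                                  # sum_b L[a, b] p(b) for every a
    exact = np.abs(pdot).sum()                    # first line of Eq. 5
    a = rng.choice(len(p), size=n_samples, p=p)   # draw a ~ p_theta
    estimate = np.mean(np.abs(pdot[a]) / p[a])    # second line of Eq. 5
    return exact, estimate

# Toy generator: shift each column to sum to zero so total probability is
# conserved, mimicking the column-sum property of the POVM Liouvillian L_ab.
rng = np.random.default_rng(1)
L = rng.normal(size=(4, 4))
L -= L.mean(axis=0, keepdims=True)
p = np.array([0.4, 0.3, 0.2, 0.1])

exact, estimate = l1_norm_pdot(L, p, n_samples=20000, rng=rng)
```

With enough samples the estimate converges to the exact value, since the importance weight $1/p_{\theta}(\boldsymbol{a})$ exactly cancels the sampling distribution.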
1236
+
1237
In Fig. 3, we consider the one-dimensional TFIM with the 4-Pauli POVM basis (see Supplementary Material Luo et al. (2020) Sec. II) and compute the expectation values of all three Pauli matrices at various values of $\gamma$. We find strong agreement with the exact method. In addition, we find that this approach performs particularly well in the regime $1<g/\gamma<2.5$, which has proven particularly challenging for the RBM method Vicentini et al. (2019b). We can further improve the performance by averaging over multiple simulations (see Supplementary Material Luo et al. (2020) Sec. V). In Fig. 4, we consider optimizing a $3\times 3$ Heisenberg model using Eq. S30 with various variational ansatzes (here we use the tetrahedral POVM basis (see Supplementary Material Luo et al. (2020) Sec. II)). Looking at the quality of $\langle\sigma^{z}\rangle$, we find that increasing the size of the Transformer, both in depth and in hidden dimension, improves the result, although this improvement is marginal until we reach two transformer layers and a hidden dimension of 64. Interestingly, we find that the use of strings has a significant effect on our results (see Fig. 1(i)). To begin with, String 1 is marginally superior to String 0. We expect this is because String 1 better addresses local correlations. More importantly, we find that there is a significant improvement (for any Transformer) from including more symmetry-related strings, out to the maximum of eight strings we considered. In fact, eight strings with one hidden layer and a hidden dimension of 32 provide a similar accuracy to one string with two hidden layers and a hidden dimension of 64. Additionally, we compared the results obtained through time evolution at long time to the fixed-point method and found that the steady state approached by the time-evolved state provides significantly more accurate results. While the evaluation of the dynamics is computationally slower, we find that supplementing the fixed-point method with further dynamical evolution achieves the same steady-state solution as the dynamical approach at an overall reduced computational time.
1257
+
1258
Conclusion. We have demonstrated an approach, whose run-time complexity per iteration step is polynomial in the system size and the hidden dimensions, to simulate the real-time dynamics of open quantum systems via an exact probabilistic formulation. By parameterizing the quantum state with an autoregressive Transformer, we accurately track the dynamics and steady states of 1-D and 2-D transverse-field Ising and Heisenberg models. For 2-D systems, we introduce String States, which partially restore the symmetry of the Transformer.
1259
+
1260
Our method constitutes an important step in the machine-learning approach to quantum many-body dynamics simulation. It provides the first exact sampling method for neural networks in OQS, which is a crucial improvement over the standard Markov chain Monte Carlo techniques used with RBMs, as well as an efficient stochastic optimization method for high-dimensional differential equations. Our approach is versatile and applicable to general quantum dynamics in various contexts, including closed-system quantum dynamics, finite-temperature dynamics of the density matrix, and challenging fermionic transport problems Souza and Sanz (2017); Yan (2014) with interactions with the environment Berkelbach and Thoss (2020). Due to the probabilistic formulation as a quantum-classical mapping, our work has applications beyond quantum mechanics and demonstrates how to efficiently solve high-dimensional probabilistic differential equations with autoregressive neural networks. Such probabilistic equations appear in a wide variety of classical contexts, and our work represents an important step forward in this direction.
1261
+
1262
Acknowledgements. Di Luo is grateful for insightful discussions with Filippo Vicentini, and thanks Filippo Vicentini, Alberto Biella and Cristiano Ciuti for providing the original data from their paper Vicentini et al. (2019b). Di Luo would also like to thank Mohamed Hibat-Allah for sharing his insights on the RNN wavefunction. Zhuo Chen is indebted to Qiwei Zhang for her contribution in digitizing Fig. 3 for the exact result and drawing the string figures in Fig. 1. J.C. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Shared Hierarchical Academic Research Computing Network (SHARCNET), Compute Canada, a Google Quantum Research Award, and the Canadian Institute for Advanced Research (CIFAR) AI chair program. B.K.C. acknowledges support from the Department of Energy grant DOE DE-SC0020165. This work utilized resources supported by the National Science Foundation’s Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. Z.C. acknowledges support from the A.C. Anderson Summer Research Award.
1263
+
1264
+ References
1265
+ Verstraete et al. (2009) F. Verstraete, M. M. Wolf,  and J. Ignacio Cirac, Nature Physics 5, 633 (2009).
1266
+ Barreiro et al. (2011) J. T. Barreiro, M. Müller, P. Schindler, D. Nigg, T. Monz, M. Chwalla, M. Hennrich, C. F. Roos, P. Zoller,  and R. Blatt, Nature 470, 486 (2011).
1267
+ Sieberer et al. (2016) L. M. Sieberer, M. Buchhold,  and S. Diehl, Reports on Progress in Physics 79, 096001 (2016).
1268
+ Maghrebi and Gorshkov (2016) M. F. Maghrebi and A. V. Gorshkov, Phys. Rev. B 93, 014307 (2016).
1269
+ Mascarenhas et al. (2015) E. Mascarenhas, H. Flayac,  and V. Savona, Phys. Rev. A 92, 022116 (2015).
1270
+ Cui et al. (2015) J. Cui, J. I. Cirac,  and M. C. Bañuls, Phys. Rev. Lett. 114, 220601 (2015).
1271
+ Jaschke et al. (2018) D. Jaschke, S. Montangero,  and L. D. Carr, Quantum Science and Technology 4, 013001 (2018).
1272
+ Werner et al. (2016) A. H. Werner, D. Jaschke, P. Silvi, M. Kliesch, T. Calarco, J. Eisert,  and S. Montangero, Phys. Rev. Lett. 116, 237201 (2016).
1273
+ Biella et al. (2018) A. Biella, J. Jin, O. Viyuela, C. Ciuti, R. Fazio,  and D. Rossini, Phys. Rev. B 97, 035103 (2018).
1274
+ Jin et al. (2016) J. Jin, A. Biella, O. Viyuela, L. Mazza, J. Keeling, R. Fazio,  and D. Rossini, Phys. Rev. X 6, 031011 (2016).
1275
+ Finazzi et al. (2015) S. Finazzi, A. Le Boité, F. Storme, A. Baksic,  and C. Ciuti, Phys. Rev. Lett. 115, 080604 (2015).
1276
+ Rota et al. (2017) R. Rota, F. Storme, N. Bartolo, R. Fazio,  and C. Ciuti, Phys. Rev. B 95, 134431 (2017).
1277
+ Rota et al. (2019) R. Rota, F. Minganti, C. Ciuti,  and V. Savona, Phys. Rev. Lett. 122, 110405 (2019).
1278
+ Casteels et al. (2018) W. Casteels, R. M. Wilson,  and M. Wouters, Phys. Rev. A 97, 062107 (2018).
1279
+ Nagy and Savona (2018) A. Nagy and V. Savona, Phys. Rev. A 97, 052129 (2018).
1280
+ Shammah et al. (2018) N. Shammah, S. Ahmed, N. Lambert, S. De Liberato,  and F. Nori, Phys. Rev. A 98, 063815 (2018).
1281
+ Vicentini et al. (2019a) F. Vicentini, F. Minganti, A. Biella, G. Orso,  and C. Ciuti, Phys. Rev. A 99, 032115 (2019a).
1282
+ Kshetrimayum et al. (2017) A. Kshetrimayum, H. Weimer,  and R. Orús, Nature Communications 8, 1291 (2017).
1283
+ Carusotto and Ciuti (2013) I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013).
1284
+ Hartmann (2016) M. J. Hartmann, Journal of Optics 18, 104005 (2016).
1285
+ Noh and Angelakis (2016) C. Noh and D. G. Angelakis, Reports on Progress in Physics 80, 016401 (2016).
1286
+ Bartolo et al. (2016) N. Bartolo, F. Minganti, W. Casteels,  and C. Ciuti, Phys. Rev. A 94, 033841 (2016).
1287
+ Biella et al. (2017) A. Biella, F. Storme, J. Lebreuilly, D. Rossini, R. Fazio, I. Carusotto,  and C. Ciuti, Phys. Rev. A 96, 023839 (2017).
1288
+ Biondi et al. (2017) M. Biondi, G. Blatter, H. E. Türeci,  and S. Schmidt, Phys. Rev. A 96, 043809 (2017).
1289
+ Carmichael (2015) H. J. Carmichael, Phys. Rev. X 5, 031028 (2015).
1290
+ Casteels et al. (2017) W. Casteels, R. Fazio,  and C. Ciuti, Phys. Rev. A 95, 012128 (2017).
1291
+ Casteels et al. (2016) W. Casteels, F. Storme, A. Le Boité,  and C. Ciuti, Phys. Rev. A 93, 033824 (2016).
1292
+ Fink et al. (2017) J. M. Fink, A. Dombi, A. Vukics, A. Wallraff,  and P. Domokos, Phys. Rev. X 7, 011012 (2017).
1293
+ Fink et al. (2018) T. Fink, A. Schade, S. Höfling, C. Schneider,  and A. Imamoglu, Nature Physics 14, 365 (2018).
1294
+ Fitzpatrick et al. (2017) M. Fitzpatrick, N. M. Sundaresan, A. C. Y. Li, J. Koch,  and A. A. Houck, Phys. Rev. X 7, 011016 (2017).
1295
+ Foss-Feig et al. (2017) M. Foss-Feig, P. Niroula, J. T. Young, M. Hafezi, A. V. Gorshkov, R. M. Wilson,  and M. F. Maghrebi, Phys. Rev. A 95, 043826 (2017).
1296
+ Kessler et al. (2012) E. M. Kessler, G. Giedke, A. Imamoglu, S. F. Yelin, M. D. Lukin,  and J. I. Cirac, Phys. Rev. A 86, 012116 (2012).
1297
+ Marino and Diehl (2016) J. Marino and S. Diehl, Phys. Rev. Lett. 116, 070407 (2016).
1298
+ Savona (2017) V. Savona, Phys. Rev. A 96, 033826 (2017).
1299
+ Sieberer et al. (2013) L. M. Sieberer, S. D. Huber, E. Altman,  and S. Diehl, Phys. Rev. Lett. 110, 195301 (2013).
1300
+ Vicentini et al. (2018) F. Vicentini, F. Minganti, R. Rota, G. Orso,  and C. Ciuti, Phys. Rev. A 97, 013853 (2018).
1301
+ Lee et al. (2013) T. E. Lee, S. Gopalakrishnan,  and M. D. Lukin, Phys. Rev. Lett. 110, 257204 (2013).
1302
+ Rota et al. (2018) R. Rota, F. Minganti, A. Biella,  and C. Ciuti, New Journal of Physics 20, 045003 (2018).
1303
+ Blais et al. (2020) A. Blais, A. L. Grimsmo, S. M. Girvin,  and A. Wallraff, “Circuit quantum electrodynamics,”  (2020), arXiv:2005.12667 [quant-ph] .
1304
+ Yoshioka et al. (2019) N. Yoshioka, Y. O. Nakagawa, K. Mitarai,  and K. Fujii, “Variational quantum algorithm for non-equilibrium steady states,”  (2019), arXiv:1908.09836 [quant-ph] .
1305
+ Liu et al. (2020) Z. Liu, L. M. Duan,  and D.-L. Deng, “Solving quantum master equations with deep quantum neural networks,”  (2020), arXiv:2008.05488 [quant-ph] .
1306
+ Lee et al. (2020) C.-K. Lee, P. Patil, S. Zhang,  and C.-Y. Hsieh, “A neural-network variational quantum algorithm for many-body dynamics,”  (2020), arXiv:2008.13329 [quant-ph] .
1307
+ Ramusat and Savona (2020) N. Ramusat and V. Savona, “A quantum algorithm for the direct estimation of the steady state of open quantum systems,”  (2020), arXiv:2008.07133 [quant-ph] .
1308
+ Liu et al. (2021) H.-Y. Liu, T.-P. Sun, Y.-C. Wu,  and G.-P. Guo, “Variational quantum algorithms for steady states of open quantum systems,”  (2021), arXiv:2001.02552 [quant-ph] .
1309
+ Scarlatella et al. (2020) O. Scarlatella, A. A. Clerk, R. Fazio,  and M. Schiró, “Dynamical mean-field theory for open markovian quantum many body systems,”  (2020), arXiv:2008.02563 [cond-mat.stat-mech] .
1310
+ Verstraete et al. (2004) F. Verstraete, J. J. García-Ripoll,  and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004).
1311
+ Zwolak and Vidal (2004) M. Zwolak and G. Vidal, Phys. Rev. Lett. 93, 207205 (2004).
1312
+ Lagaris et al. (1997) I. E. Lagaris, A. Likas,  and D. I. Fotiadis, Computer Physics Communications 104, 1 (1997).
1313
+ Carleo and Troyer (2017) G. Carleo and M. Troyer, Science 355, 602–606 (2017).
1314
+ Luo and Clark (2019) D. Luo and B. K. Clark, Phys. Rev. Lett. 122, 226401 (2019).
1315
+ Pfau et al. (2019) D. Pfau, J. S. Spencer, A. G. de G. Matthews,  and W. M. C. Foulkes, “Ab-initio solution of the many-electron schrödinger equation with deep neural networks,”  (2019), arXiv:1909.02487 [physics.chem-ph] .
1316
+ Hermann et al. (2019) J. Hermann, Z. Schätzle,  and F. Noé, “Deep neural network solution of the electronic schrödinger equation,”  (2019), arXiv:1909.08423 [physics.comp-ph] .
1317
+ Hibat-Allah et al. (2020) M. Hibat-Allah, M. Ganahl, L. E. Hayward, R. G. Melko,  and J. Carrasquilla, Phys. Rev. Research 2, 023358 (2020).
1318
+ Sharir et al. (2020) O. Sharir, Y. Levine, N. Wies, G. Carleo,  and A. Shashua, Phys. Rev. Lett. 124, 020503 (2020).
1319
+ Lu et al. (2019) S. Lu, X. Gao,  and L.-M. Duan, Phys. Rev. B 99, 155136 (2019).
1320
+ Gao and Duan (2017) X. Gao and L.-M. Duan, Nature Communications 8, 662 (2017).
1321
+ Glasser et al. (2018) I. Glasser, N. Pancotti, M. August, I. D. Rodriguez,  and J. I. Cirac, Physical Review X 8 (2018), 10.1103/physrevx.8.011006.
1322
+ Vicentini et al. (2019b) F. Vicentini, A. Biella, N. Regnault,  and C. Ciuti, Phys. Rev. Lett. 122, 250503 (2019b).
1323
+ Yoshioka and Hamazaki (2019) N. Yoshioka and R. Hamazaki, Phys. Rev. B 99, 214306 (2019).
1324
+ Hartmann and Carleo (2019) M. J. Hartmann and G. Carleo, Phys. Rev. Lett. 122, 250502 (2019).
1325
+ Nagy and Savona (2019) A. Nagy and V. Savona, Phys. Rev. Lett. 122, 250501 (2019).
1326
+ Yuan et al. (2020) D. Yuan, H. Wang, Z. Wang,  and D.-L. Deng, “Solving the liouvillian gap with artificial neural networks,”  (2020), arXiv:2009.00019 [quant-ph] .
1327
+ Torlai and Melko (2018) G. Torlai and R. G. Melko, Phys. Rev. Lett. 120, 240503 (2018).
1328
+ Ferrari et al. (2019) F. Ferrari, F. Becca,  and J. Carrasquilla, Phys. Rev. B 100, 125131 (2019).
1329
+ Westerhout et al. (2020) T. Westerhout, N. Astrakhantsev, K. S. Tikhonov, M. I. Katsnelson,  and A. A. Bagrov, Nature Communications 11, 1593 (2020).
1330
+ Szabó and Castelnovo (2020) A. Szabó and C. Castelnovo, Phys. Rev. Research 2, 033075 (2020).
1331
+ Lundeen et al. (2009) J. S. Lundeen, A. Feito, H. Coldenstrodt-Ronge, K. L. Pregnell, C. Silberhorn, T. C. Ralph, J. Eisert, M. B. Plenio,  and I. A. Walmsley, Nature Physics 5, 27 (2009).
1332
+ Carrasquilla et al. (2019a) J. Carrasquilla, G. Torlai, R. G. Melko,  and L. Aolita, Nature Machine Intelligence 1, 155 (2019a).
1333
+ Carrasquilla et al. (2019b) J. Carrasquilla, D. Luo, F. Pérez, A. Milsted, B. K. Clark, M. Volkovs,  and L. Aolita, “Probabilistic simulation of quantum circuits with the transformer,”  (2019b), arXiv:1912.11052 .
1334
+ Kiktenko et al. (2020) E. O. Kiktenko, A. O. Malyshev, A. S. Mastiukova, V. I. Man’ko, A. K. Fedorov,  and D. Chruściński, Physical Review A 101 (2020), 10.1103/physreva.101.052320.
1335
+ Vaswani et al. (2017) A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser,  and I. Polosukhin, CoRR abs/1706.03762 (2017), arXiv:1706.03762 .
1336
+ Iserles (2008) A. Iserles, “Euler’s method and beyond,” in A First Course in the Numerical Analysis of Differential Equations, Cambridge Texts in Applied Mathematics (Cambridge University Press, 2008) p. 8–13, 2nd ed.
1337
+ Hochreiter and Schmidhuber (1997) S. Hochreiter and J. Schmidhuber, Neural Comput. 9, 1735–1780 (1997).
1338
+ Cho et al. (2014) K. Cho, B. van Merrienboer, Ç. Gülçehre, F. Bougares, H. Schwenk,  and Y. Bengio, CoRR abs/1406.1078 (2014), arXiv:1406.1078 .
1339
+ van den Oord et al. (2016) A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves,  and K. Kavukcuoglu, CoRR abs/1606.05328 (2016), arXiv:1606.05328 .
1340
+ Cha et al. (2021) P. Cha, P. Ginsparg, F. Wu, J. Carrasquilla, P. L. McMahon,  and E.-A. Kim, Machine Learning: Science and Technology 3, 01LT01 (2021).
1341
+ Hastie et al. (2001) T. Hastie, R. Tibshirani,  and J. Friedman, The Elements of Statistical Learning, Springer Series in Statistics (Springer New York Inc., New York, NY, USA, 2001).
1342
+ Schuch et al. (2008) N. Schuch, M. M. Wolf, F. Verstraete,  and J. I. Cirac, Phys. Rev. Lett. 100, 040501 (2008).
1343
+ Mahajan and Sharma (2019) A. Mahajan and S. Sharma, The Journal of Physical Chemistry A 123, 3911 (2019).
1344
+ Tahara and Imada (2008) D. Tahara and M. Imada, Journal of the Physical Society of Japan 77, 114701 (2008).
1345
+ Qiu et al. (2017) Y. Qiu, T. M. Henderson, J. Zhao,  and G. E. Scuseria, The Journal of Chemical Physics 147, 064111 (2017).
1346
+ Luo et al. (2020) D. Luo, Z. Chen, J. Carrasquilla,  and B. K. Clark, “Supplementary material,”  (2020).
1347
+ Paszke et al. (2019) A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai,  and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,”  (2019), arXiv:1912.01703 [cs.LG] .
1348
+ Kingma and Ba (2014) D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,”  (2014), arXiv:1412.6980 [cs.LG] .
1349
+ Kochkov and Clark (2018) D. Kochkov and B. K. Clark, “Variational optimization in the ai era: Computational graph states and supervised wave-function optimization,”  (2018), arXiv:1811.12423 [cond-mat.str-el] .
1350
+ Gutiérrez and Mendl (2019) I. L. Gutiérrez and C. B. Mendl, “Real time evolution with neural-network quantum states,”  (2019), arXiv:1912.08831 [cond-mat.dis-nn] .
1351
+ Johansson et al. (2013) J. Johansson, P. Nation,  and F. Nori, Computer Physics Communications 184, 1234 (2013).
1352
+ Johansson et al. (2012) J. Johansson, P. Nation,  and F. Nori, Computer Physics Communications 183, 1760 (2012).
1353
+ Souza and Sanz (2017) F. M. Souza and L. Sanz, Phys. Rev. A 96, 052110 (2017).
1354
+ Yan (2014) Y. Yan, The Journal of Chemical Physics 140, 054105 (2014).
1355
+ Berkelbach and Thoss (2020) T. C. Berkelbach and M. Thoss, The Journal of Chemical Physics 152, 020401 (2020).
1356
+ Appendix A Supplementary Material for Autoregressive Transformer Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation
1357
Appendix B I. Lindblad Equation in POVM Formalism
1358
+
1359
Start from the Lindblad equation for density matrices

$$\dot{\rho} = \mathcal{L}\rho = -i[H,\rho] + \sum_{k}\frac{\gamma_{k}}{2}\left(2\,\Gamma_{k}\rho\,\Gamma_{k}^{\dagger} - \{\rho,\ \Gamma_{k}^{\dagger}\Gamma_{k}\}\right), \tag{S1}$$
1410
+
1411
the frame and dual-frame satisfy

$$p(\boldsymbol{a}) = \operatorname{Tr}\left(\rho\, M(\boldsymbol{a})\right), \tag{S2}$$

and

$$\rho = \sum_{\boldsymbol{b}} p(\boldsymbol{b})\, N(\boldsymbol{b}). \tag{S3}$$
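Eqs. S2 and S3 can be checked directly for a single qubit. The sketch below is an illustrative check, not part of the paper: it uses the tetrahedral POVM of Sec. II, builds the dual frame from the inverse of the overlap matrix $T_{ab}=\operatorname{Tr}(M(a)M(b))$, and verifies that the dual frame reconstructs an arbitrary density matrix from its outcome probabilities.

```python
import numpy as np

# Pauli matrices and the four tetrahedral directions (Eqs. S9-S12 of Sec. II).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
v = np.array([[0, 0, 1],
              [2 * np.sqrt(2) / 3, 0, -1 / 3],
              [-np.sqrt(2) / 3, np.sqrt(6) / 3, -1 / 3],
              [-np.sqrt(2) / 3, -np.sqrt(6) / 3, -1 / 3]])
M = [(I2 + a[0] * sx + a[1] * sy + a[2] * sz) / 4 for a in v]   # Eq. S8

# Dual frame N(b) = sum_a (T^-1)[b, a] M(a), so that Tr(N(b) M(a)) = delta_ab.
T = np.array([[np.trace(Ma @ Mb).real for Mb in M] for Ma in M])
Tinv = np.linalg.inv(T)
N = [sum(Tinv[b, a] * M[a] for a in range(4)) for b in range(4)]

# A random single-qubit density matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T / np.trace(A @ A.conj().T)

p = np.array([np.trace(rho @ Ma).real for Ma in M])    # Eq. S2
rho_back = sum(p[b] * N[b] for b in range(4))          # Eq. S3

assert np.isclose(p.sum(), 1.0)      # POVM probabilities sum to one
assert np.allclose(rho_back, rho)    # the dual frame reconstructs rho
```

The reconstruction works because the four tetrahedral elements span the full single-qubit operator space, i.e. the POVM is informationally complete.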
1454
+
1455
Plugging Eq. S3 into Eq. S1, we have

$$\sum_{\boldsymbol{b}} \dot{p}(\boldsymbol{b})\, N(\boldsymbol{b}) = \sum_{\boldsymbol{b}} p(\boldsymbol{b})\left[-i[H, N(\boldsymbol{b})] + \sum_{k}\frac{\gamma_{k}}{2}\left(2\,\Gamma_{k} N(\boldsymbol{b})\Gamma_{k}^{\dagger} - \{N(\boldsymbol{b}),\ \Gamma_{k}^{\dagger}\Gamma_{k}\}\right)\right]. \tag{S4}$$
1532
+
1533
Plugging Eq. S3 into Eq. S2, we have

$$p(\boldsymbol{a}) = \sum_{\boldsymbol{b}} p(\boldsymbol{b})\, \operatorname{Tr}\left(N(\boldsymbol{b})\, M(\boldsymbol{a})\right). \tag{S5}$$
1566
+
1567
Therefore, multiplying both sides of Eq. S4 by $M(\boldsymbol{a})$ and taking the trace,

$$\sum_{\boldsymbol{b}} \dot{p}(\boldsymbol{b})\, \operatorname{Tr}\left(N(\boldsymbol{b})\, M(\boldsymbol{a})\right) = \sum_{\boldsymbol{b}} p(\boldsymbol{b})\left[-i\operatorname{Tr}\left([H, N(\boldsymbol{b})]\, M(\boldsymbol{a})\right) + \sum_{k}\frac{\gamma_{k}}{2}\operatorname{Tr}\left(2\,\Gamma_{k} N(\boldsymbol{b})\Gamma_{k}^{\dagger} M(\boldsymbol{a}) - \{N(\boldsymbol{b}),\ \Gamma_{k}^{\dagger}\Gamma_{k}\}\, M(\boldsymbol{a})\right)\right]. \tag{S6}$$
1679
+
1680
Replacing the left side with Eq. S5 and rearranging the right side, we arrive at

$$\dot{p}(\boldsymbol{a}) = \sum_{\boldsymbol{b}} p(\boldsymbol{b})\left[-i\operatorname{Tr}\left(H\,[N(\boldsymbol{b}), M(\boldsymbol{a})]\right) + \sum_{k}\frac{\gamma_{k}}{2}\operatorname{Tr}\left(2\,\Gamma_{k} N(\boldsymbol{b})\Gamma_{k}^{\dagger} M(\boldsymbol{a}) - \Gamma_{k}^{\dagger}\Gamma_{k}\,\{N(\boldsymbol{b}),\ M(\boldsymbol{a})\}\right)\right] \equiv \sum_{\boldsymbol{b}} p(\boldsymbol{b})\, L_{\boldsymbol{a}\boldsymbol{b}}. \tag{S7}$$
1783
+
1784
+ Notice that the equation of motion is exact and mathematically equivalent to the standard density matrix Lindblad equation. Therefore, this equation preserves the positivity of the probability distributions as long as the initial probability distribution is positive and corresponds to a quantum state, which is the case as it is derived from a physical state. Our algorithm based on this equation also imposes positivity since the autoregressive neural network always parameterizes a positive probability distribution by construction.
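For intuition, the coefficients $L_{\boldsymbol{a}\boldsymbol{b}}$ of Eq. S7 can be built explicitly in a small case. Below is an illustrative single-qubit sketch with an invented model (not one studied in the paper): $H=0$ and a single decay operator $\Gamma=\sigma^{-}$ with $\gamma=1$, in the tetrahedral POVM of Sec. II. It checks that the columns of $L$ sum to zero (total probability is conserved) and that the known fixed point $|0\rangle\langle 0|$ satisfies $\dot{p}=0$.

```python
import numpy as np

# Single-qubit operators and the tetrahedral POVM frame / dual frame.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)        # sigma_minus: |1> -> |0>
v = np.array([[0, 0, 1],
              [2 * np.sqrt(2) / 3, 0, -1 / 3],
              [-np.sqrt(2) / 3, np.sqrt(6) / 3, -1 / 3],
              [-np.sqrt(2) / 3, -np.sqrt(6) / 3, -1 / 3]])
M = [(I2 + a[0] * sx + a[1] * sy + a[2] * sz) / 4 for a in v]
T = np.array([[np.trace(Ma @ Mb).real for Mb in M] for Ma in M])
Tinv = np.linalg.inv(T)
N = [sum(Tinv[b, a] * M[a] for a in range(4)) for b in range(4)]

def povm_liouvillian(H, gammas, Gammas):
    """L_ab of Eq. S7 for one qubit in the tetrahedral POVM."""
    L = np.zeros((4, 4))
    for a in range(4):
        for b in range(4):
            val = -1j * np.trace(H @ (N[b] @ M[a] - M[a] @ N[b]))
            for g, G in zip(gammas, Gammas):
                anti = N[b] @ M[a] + M[a] @ N[b]      # {N(b), M(a)}
                val += g / 2 * np.trace(2 * G @ N[b] @ G.conj().T @ M[a]
                                        - G.conj().T @ G @ anti)
            L[a, b] = val.real                        # imaginary part vanishes
    return L

# Pure decay: H = 0, one jump operator sigma_minus with gamma = 1.
L = povm_liouvillian(np.zeros((2, 2), dtype=complex), [1.0], [sm])
assert np.allclose(L.sum(axis=0), 0)                  # probability conserved
p_ss = np.array([np.trace(np.diag([1, 0]) @ Ma).real for Ma in M])
assert np.allclose(L @ p_ss, 0)                       # |0><0| is the fixed point

# One forward-Euler step of the probability, as in the time evolution.
p = np.full(4, 0.25)
p = p + 0.01 * (L @ p)
assert np.isclose(p.sum(), 1.0)
```

Because $\sum_{\boldsymbol{a}} M(\boldsymbol{a}) = \mathbb{1}$, each column of $L$ sums to zero, so any explicit time-stepping scheme built on Eq. S7 conserves total probability exactly.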
1785
+
1786
Appendix C II. Tetrahedral and 4-Pauli POVM
1787
+
1788
In the main paper, we used two POVMs, the tetrahedral POVM and the 4-Pauli POVM. The tetrahedral POVM forms a tetrahedron in the Bloch sphere. In particular, it takes the form of
1789
+
1790
+
1791
$$M(a) = \frac{1}{4}\left(\mathbb{1} + \boldsymbol{v}(a)\cdot\boldsymbol{\sigma}\right), \tag{S8}$$
1812
+
1813
where $\mathbb{1}$ is the identity matrix, $\boldsymbol{\sigma}$ is the vector of Pauli matrices, and $\boldsymbol{v}(a)$ are four unit vectors which form a tetrahedron. In the main paper, we choose the four vectors to be
1823
+
1824
+
1825
$$\boldsymbol{v}(1) = \left(0,\ 0,\ 1\right), \tag{S9}$$

$$\boldsymbol{v}(2) = \left(\frac{2\sqrt{2}}{3},\ 0,\ -\frac{1}{3}\right), \tag{S10}$$

$$\boldsymbol{v}(3) = \left(-\frac{\sqrt{2}}{3},\ \frac{\sqrt{6}}{3},\ -\frac{1}{3}\right), \tag{S11}$$

$$\boldsymbol{v}(4) = \left(-\frac{\sqrt{2}}{3},\ -\frac{\sqrt{6}}{3},\ -\frac{1}{3}\right). \tag{S12}$$
1911
+
1912
+ The 4-Pauli POVM, on the other hand, takes the form of
1913
+
1914
+
1915
$$M(1) = \frac{1}{3}\,|0\rangle\langle 0|, \tag{S13}$$

$$M(2) = \frac{1}{3}\,|+\rangle\langle +|, \tag{S14}$$

$$M(3) = \frac{1}{3}\,|r\rangle\langle r|, \tag{S15}$$

$$M(4) = \mathbb{1} - M(1) - M(2) - M(3), \tag{S16}$$
2003
+
2004
where $|0\rangle$, $|+\rangle$, and $|r\rangle$ are the positive eigenstates of $\sigma^{z}$, $\sigma^{x}$, and $\sigma^{y}$, respectively. The multi-site POVM is constructed as the tensor product of single-site POVMs.
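As an illustrative check (not from the paper), the 4-Pauli elements of Eqs. S13-S16 and their tensor-product extension can be constructed directly:

```python
import numpy as np
from itertools import product

ket0 = np.array([1, 0], dtype=complex)                # +1 eigenstate of sigma_z
ketp = np.array([1, 1], dtype=complex) / np.sqrt(2)   # +1 eigenstate of sigma_x
ketr = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # +1 eigenstate of sigma_y

def proj(k):
    return np.outer(k, k.conj())

M = [proj(ket0) / 3, proj(ketp) / 3, proj(ketr) / 3]  # Eqs. S13-S15
M.append(np.eye(2) - M[0] - M[1] - M[2])              # Eq. S16

# Each element is positive semi-definite, and together they resolve identity.
for Ma in M:
    assert np.all(np.linalg.eigvalsh(Ma) > -1e-12)
assert np.allclose(sum(M), np.eye(2))

# Multi-site POVM: tensor products of single-site elements (two sites shown).
M2 = [np.kron(Ma, Mb) for Ma, Mb in product(M, M)]
assert len(M2) == 16 and np.allclose(sum(M2), np.eye(4))
```

Note that $M(4)=\mathbb{1}/2-(\sigma^{x}+\sigma^{y}+\sigma^{z})/6$ has eigenvalues $1/2\pm\sqrt{3}/6>0$, so the fourth element is a valid POVM effect.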
2026
+
2027
Appendix D III. Convergence of Loss

Figure S1: $\langle\sigma^{z}\rangle$ for the variational-plus-dynamics result for the $3\times 3$ Heisenberg model with $B=0$, $J_{x}=0.9\gamma$, $J_{y}=1.8\gamma$, and $J_{z}=\gamma$, using one transformer layer, 32 hidden dimensions and no strings. The Transformer is trained using a forward-backward trapezoid method with 12000 samples and a time step $\tau=0.008\,\gamma^{-1}$.
2069
Figure S2: Examples of variational loss values. (a) Loss values for the 16-spin 1-D TFIM with $V=g=2\gamma$ using one transformer layer and 32 hidden dimensions. (b) Loss values for the $3\times 3$ Heisenberg model with $B=0$, $J_{x}=0.9\gamma$, $J_{y}=1.8\gamma$, and $J_{z}=\gamma$ using two layers, 64 hidden dimensions and no strings. The Transformer is trained using Adam Kingma and Ba (2014) with a sample size of 12000.
2105
+
2106
Here we show that the observable converges for the dynamics process (Fig. S1) and that the training loss converges for the variational method (Fig. S2).
2107
+
2108
Appendix E IV. Results for Heisenberg Model with Larger Energy Scale

Figure S3: $\langle\sigma^{z}\rangle$ as a function of time computed with POVM dynamics: short-time dynamics results for various system sizes for the Heisenberg model in a 1-D configuration with periodic boundary condition, where $B=10\gamma$, $J_{x}=20\gamma$, $J_{y}=0$, and $J_{z}=10\gamma$. The initial state is the product state $\bigotimes_{i=1}^{N}|{\downarrow}\rangle$ ($\sigma^{y}=-1$). The exact curve is produced using QuTiP Johansson et al. (2013, 2012). The Transformer is trained using a forward-backward trapezoid method with a sample size of 12000 and a time step of $0.0005\,\gamma^{-1}$. The neural network has one encoder layer and 32 hidden dimensions.
2163
Figure S4: $\langle\sigma^{z}\rangle$ as a function of time computed with POVM dynamics: long-time dynamics results for various system sizes for the Heisenberg model in a 1-D configuration with periodic boundary condition, where $B=10\gamma$, $J_{x}=20\gamma$, $J_{y}=0$, and $J_{z}=10\gamma$. The initial state is the product state $\bigotimes_{i=1}^{N}|{\downarrow}\rangle$ ($\sigma^{y}=-1$). The exact curve is produced using QuTiP Johansson et al. (2013, 2012). The neural network is trained using a forward-backward trapezoid method with a sample size of 12000 and a time step of $0.0075\,\gamma^{-1}$. The neural network has one encoder layer and 32 hidden dimensions.
2217
+
2218
The dynamics algorithm is also tested on a different Heisenberg model, where $B=10\gamma$, $J_{x}=20\gamma$, $J_{y}=0$, and $J_{z}=10\gamma$. This model has a higher energy scale compared with the model in the main paper. Since the dissipation operator is relatively small compared with the Hamiltonian, this model reveals closed-system as well as open-system properties. In Fig. S3, we show the short-time dynamics of this model for different numbers of spins. It can be seen that for short-time dynamics, the neural network predicts the observables to great precision. In Fig. S4, we show the long-time dynamics behavior for different numbers of spins. Even though the performance varies between system sizes, the dynamics converges to the correct steady state as the system size grows.

Appendix V. Improving Performance by Combining Probabilities

Figure S5: Data from multiple runs of the variational steady-state solution for the one-dimensional 16-site spin chain for the TFIM with periodic boundary conditions, where $V = g = 2\gamma$. The initial state is the product state $\bigotimes_{i=1}^{N} |\sigma^z = 1\rangle$. The neural network has one encoder layer and 32 hidden dimensions, and is trained using Adam Kingma and Ba (2014) for 500 iterations with a sample size of 12000. The exact curve (black line) is digitized from Ref. Vicentini et al., 2019b.

Because of the stochastic nature of the initialization and training process, each training run can yield a slightly different result. In principle, we can average over multiple results to achieve better performance by defining the overall POVM probability

$$\bar{p}(\boldsymbol{a}) = \frac{1}{N} \sum_{i} p_i(\boldsymbol{a}). \tag{S17}$$

Then, the observable is computed as

$$\langle O \rangle = \frac{1}{N} \sum_{i,\boldsymbol{b}} p_i(\boldsymbol{b}) \, \operatorname{Tr}\!\left(O N^{(\boldsymbol{b})}\right) = \frac{1}{N} \sum_{i} \langle O \rangle_i, \tag{S18}$$

which turns out to be the average of the observables. In Fig. S5, we show the results of multiple training runs for the 1-D transverse-field Ising model. It can be noted that for $\langle \sigma^x \rangle$ and $\langle \sigma^y \rangle$, the averaged result is indeed better.
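
The linearity underlying Eqs. S17 and S18 can be checked numerically: averaging the POVM distributions of several runs and then computing the observable gives the same value as averaging the observables of the individual runs. A minimal sketch; the random distributions and the per-outcome observable values here are illustrative placeholders, not the trained networks of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "runs", each a normalized POVM distribution over 16 outcomes.
runs = [rng.random(16) for _ in range(3)]
runs = [p / p.sum() for p in runs]

# Placeholder observable: Tr(O N^(b)) collected into one value per outcome b.
tr_O_N = rng.standard_normal(16)

# Route 1 (Eq. S17 then S18): average the probabilities, then compute <O>.
p_bar = sum(runs) / len(runs)
obs_from_avg_prob = p_bar @ tr_O_N

# Route 2 (right-hand side of Eq. S18): average the individual observables.
obs_avg = sum(p @ tr_O_N for p in runs) / len(runs)

assert np.isclose(obs_from_avg_prob, obs_avg)
```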

Appendix VI. Neural Network Initialization

All the neural networks used in the main paper have their weights and biases (except in the last layer) initialized using the PyTorch Paszke et al. (2019) default linear-layer initialization. Every training process starts from a product state with either $\sigma^z = 1$ or $\sigma^y = -1$ on every site (see the figure captions for the exact initial state). To initialize the neural network in such a product state, in the last fully connected layer the weight is set to zero and the bias is set to $\log\left(\langle \psi | M^{(a)} | \psi \rangle\right)$, where $M^{(a)}$ is the single-spin POVM element. Thus, after the softmax, the output of the neural network is the corresponding product state in the POVM basis.
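
As a sketch of this initialization, assuming the tetrahedral single-qubit POVM $M^{(a)} = \frac{1}{4}(I + \boldsymbol{s}_a \cdot \boldsymbol{\sigma})$ (the specific POVM is our assumption for illustration; the paper's construction may differ in detail): with the last-layer weight set to zero, the pre-softmax output equals the bias, so setting the bias to $\log \langle \psi | M^{(a)} | \psi \rangle$ makes the softmax reproduce the single-spin POVM distribution exactly.

```python
import numpy as np

# Tetrahedral POVM directions s_a (unit vectors to tetrahedron vertices).
s = np.array([
    [0.0, 0.0, 1.0],
    [2.0 * np.sqrt(2.0) / 3.0, 0.0, -1.0 / 3.0],
    [-np.sqrt(2.0) / 3.0, np.sqrt(2.0 / 3.0), -1.0 / 3.0],
    [-np.sqrt(2.0) / 3.0, -np.sqrt(2.0 / 3.0), -1.0 / 3.0],
])

# Bloch vector of the single-spin state; (0, 0, 1) is |sigma^z = 1>.
r = np.array([0.0, 0.0, 1.0])

# p(a) = <psi| M^(a) |psi> = (1 + s_a . r) / 4 for the tetrahedral POVM.
p = (1.0 + s @ r) / 4.0

# Last layer: weight = 0, bias = log p(a); softmax(bias) recovers p exactly.
bias = np.log(p)
softmax = np.exp(bias) / np.exp(bias).sum()

assert np.allclose(softmax, p)
assert np.isclose(p.sum(), 1.0)
```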

Appendix VII. Explanation of the Dynamics and Variational Cost Functions in Detail

In this section, we explain the dynamics and variational cost functions (Eq. 4 and Eq. 5 in the main paper) in detail. We start with the dynamics cost function. We would like to produce the probability distribution at $t + 2\tau$ from the probability distribution at $t$ using the forward-backward trapezoid method Iserles (2008) as

$$p_\theta(t + 2\tau) - \tau L \, p_\theta(t + 2\tau) = p_\theta(t) + \tau L \, p_\theta(t). \tag{S19}$$

To make the notation compatible with the main paper, we can write Eq. S19 as

$$\sum_{\boldsymbol{b}} p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) = \sum_{\boldsymbol{b}} p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right), \tag{S20}$$

where $\delta_{\boldsymbol{a}\boldsymbol{b}}$ is the Kronecker delta function. Then, we can design the cost function as the $L_1$-distance between the left-hand side and the right-hand side,

$$\mathcal{C} = \sum_{\boldsymbol{a}} \left| \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right|. \tag{S21}$$

Minimizing this cost function with respect to $p_\theta(t + 2\tau)$ would be equivalent to solving the original equation if the cost function could be minimized to zero. However, since the neural network can only approximate the probability distribution, we solve the original equation approximately. In addition, the dimension of the probability distribution increases exponentially with the number of spins, so it is not feasible to minimize Eq. S21 exactly. Therefore, we seek a stochastic version of the cost function. This can be achieved by multiplying the cost function by $1 = p_\theta(t + 2\tau)(\boldsymbol{a}) / p_\theta(t + 2\tau)(\boldsymbol{a})$, giving

$$\mathcal{C} = \sum_{\boldsymbol{a}} p_\theta(t + 2\tau)(\boldsymbol{a}) \, \frac{1}{p_\theta(t + 2\tau)(\boldsymbol{a})} \left| \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right|. \tag{S22}$$

Notice that we should only take the gradient with respect to $p_\theta(t + 2\tau)(\boldsymbol{b})$, but not $p_\theta(t + 2\tau)(\boldsymbol{a})$. Then, we can turn the first $p_\theta(t + 2\tau)(\boldsymbol{a})$ into sampling $\boldsymbol{a} \sim p_\theta(t + 2\tau)$, and the resulting equation is

$$\mathcal{C} = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta(t + 2\tau)}^{N_s} \frac{1}{p_\theta(t + 2\tau)(\boldsymbol{a})} \left| \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right|, \tag{S23}$$

which is exactly Eq. 4 in the main paper. Sampling $\boldsymbol{a} \sim p_\theta(t + 2\tau)$ is efficient, as the autoregressive neural network is designed to sample from its probability distribution efficiently and exactly (see Sec. VIII for sampling details). We should additionally notice that we do not need to sample over $\boldsymbol{b}$. Since the Hamiltonian and jump operators are local, $L_{\boldsymbol{a}\boldsymbol{b}}$ is sparse: for a given $\boldsymbol{a}$, only a small number of $\boldsymbol{b}$'s (16 per two-body local Hamiltonian term) are involved in the computation. Therefore, we can evaluate the sum over $\boldsymbol{b}$ exactly. Notice that the autoregressive neural network allows exact inference, so the probability of each configuration can be evaluated exactly. The final cost function $\mathcal{C}$ is then the expectation of the summand over $\boldsymbol{a}$ sampled from $p_\theta(t + 2\tau)$.
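
On a system small enough to enumerate, the stochastic estimator of Eq. S23 can be checked against the exact $L_1$ cost of Eq. S21. A minimal sketch in which a small random generator matrix stands in for $L_{\boldsymbol{a}\boldsymbol{b}}$ and explicit probability vectors stand in for the network (placeholders, not the paper's Liouvillian or Transformer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8          # number of POVM configurations (tiny, enumerable)
tau = 0.01

# Placeholder generator: columns sum to zero so probability is conserved.
L = rng.standard_normal((n, n))
L -= L.sum(axis=0, keepdims=True) / n

# Distributions at time t and (candidate) t + 2*tau.
p_t = rng.random(n); p_t /= p_t.sum()
p_next = rng.random(n); p_next /= p_next.sum()

I = np.eye(n)
# residual_a = sum_b [p_next(b)(delta_ab - tau L_ab) - p_t(b)(delta_ab + tau L_ab)]
residual = (I - tau * L) @ p_next - (I + tau * L) @ p_t

# Exact cost, Eq. S21: sum over all a of |residual_a|.
cost_exact = np.abs(residual).sum()

# Stochastic cost, Eq. S23: E_{a ~ p_next} [ |residual_a| / p_next(a) ].
samples = rng.choice(n, size=200_000, p=p_next)
cost_stochastic = np.mean(np.abs(residual[samples]) / p_next[samples])

assert np.isclose(cost_exact, cost_stochastic, rtol=0.05)
```

The importance weight $1/p_\theta(t+2\tau)(\boldsymbol{a})$ makes the Monte Carlo average an unbiased estimate of the full sum, which is why the two routes agree up to sampling noise.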

We can deal with the variational cost function similarly. Since we are searching for the steady state, we would like to solve for $p_\theta$ such that

$$0 = \dot{p}_\theta = L \, p_\theta, \tag{S24}$$

or, in the notation of the main paper,

$$0 = \dot{p}_\theta(\boldsymbol{a}) = \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}}. \tag{S25}$$

We can simply define the cost function as

$$\left\| \dot{p}_\theta \right\|_1 = \sum_{\boldsymbol{a}} \left| \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right|. \tag{S26}$$

Similar to the dynamics cost function, this can be turned into a stochastic cost function using the same method, and the result is

$$\left\| \dot{p}_\theta \right\|_1 = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta}^{N_s} \frac{\left| \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right|}{p_\theta(\boldsymbol{a})}, \tag{S27}$$

the same as Eq. 5 in the main paper. As for the dynamics cost function, the gradient should be taken with respect to $p_\theta(\boldsymbol{b})$ only.

Notice that these two cost functions serve two different purposes. The dynamics cost function is designed to train a neural network for each time step of an evolution, while the variational cost function seeks the steady state directly. In other words, the dynamics cost function produces many neural-network states, each representing the quantum state at a particular time, whereas the variational cost function produces only one neural-network state, the final steady state. Since the evolution is dissipative, the neural-network states generated from time evolution at large times should match the neural-network state generated from the variational cost function. In practice, however, we noticed that time evolution generally produces better steady states. We believe the reason lies in the fact that the gradients of the two cost functions are different (see Sec. X for details).

Appendix VIII. Exact Sampling from Conditional Probability Distributions

In this section, we explain how the probability distribution is sampled exactly. In the main paper, we explained that the Transformer neural network parameterizes the probability distribution over all spins as a product of conditional probabilities for each spin,

$$p(\boldsymbol{a}) = p(a_1, a_2, a_3, \ldots) = \prod_{k} p_\theta(a_k \mid a_1, a_2, \ldots, a_{k-1}). \tag{S28}$$

The sampling procedure is as follows:

1. Sample $a_1 \sim p_\theta(a_1)$;
2. Sample $a_2 \sim p_\theta(a_2 \mid a_1)$;
3. Sample $a_3 \sim p_\theta(a_3 \mid a_1, a_2)$;

…

Due to the autoregressive structure of the neural network, each sample can be drawn efficiently Vaswani et al. (2017). This procedure allows for sampling without Markov chain Monte Carlo (MCMC), so it does not need to "warm up" before generating usable samples. In addition, it allows an arbitrary number of samples to be drawn in parallel and independently, avoiding the correlations between samples in MCMC.
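
The ancestral sampling loop above can be sketched as follows. The function `conditional(prefix)` is a generic stand-in for the Transformer's conditional distribution (here a self-contained placeholder that derives fixed logits from the prefix; the real model computes these with attention):

```python
import numpy as np

rng = np.random.default_rng(3)
N_SITES = 5      # number of spins
N_OUTCOMES = 4   # POVM outcomes per spin

def conditional(prefix):
    """Placeholder for p_theta(a_k | a_1, ..., a_{k-1}): any normalized
    distribution computed from the prefix works for this sketch."""
    logits = np.cos(np.arange(N_OUTCOMES) + sum(prefix))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def sample_configuration():
    config = []
    for _ in range(N_SITES):          # sites sampled in order, no MCMC warm-up
        probs = conditional(config)
        config.append(int(rng.choice(N_OUTCOMES, p=probs)))
    return config

def joint_probability(config):
    """Exact inference via the chain rule, Eq. S28."""
    p = 1.0
    for k, a_k in enumerate(config):
        p *= conditional(config[:k])[a_k]
    return p

sample = sample_configuration()
assert len(sample) == N_SITES
assert 0.0 < joint_probability(sample) <= 1.0
```

Because each sample is generated independently, any number of configurations can be drawn in parallel, and the same conditionals give the exact probability of any configuration.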

Appendix IX. Efficient Evaluation of Cost Functions

Both the dynamics cost function

$$\mathcal{C} = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta(t + 2\tau)}^{N_s} \frac{1}{p_\theta(t + 2\tau)(\boldsymbol{a})} \left| \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right| \tag{S29}$$

and the steady-state cost function

$$\left\| \dot{p}_\theta \right\|_1 = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta}^{N_s} \frac{\left| \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right|}{p_\theta(\boldsymbol{a})} \tag{S30}$$

require evaluating a summation over $\boldsymbol{b}$. Since the operator $L_{\boldsymbol{a}\boldsymbol{b}}$ is a sum of local operators, the cost functions only need to sum over the $\boldsymbol{b}$'s that are connected to the configuration $\boldsymbol{a}$ through the local operators. For each $\boldsymbol{a}$, since the local operators couple only a polynomial number of $\boldsymbol{b}$'s (specifically 16 in our case), the evaluation of the sum is efficient.
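
As an illustration of this sparsity, for a two-body term acting on sites $(i, j)$ the connected configurations $\boldsymbol{b}$ agree with $\boldsymbol{a}$ everywhere except possibly on those two sites, giving $4 \times 4 = 16$ candidates per term. A minimal enumeration sketch (the site indexing is illustrative, not the paper's code):

```python
from itertools import product

N_OUTCOMES = 4  # POVM outcomes per site

def connected_configs(a, i, j):
    """All configurations b differing from a only on sites i and j."""
    configs = []
    for bi, bj in product(range(N_OUTCOMES), repeat=2):
        b = list(a)
        b[i], b[j] = bi, bj
        configs.append(tuple(b))
    return configs

a = (0, 1, 2, 3, 0, 1)           # an example 6-site POVM configuration
bs = connected_configs(a, 2, 3)  # one two-body term acting on sites (2, 3)

assert len(bs) == 16             # 16 connected b's per two-body term
assert a in bs                   # the diagonal element b = a is included
# Sites outside the support of the term are untouched.
assert all(b[0] == a[0] and b[5] == a[5] for b in bs)
```

The inner sum over $\boldsymbol{b}$ in Eqs. S29 and S30 therefore costs $O(16)$ per local term per sample, independent of the exponentially large configuration space.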

Appendix X. Analysis of the Gradients of the Dynamics and Variational Cost Functions

The gradient of the dynamics cost function

$$\mathcal{C} = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta(t + 2\tau)}^{N_s} \frac{1}{p_\theta(t + 2\tau)(\boldsymbol{a})} \left| \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right| \tag{S31}$$

is

$$\frac{\partial \mathcal{C}}{\partial \theta} = \sum_{\boldsymbol{a}} \left[ \sum_{\boldsymbol{b}} \frac{\partial p_\theta(t + 2\tau)(\boldsymbol{b})}{\partial \theta} \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \operatorname{sign}\left\{ \sum_{\boldsymbol{b}} \left[ p_\theta(t + 2\tau)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) - p_\theta(t)(\boldsymbol{b}) \left(\delta_{\boldsymbol{a}\boldsymbol{b}} + \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \right\}. \tag{S32}$$

We find in our simulations that there is no numerical fixed point (i.e., $p_\theta(t) = p_\theta(t + 2\tau)$) for our Transformer dynamics. This is plausible, as $\sum_{\boldsymbol{b}} p_\theta(t + 2\tau)(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}}$ is not generically zero except when the Transformer reaches the exact steady-state solution (which a moderate-size Transformer generically cannot represent). This means that neither Eq. S31 nor Eq. S32 is going to be zero.

The gradient of the variational cost function

$$\left\| \dot{p}_\theta \right\|_1 = \sum_{\boldsymbol{a}} \left| \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right| = \frac{1}{N_s} \sum_{\boldsymbol{a} \sim p_\theta}^{N_s} \frac{\left| \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right|}{p_\theta(\boldsymbol{a})}, \tag{S33}$$

is

$$\frac{\partial \left\| \dot{p}_\theta \right\|_1}{\partial \theta} = \sum_{\boldsymbol{a}} \left[ \sum_{\boldsymbol{b}} \frac{\partial p_\theta(\boldsymbol{b})}{\partial \theta} \, L_{\boldsymbol{a}\boldsymbol{b}} \right] \operatorname{sign}\left[ \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right]. \tag{S34}$$

It is worth contrasting how these two approaches produce different results. It can easily be observed that the gradients of the two approaches are very different. In addition, the dynamics algorithm only locally matches the Transformer at two different time steps, while the variational algorithm globally searches for the steady state.

Since empirically the variational method is fast but not as accurate, while the dynamics method is accurate but not as fast, we may combine the gradients to take advantage of both approaches. One can consider the interpolated dynamics cost function

$$\mathcal{C}_1 = \lambda \, \mathcal{C} + (1 - \lambda) \left\| \dot{p}_\theta \right\|_1. \tag{S35}$$

During the dynamics process, one can slowly increase $\lambda$ from 0 to 1, switching from the variational algorithm to the dynamics algorithm. This cost function produces an inaccurate intermediate dynamics process, but should produce an accurate steady-state result. In the main paper, we performed dynamics after the variational results, attaining accurate observables while reducing the training cost, which can be viewed as a special case of the above.

Even though the numerical fixed point may not be a local minimum of Eq. S31, as discussed previously, it is still useful to consider what the gradient of the dynamics loss looks like at the numerical fixed point. Substituting $p_\theta(t) = p_\theta(t + 2\tau) = p_\theta$ into Eq. S32, the argument of the sign function reduces to $-2\tau \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) L_{\boldsymbol{a}\boldsymbol{b}}$, so that

$$\frac{\partial \mathcal{C}}{\partial \theta} = \sum_{\boldsymbol{a}} \left[ \sum_{\boldsymbol{b}} \frac{\partial p_\theta(\boldsymbol{b})}{\partial \theta} \left(\delta_{\boldsymbol{a}\boldsymbol{b}} - \tau L_{\boldsymbol{a}\boldsymbol{b}}\right) \right] \operatorname{sign}\left[ -\sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right] = \tau \frac{\partial \left\| \dot{p}_\theta \right\|_1}{\partial \theta} - \sum_{\boldsymbol{a}} \frac{\partial p_\theta(\boldsymbol{a})}{\partial \theta} \, \operatorname{sign}\left[ \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right]. \tag{S36}$$

The first term is the same as the variational gradient, up to a scaling factor. Although a direct optimization using this gradient would not work, since it only applies at the numerical fixed point, one can be inspired by it to formulate a new variational cost function

$$\mathcal{C}_2 = \lambda \left\| \dot{p}_\theta \right\|_1 - (1 - \lambda) \sum_{\boldsymbol{a}} p_\theta(\boldsymbol{a}) \, \operatorname{sign}\left[ \sum_{\boldsymbol{b}} p_\theta(\boldsymbol{b}) \, L_{\boldsymbol{a}\boldsymbol{b}} \right]. \tag{S37}$$

Then, one can choose different $\lambda$ to adjust the effect of the second term. In practice, we observed some improvements using this cost function, but the performance is unstable. This may be related to the fact that the sign function is sensitive to small changes.

Appendix XI. Additional Benchmarks with Classical and Quantum Algorithms

In the main paper, we compared against the results from Ref. Vicentini et al., 2019b and showed that we achieve better results. Here, we additionally benchmark against Fig. 3 in Ref. Nagy and Savona, 2019 (results shown in Fig. S6), Fig. 9 in Ref. Yoshioka et al., 2019 (results shown in Fig. S7), and Fig. 3 in Ref. Liu et al., 2021 (results shown in Fig. S8). Specifically, Ref. Nagy and Savona, 2019 is another stochastic machine learning algorithm, using a restricted Boltzmann machine (RBM) in the standard density-matrix formulation, while Ref. Yoshioka et al., 2019 and Ref. Liu et al., 2021 are recent variational quantum algorithms. Below, we show that our results are significantly better than those of all the algorithms above.

Figure S6: $3 \times 3$ Heisenberg model benchmarked against Ref. Nagy and Savona, 2019. The system is the same as in Fig. 4 in the main paper. The exact curve (blue) is generated using QuTiP Johansson et al. (2013, 2012). The benchmark curve (orange, Ref. Nagy and Savona, 2019) is based on an RBM. Our results (green and red) are the same as in the main paper. The two numbers in the legend specify the number of layers and the hidden dimension $n_d$.

Figure S7: 8-qubit transverse-field Ising model benchmarked against Ref. Yoshioka et al., 2019. The system Hamiltonian is the same as in Fig. 3 in the main paper but with open boundary conditions, with $V = 2$ and $g$ as shown in the figure. The jump operators differ slightly from the main paper in that there are two different jump operators, $\Gamma^{(1)} = \sigma^{(-)}$ and $\Gamma^{(2)} = \sigma^{(z)}$, with corresponding dissipation rates $\gamma^{(1)} = 4$ and $\gamma^{(2)} = 2$. Ref. Yoshioka et al., 2019 uses a slightly different convention, resulting in a difference in $g$ and $\gamma$, which we have verified by matching their curves in our convention. The exact curve (blue) is generated using QuTiP Johansson et al. (2013, 2012) and is superimposed on the figure in Ref. Yoshioka et al., 2019 to check the correctness of the parameters. The benchmark curve (orange) is from Ref. Yoshioka et al., 2019. The two numbers in the legend are the number of layers and the hidden dimension, respectively. We note that while the neural network is not designed to work for a hidden dimension $n_d$ less than 8, the results presented here are still significantly better than Ref. Yoshioka et al., 2019 for $n_d < 8$.

Figure S8: 4-qubit transverse-field Ising model benchmarked against Ref. Liu et al., 2021. The system Hamiltonian and jump operators are the same as in Fig. 3 in the main paper, with $V = 0.3$, $g = 1$, and $\gamma = 0.5$. The classical fidelity is defined as $\left( \sum_{a} \sqrt{p_\theta(a) \, p_{\mathrm{exact}}(a)} \right)^2$, where $p_\theta$ is the neural-network POVM probability distribution and $p_{\mathrm{exact}}$ is the exact POVM probability distribution. The quantum fidelity is defined as $\operatorname{Tr}\left( \sqrt{ \sqrt{\rho_\theta} \, \rho_{\mathrm{exact}} \, \sqrt{\rho_\theta} } \right)^2$, where $\rho_\theta$ is the density matrix converted from $p_\theta$ and $\rho_{\mathrm{exact}}$ is the exact density matrix. The exact results are generated using an exact linear solver. The benchmark line (dashed black) is from Ref. Liu et al., 2021. The two numbers in the legend are the number of layers and the hidden dimension, respectively. We note that while the neural network is not designed to work for a hidden dimension $n_d$ less than 8, the results presented here are still significantly better than Ref. Liu et al., 2021 for $n_d < 8$.